Most systematic discussion of dyad morphemes has focussed on Australian languages, owing to a combination of their relative prevalence there and the development of a descriptive tradition that investigates them in some depth. In the course of researching this paper, however, I became aware of functionally and semantically similar morphemes in many other parts of the world, almost invariably described in isolation from any typological reference point. I have incorporated such data as far as I am aware of it, in the hope that a systematic study will encourage other investigators to identify, and investigate in detail, similar constructions in a range of languages. The current state of our research, however, together with some interesting geographical skewings discussed below (outside Australia, dyad constructions almost exclusively employ reciprocal morphology), means that most of this paper will focus on Australian languages.
This paper is the first to conduct an incentive-compatible experiment using real monetary payoffs to test the hypothesis of probabilistic insurance which states that willingness to pay for insurance decreases sharply in the presence of even small default probabilities as compared to a risk-free insurance contract. In our experiment, 181 participants state their willingness to pay for insurance contracts with different levels of default risk. We find that the willingness to pay sharply decreases with increasing default risk. Our results hence strongly support the hypothesis of probabilistic insurance. Furthermore, we study the impact of customer reaction to default risk on an insurer’s optimal solvency level using our experimentally obtained data on insurance demand. We show that an insurer should choose to be default-free rather than having even a very small default probability. This risk strategy is also optimal when assuming substantial transaction costs for risk management activities undertaken to achieve the maximum solvency level.
Broad, long-term financial and economic datasets are a scarce resource, in particular in the European context. In this paper, we present an approach for an extensible data model, i.e. one adaptable to future changes in technologies and sources, that may constitute a basis for digitized and structured long-term historical datasets. The data model covers specific peculiarities of historical financial and economic data and is flexible enough to accommodate data of different types (quantitative as well as qualitative) from different historical sources, hence achieving extensibility. Furthermore, based on historical German company and stock market data, we discuss a relational implementation of this approach.
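A rough illustration of what such a relational implementation could look like (a minimal sketch only; the table layout and example values are invented here for illustration and are not the paper's actual schema): quantitative and qualitative observations share one table, each row linked to its series and its historical source.

```python
# Hypothetical mini-schema for long-term historical data (illustration only).
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE source      (id INTEGER PRIMARY KEY, citation TEXT, archive TEXT);
CREATE TABLE series      (id INTEGER PRIMARY KEY, entity TEXT, concept TEXT);
CREATE TABLE observation (
    series_id  INTEGER REFERENCES series(id),
    source_id  INTEGER REFERENCES source(id),
    obs_date   TEXT,     -- historical dates may be partial or uncertain
    value_num  REAL,     -- quantitative observations
    value_text TEXT      -- qualitative observations
);
""")
con.execute("INSERT INTO series VALUES (1, 'example company', 'share price')")
con.execute("INSERT INTO source VALUES (1, 'example exchange listing', NULL)")
con.execute("INSERT INTO observation VALUES (1, 1, '1913-06-30', 152.5, NULL)")
print(con.execute("SELECT * FROM observation").fetchall())
```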
We study the behavioral underpinnings of adopting cash versus electronic payments in retail transactions. A novel theoretical and experimental framework is developed to primarily assess the impact of sellers’ service fees and buyers’ rewards from using electronic payments. Buyers and sellers face a coordination problem, independently choosing a payment method before trading. In the experiment, sellers readily adopt electronic payments but buyers do not. Eliminating service fees or introducing rewards significantly boosts the adoption of electronic payments. Hence, buyers’ incentives play a pivotal role in the diffusion of electronic payments but monetary incentives cannot fully explain their adoption choices. Findings from this experiment complement empirical findings based on surveys and field data.
On 14 September 2016, the European Commission proposed a Directive on “copyright in the Digital Single Market”. This proposal includes an Article 11 on the “protection of press publications concerning digital uses”, according to which “Member States shall provide publishers of press publications with the rights provided for in Article 2 and Article 3(2) of Directive 2001/29/EC for the digital use of their press publications.” Relying on the experiences and debates surrounding the German and Spanish laws in this area, this study presents a legal analysis of the proposal for an EU related right for press publishers (RRPP). After a brief overview of the general limits of the EU competence to introduce such a new related right, the study critically examines the purpose of an RRPP. On this basis, the next section distinguishes three versions of an RRPP with regard to its subject-matter and scope, and considers the practical and legal implications of these alternatives, in particular having regard to fundamental rights.
We consider unification of terms under the equational theory of two-sided distributivity D with the axioms x*(y+z) = x*y + x*z and (x+y)*z = x*z + y*z. The main result of this paper is that D-unification is decidable, shown by giving a non-deterministic transformation algorithm. The generated unification problems are: an AC1-problem with linear constant restrictions and a second-order unification problem that can be transformed into a word-unification problem, which can be decided using Makanin's algorithm. This solves an open problem in the field of unification. Furthermore, it is shown that the word problem can be decided in polynomial time, hence D-matching is NP-complete.
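To make the equational theory D concrete, the two axioms can be read as left-to-right rewrite rules that push products over sums. The sketch below (not the paper's transformation algorithm) normalizes ground terms this way:

```python
# Terms are variables/constants (strings) or ('*'|'+', left, right) tuples.
def distribute(t):
    """Exhaustively apply x*(y+z) -> x*y + x*z and (x+y)*z -> x*z + y*z."""
    if isinstance(t, str):
        return t
    op, l, r = t
    l, r = distribute(l), distribute(r)
    if op == '*':
        if isinstance(r, tuple) and r[0] == '+':      # x*(y+z)
            return distribute(('+', ('*', l, r[1]), ('*', l, r[2])))
        if isinstance(l, tuple) and l[0] == '+':      # (x+y)*z
            return distribute(('+', ('*', l[1], r), ('*', l[2], r)))
    return (op, l, r)

# x*(y+z) normalizes to x*y + x*z:
print(distribute(('*', 'x', ('+', 'y', 'z'))))
```

Note that such naive distributed normal forms can grow exponentially, so this illustrates the theory rather than an efficient decision procedure.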
We show how Sestoft’s abstract machine for lazy evaluation of purely functional programs can be extended to evaluate expressions of the calculus CHF – a process calculus that models Concurrent Haskell extended by imperative and implicit futures. The abstract machine is modularly constructed: we first add monadic IO-actions to the machine and then, in a second step, add concurrency. Our main result is that the abstract machine coincides with the original operational semantics of CHF w.r.t. may- and should-convergence.
Spatially dispersed transnational professional communities can be conceived of as cultural formations living in a global frame of reference, transgressing existing political and cultural boundaries. In their capacity as members of local technical and knowledge-based elites, they take part in circulating and connecting cultural meanings that are both locally produced and continuously re-working non-local flows. I argue that those elites can be described as actors at cultural interfaces, taking part in shaping and mediating social change. The aim is twofold: first, to point to mutually opposed tendencies and ambivalences in the framework of a "culture of change", and second, to look into the question of how such situations and groups can be methodologically approached.
We relate time-varying aggregate ambiguity (V-VSTOXX) to individual investor trading. We use the trading records of more than 100,000 individual investors from a large German online brokerage from March 2010 to December 2015. We find that an increase in ambiguity is associated with increased investor activity. It also leads to a reduction in risk-taking which does not reverse over the following days. When ambiguity is high, the effect of sentiment looms larger. Survey evidence reveals that ambiguity averse investors are more prone to ambiguity shocks. Our results are robust to alternative survey-, newspaper- or market-based ambiguity measures.
Motivated by tools for automated deduction on functional programming languages and programs, we propose a formalism to symbolically represent $\alpha$-renamings for meta-expressions. The formalism is an extension of usual higher-order meta-syntax which allows all valid ground instances of a meta-expression to be $\alpha$-renamed to fulfill the distinct variable convention. The renaming mechanism may be helpful for several reasoning tasks in deduction systems. We present our approach for a meta-language which uses higher-order abstract syntax and a meta-notation for recursive let-bindings, contexts, and environments. It is used in the LRSX Tool -- a tool to reason about the correctness of program transformations in higher-order program calculi with respect to their operational semantics. Besides introducing a formalism to represent symbolic $\alpha$-renamings, we present and analyze algorithms for simplification of $\alpha$-renamings, matching, rewriting, and checking $\alpha$-equivalence of symbolically $\alpha$-renamed meta-expressions.
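For intuition about the distinct variable convention that the formalism targets, here is a minimal ground-level sketch (the paper's contribution, performing such renamings symbolically on meta-expressions with contexts and environments, is not reproduced here):

```python
# Ground alpha-renaming: give every binder a globally fresh name so that
# all bound variables are pairwise distinct (the distinct variable convention).
import itertools

fresh = (f"x{i}" for i in itertools.count())

def rename(term, env=None):
    """term is ('var', name) | ('lam', name, body) | ('app', fun, arg)."""
    env = env or {}
    tag = term[0]
    if tag == 'var':
        return ('var', env.get(term[1], term[1]))
    if tag == 'lam':
        new = next(fresh)
        return ('lam', new, rename(term[2], {**env, term[1]: new}))
    return ('app', rename(term[1], env), rename(term[2], env))

# (\x. x (\x. x)) becomes (\x0. x0 (\x1. x1)): binders are now distinct.
print(rename(('lam', 'x', ('app', ('var', 'x'), ('lam', 'x', ('var', 'x'))))))
```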
In this study we investigate which economic ideas were prevalent in the macroprudential discourse post-crisis in order to understand the availability of ideas for reform-minded agents. We base our analysis on new findings in the field of ideational shifts and regulatory science, which posit that change-agents engage with new ideas pragmatically and strategically in their effort to have their economic ideas institutionalized. We argue that in these epistemic battles over new regulation, scientific backing by academia is the key resource determining the outcome. We show that the present reforms implemented internationally follow this pattern. In our analysis we contrast the entire discourse on systemic risk and macroprudential regulation with Borio’s initial 2003 proposal for a macroprudential framework. We find that mostly cross-sectional measures targeted towards increasing the resilience of the financial system, rather than inter-temporal measures dampening the financial cycle, have been implemented. We provide evidence for the lack of support for new macroprudential thinking within academia and argue that this is partially responsible for the lack of anti-cyclical macroprudential regulation. Most worryingly, the financial cycle is largely absent in the academic discourse and is only tacitly assumed rather than fully fleshed out in technocratic discourses, pointing to the possibility that no anti-cyclical measures will be forthcoming.
Algorithmic trading engines versus human traders – do they behave different in securities markets?
(2009)
After exchanges and alternative trading venues introduced electronic execution mechanisms worldwide, the focus of the securities trading industry shifted to the use of fully electronic trading engines by banks, brokers and their institutional customers. These Algorithmic Trading engines enable order submissions without human intervention, based on quantitative models applying historical and real-time market data. Although there is a widespread discussion on the pros and cons of Algorithmic Trading and on its impact on market volatility and market quality, little is known about how algorithms actually place their orders in the market and whether and in which respect this differs from other order submissions. Based on a dataset that – for the first time – includes a specific flag to enable the identification of orders submitted by Algorithmic Trading engines, the paper investigates the extent of Algorithmic Trading activity and specifically their order placement strategies in comparison to human traders in the Xetra trading system. It is shown that Algorithmic Trading has become a relevant part of overall market activity and that Algorithmic Trading engines fundamentally differ from human traders in their order submission, modification and deletion behavior, as they exploit real-time market data and latest market movements.
Projected demographic changes in industrialized and developing countries vary in extent and timing but will reduce the share of the population in working age everywhere. Conventional wisdom suggests that this will increase capital intensity with falling rates of return to capital and increasing wages. This decreases welfare for middle-aged, asset-rich households. This paper takes the perspective of the three demographically oldest European nations — France, Germany and Italy — to address three important adjustment channels to dampen these detrimental effects of aging in these countries: investing abroad, endogenous human capital formation and increasing the retirement age. Our quantitative finding is that endogenous human capital formation in combination with an increase in the retirement age has strong implications for economic aggregates and welfare, in particular in the open economy. These adjustments reduce the maximum welfare losses of demographic change for households alive in 2010 by about 2.2 percentage points in terms of a consumption equivalent variation.
The importance of agile methods has increased in recent years, not only to manage software development processes but also to establish flexible and adaptive organisational structures, which are essential to deal with disruptive changes and build successful digital business strategies. This paper takes an industry-specific perspective by analysing the dissemination, objectives and relative popularity of agile frameworks in the German banking sector. The data provides insights into expectations and experiences associated with agile methods and indicates possible implementation hurdles and success factors. Our research provides the first comprehensive analysis of agile methods in the German banking sector. The comparison with a selected number of fintechs has revealed some differences between banks and fintechs. We found that almost all banks and fintechs apply agile methods in IT-related projects. However, fintechs have relatively more experience with agile methods than banks and use them more intensively. Scrum is the most relevant framework used in practice. Scaled agile frameworks are so far negligible in the German banking sector. Acceleration of projects is apparently the most important objective of deploying agile methods. In addition, agile methods can contribute to cost savings and lead to improved quality and innovation performance, though for banks it is evidently more challenging to reach their respective targets than for fintechs. Overall, our findings suggest that German banks are still in a maturing process of becoming more agile and that there is room for an accelerated adoption of agile methods in general and scaled agile frameworks in particular.
We analyze the macroeconomic implications of increasing the top marginal income tax rate using a dynamic general equilibrium framework with heterogeneous agents and a fiscal structure resembling the actual U.S. tax system. The wealth and income distributions generated by our model replicate the empirical ones. In two policy experiments, we increase the statutory top marginal tax rate from 35 to 70 percent and redistribute the additional tax revenue among households, either by decreasing all other marginal tax rates or by paying out a lump-sum transfer to all households. We find that increasing the top marginal tax rate decreases inequality in both wealth and income but also leads to a contraction of the aggregate economy. This is primarily driven by the negative effects that the tax change has on top income earners. The aggregate gain in welfare is sizable in both experiments mainly due to a higher degree of distributional equality.
We investigate consumption patterns in Europe with supervised machine learning methods and reveal differences in age and wealth impact across countries. Using data from the third wave (2017) of the Eurosystem’s Household Finance and Consumption Survey (HFCS), we assess how age and (liquid) wealth affect the marginal propensity to consume (MPC) in the Netherlands, Germany, France, and Italy. Our regression analysis takes the specification by Christelis et al. (2019) as a starting point. Decision trees are used to suggest alternative variable splits to create categorical variables for customized regression specifications. The results suggest an impact of differing wealth distributions and retirement systems across the studied Eurozone members and are relevant to European policy makers due to joint Eurozone monetary policy and increasing supranational fiscal authority of the EU. The analysis is further substantiated by a supervised machine learning analysis using a random forest and XGBoost algorithm.
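A hedged sketch of the two-stage idea described above, using toy data and hypothetical variable names rather than actual HFCS fields: a shallow regression tree proposes cut points in age and liquid wealth, and the suggested thresholds then define categorical regressors for a customized OLS specification.

```python
# Illustration only: tree-suggested splits feeding a regression specification.
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({"age": rng.integers(20, 80, n),
                   "liquid_wealth": rng.lognormal(10, 1, n)})
df["mpc"] = 0.6 - 0.003 * df["age"] + rng.normal(0, 0.05, n)  # toy MPC target

features = ["age", "liquid_wealth"]
tree = DecisionTreeRegressor(max_depth=2, min_samples_leaf=100)
tree.fit(df[features], df["mpc"])

# Thresholds at the tree's internal nodes are candidate bin edges.
internal = tree.tree_.feature >= 0
for f, thr in zip(tree.tree_.feature[internal], tree.tree_.threshold[internal]):
    print(features[f], "split at", round(float(thr), 1))

# Tree-suggested cut points (hard-coded here for brevity) become dummies:
df["age_bin"] = pd.cut(df["age"], bins=[0, 40, 60, np.inf])
X = pd.get_dummies(df["age_bin"], drop_first=True)   # regressors for OLS
```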
What processes transform (im)mobile individuals into ‘migrants’ and geographic movements across political-territorial borders into ‘migration’? To address this question, the article develops the doing migration approach, which combines perspectives from social constructivism, praxeology and the sociologies of knowledge and culture. ‘Doing migration’ starts with the processes of social attribution that differentiate between ‘migrants’ and ‘non-migrants’. Embedded in institutional, organizational and interactional routines these attributions generate unique social orders of migration. By illustrating these conceptual ideas, the article provides insights into the elements of the contemporary European order of ‘migration’. Its institutional routines contribute to the emergence of a European migration regime that involves narratives of economization, securitization and humanitarization. The organizational routines of the European migration order involve surveillance and diversity management, which have disciplining effects on those defined as ‘migrants’. The routines of everyday face-to-face interactions produce various micro-forms of doing ‘migration’ through stigmatization and othering, but they also provide opportunities to resist a social attribution as ‘migrant’.
Event studies have become increasingly important in securities fraud litigation after the Supreme Court’s decision in Halliburton II. Litigants have used event study methodology, which empirically analyzes the relationship between the disclosure of corporate information and the issuer’s stock price, to provide evidence in the evaluation of key elements of federal securities fraud, including materiality, reliance, causation, and damages. As the use of event studies grows and they increasingly serve a gatekeeping function in determining whether litigation will proceed beyond a preliminary stage, it will be critical for courts to use them correctly.
This Article explores an array of considerations related to the use of event studies in securities fraud litigation. It starts by describing the basic function of the event study, namely to determine whether a highly unusual price movement has occurred, and the traditional statistical approach to making that determination. The Article goes on to identify special features of securities fraud litigation that distinguish litigation from the scholarly context in which event studies were developed. The Article highlights the fact that the standard approach can lead to the wrong conclusion and describes the adjustments necessary to address the litigation context. We use the example of six dates in the Halliburton litigation to illustrate these points.
Finally, the Article highlights the limitations of event studies – what they can and cannot prove – and explains how those limitations relate to the legal issues for which they are introduced. These limitations bear upon important normative questions about the role event studies should play in securities fraud litigation.
Advertising arbitrage
(2014)
Speculators often advertise arbitrage opportunities in order to persuade other investors and thus accelerate the correction of mispricing. We show that in order to minimize the risk and the cost of arbitrage, an investor who identifies several mispriced assets optimally advertises only one of them, and overweights it in his portfolio; a risk-neutral arbitrageur invests only in this asset. The choice of the asset to be advertised depends not only on mispricing but also on its "advertisability" and the accuracy of future news about it. When several arbitrageurs identify the same arbitrage opportunities, their decisions are strategic complements: they invest in the same asset and advertise it. Then, multiple equilibria may arise, some of which are inefficient: arbitrageurs may correct small mispricings while failing to eliminate large ones. Finally, prices react more strongly to the ads of arbitrageurs with a successful track record, and reputation-building induces high-skill arbitrageurs to advertise more than others.
Advertising arbitrage
(2020)
Arbitrageurs with a short investment horizon gain from accelerating price discovery by advertising their private information. However, advertising many assets may overload investors' attention, reducing the number of informed traders per asset and slowing price discovery. So arbitrageurs optimally concentrate advertising on just a few assets, which they overweight in their portfolios. Unlike classic insiders, advertisers prefer assets with the least noise trading. If several arbitrageurs share information about the same assets, inefficient equilibria can arise, where investors' attention is overloaded and substantial mispricing persists. When they do not share, the overloading of investors' attention is maximal.
According to the present state of research, there seems to be no language which shows possessive classifiers and possessive verbs corresponding to English "to have" at the same time. In classifier languages predicative possession is expressed by verbless clauses, i.e. by existential clauses ("there is my possessed item"), equative clauses ("the possessed item is mine" "that is my possessed item") or by locative expressions ("the possessed item is near me"), in which the classifier in the case of non-inherent possession marks the nature of the relationship. While most Melanesian languages, as for instance Fijian, Lenakel, Pala and Tolai are classifier languages, Nguna, a Melanesian language spoken in Vanuatu, only shows traces of the Melanesian possessive classifier system, but, in contrast to the other Melanesian languages, it has a possessive verb, namely 'peani' "to have". In order to show how the Nguna possessive constructions deviate from the common Melanesian type, we shall start with a brief description of the Melanesian possessive constructions in general, and that of Fijian in particular.
We investigate methods and tools for analyzing translations between programming languages with respect to observational semantics. The behavior of programs is observed in terms of may- and must-convergence in arbitrary contexts, and adequacy of translations, i.e., the reflection of program equivalence, is taken to be the fundamental correctness condition. For compositional translations we propose a notion of convergence equivalence as a means for proving adequacy. This technique avoids explicit reasoning about contexts, and is able to deal with the subtle role of typing in implementations of language extensions.
Since 2015, 90 taxa of lichens and 18 lichenicolous fungi have been recorded from Turkey for the first time. A further 707 taxa are new to one or more provinces. In this paper, two species are new to Turkey. A list of 82 published papers is also provided as a supplement to the bibliography of the 2017 checklist of Turkish lichens (John & Türk 2017).
This paper explores the consequences of consumer education for prices and welfare in retail financial markets when some consumers are naive about shrouded add-on prices and firms try to exploit this. Allowing for different information and pricing strategies, we show that education is unlikely to push firms to disclose prices to all consumers, which would be socially efficient. Instead, price discrimination emerges as a new equilibrium. Further, due to a feedback on prices, education that is good for consumers who become sophisticated may be bad for consumers who stay naive and even for the group of all consumers as a whole.
This study examines the role of actual and perceived financial sophistication (i.e., financial literacy and confidence) for individuals' wealth accumulation. Using survey data from the German SAVE initiative, we find strong gender- and education-related differences in the distribution of the two variables and their effects on wealth: financial literacy rises with formal education, whereas confidence increases with education for men but decreases for women, so that women become strongly underconfident with higher education, while men remain overconfident. Regarding wealth accumulation, we show that financial literacy has a positive effect that is stronger for women than for men and that is increasing (decreasing) in education for women (men). Confidence, however, supports only highly-educated men's wealth. When considering different channels for wealth accumulation, we observe that financial literacy is more important for current financial market participation, whereas confidence is more strongly associated with future-oriented financial planning. Overall, we demonstrate that highly-educated men's wealth levels benefit from their overconfidence via all financial decisions considered, but highly-educated women's financial planning suffers from their underconfidence. This may impair their wealth levels in old age.
A number of recent studies have suggested that activist stabilization policy rules responding to inflation and the output gap can attain simultaneously a low and stable rate of inflation as well as a high degree of economic stability. The foremost example of such a strategy is the policy rule proposed by Taylor (1993). In this paper, I demonstrate that the policy settings that would have been suggested by this rule during the 1970s, based on real-time data published by the U.S. Commerce Department, do not greatly differ from actual policy during this period. To the extent macroeconomic outcomes during this period are considered unfavorable, this raises questions regarding the usefulness of this strategy for monetary policy. To the extent the Taylor rule is believed to provide a reasonable guide to monetary policy, this finding raises questions regarding earlier critiques of monetary policy during the 1970s.
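For reference, the rule proposed by Taylor (1993) sets the federal funds rate as a function of recent inflation and the output gap; in its original calibration (a 2 percent inflation target and a 2 percent equilibrium real rate) it reads:

```latex
% i_t: federal funds rate; \pi_t: inflation over the previous four quarters;
% y_t: percent deviation of real output from trend.
i_t = \pi_t + 0.5\,y_t + 0.5\,(\pi_t - 2) + 2
```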
Acquisition of aspect
(2003)
Acquiring foreign firms far away might be hazardous to your share price: evidence from Germany
(2007)
This paper examines shareholder wealth effects of cross-border acquisitions. In a sample of 155 large acquisitions by German corporations from 1985 to 2006, international transactions in total do not lead to significant announcement returns. Geography, however, makes a difference: shareholders of acquiring firms gain 6.5% in cross-border transactions into countries that have a common border with Germany but lose 4.4% in other international transactions. We find proximity to be one of the most important success factors in cross-border mergers and acquisitions, even when we control for firm, deal and country characteristics.
A resampling method based on the bootstrap and a bias-correction step is developed for improving the Value-at-Risk (VaR) forecasting ability of the normal-GARCH model. Compared to the use of more sophisticated GARCH models, the new method is fast, easy to implement, numerically reliable, and, except for having to choose a window length L for the bias-correction step, fully data driven. The results for several different financial asset returns over a long out-of-sample forecasting period, as well as use of simulated data, strongly support use of the new method, and the performance is not sensitive to the choice of L. JEL classification: C22, C53, C63, G12.
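For context, the baseline the method improves upon is the one-step-ahead VaR forecast of a normal-GARCH(1,1) model. The sketch below uses fixed, illustrative parameters and toy data; the paper's actual contribution, the residual bootstrap plus the bias-correction step with window length L, is not reproduced here.

```python
# Baseline normal-GARCH(1,1) one-step VaR (illustrative parameters, toy data).
import numpy as np
from scipy.stats import norm

omega, alpha, beta = 0.05, 0.08, 0.90                 # hypothetical parameters
returns = np.random.default_rng(1).normal(0, 1, 500)  # stand-in return series

sigma2 = np.empty_like(returns)
sigma2[0] = returns.var()
for t in range(1, len(returns)):
    sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]

# One-step-ahead variance forecast and the 1% VaR of the normal model:
sigma2_next = omega + alpha * returns[-1] ** 2 + beta * sigma2[-1]
var_99 = -norm.ppf(0.01) * np.sqrt(sigma2_next)
print(f"1% one-step-ahead VaR: {var_99:.3f}")
```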
This chapter outlines the conditions under which accounting-based smoothing can be beneficial for policyholders who hold with-profit or participating payout life annuities (PLAs). We use a realistically-calibrated model of PLAs to explore how alternative accounting techniques influence policyholder welfare as well as insurer profitability and stability. We find that accounting smoothing of participating life annuities is favorable to consumers and insurers, as it mitigates the impact of short-term volatility and enhances the utility of these long-term annuity contracts.
Accounting for financial stability: Bank disclosure and loss recognition in the financial crisis
(2020)
This paper examines banks’ disclosures and loss recognition in the financial crisis and identifies several core issues for the link between accounting and financial stability. Our analysis suggests that, going into the financial crisis, banks’ disclosures about relevant risk exposures were relatively sparse. Such disclosures came later after major concerns about banks’ exposures had arisen in markets. Similarly, the recognition of loan losses was relatively slow and delayed relative to prevailing market expectations. Among the possible explanations for this evidence, our analysis suggests that banks’ reporting incentives played a key role, which has important implications for bank supervision and the new expected loss model for loan accounting. We also provide evidence that shielding regulatory capital from accounting losses through prudential filters can dampen banks’ incentives for corrective actions. Overall, our analysis reveals several important challenges if accounting and financial reporting are to contribute to financial stability.
This paper investigates what we can learn from the financial crisis about the link between accounting and financial stability. The picture that emerges ten years after the crisis is substantially different from the picture that dominated the accounting debate during and shortly after the crisis. Widespread claims about the role of fair-value (or mark-to-market) accounting in the crisis have been debunked. However, we identify several other core issues for the link between accounting and financial stability. Our analysis suggests that, going into the financial crisis, banks’ disclosures about relevant risk exposures were relatively sparse. Such disclosures came later after major concerns about banks’ exposures had arisen in markets. Similarly, banks delayed the recognition of loan losses. Banks’ incentives seem to drive this evidence, suggesting that reporting discretion and enforcement deserve careful consideration. In addition, bank regulation through its interlinkage with financial accounting may have dampened banks’ incentives for corrective actions. Our analysis illustrates that a number of serious challenges remain if accounting and financial reporting are to contribute to financial stability.
Accounting for financial instruments in the banking industry: conclusions from a simulation model
(2003)
The paper analyses the effects of three sets of accounting rules for financial instruments - Old IAS before IAS 39 became effective, Current IAS or US GAAP, and the Full Fair Value (FFV) model proposed by the Joint Working Group (JWG) - on the financial statements of banks. We develop a simulation model that captures the essential characteristics of a modern universal bank with investment banking and commercial banking activities. We run simulations for different strategies (fully hedged, partially hedged) using historical data from periods with rising and falling interest rates. We show that under Old IAS a fully hedged bank can portray its zero economic earnings in its financial statements. As Old IAS offer much discretion, this bank may also present income that is either positive or negative. We further show that because of the restrictive hedge accounting rules, banks cannot adequately portray their best practice risk management activities under Current IAS or US GAAP. We demonstrate that - contrary to assertions from the banking industry - mandatory FFV accounting adequately reflects the economics of banking activities. Our detailed analysis identifies, in addition, several critical issues of the accounting models that have not been covered in previous literature.
Recent changes in accounting regulation for financial instruments (SFAS 133, IAS 39) have been heavily criticized by representatives from the banking industry. They argue for retaining a historical cost based "mixed model" where accounting for financial instruments depends on their designation to either trading or nontrading activities. In order to demonstrate the impact of different accounting models for financial instruments on the financial statements of banks, we develop a bank simulation model capturing the essential characteristics of a modern universal bank with investment banking and commercial banking activities. In our simulations we look at different scenarios with periods of increasing/decreasing interest rates using historical data and with different banking strategies (fully hedged; partially hedged). The financial statements of our model bank are prepared under different accounting rules ("Old" IAS before implementation of IAS 39; current IAS) with and without hedge accounting as offered by the respective sets of rules. The paper identifies critical issues of applying the different accounting rules for financial instruments to the activities of a universal bank. It demonstrates important shortcomings of the "Old" IAS rules (before IAS 39), and of the current IAS rules. Under the current IAS rules the results of a fully hedged bank may have to show volatility in income statements due to changes in market interest rates. Accounting results of a partially hedged bank in the same scenario may be less affected even though there are economic gains or losses.
Returns to experience for U.S. workers have changed over the post-war period. This paper argues that a simple model goes a long way towards replicating these changes. The model features three well-known ingredients: (i) an aggregate production function with constant skill-biased technical change; (ii) cohort qualities that vary with average years of schooling; and crucially (iii) time-invariant age-efficiency profiles. The model quantitatively accounts for changes in longitudinal and cross-sectional returns to experience, as well as the differential evolution of the college wage premium for young and old workers.
From the late middle ages to early modern times (ca. 1200-1600) the Lübeck City Council was the most important courthouse in the Baltic. About 100 cities and towns on its shores lived according to the law of Lübeck. The paper deals with the old theory that Imperial law, i.e. mainly the learned Ius commune, was generally rejected by the council on the grounds of its foreign nature. The paper rejects this view with the help of 8 case studies. There exist rather spectacular statements against Imperial Law, but a closer look reveals that they have to be seen in the light of a specific practical context. They must not be confounded with general statements in which the council had no interest. Its attitude towards Learned Law was flexible and purely pragmatic.
It is my intention to make two major points in this paper: 1. The first has to do with finding a frame within which the modal expressions of one particular Ancient IE [Indo-European] language – I have chosen Classical Greek – can be best described. I shall try to point out that the regularities which we find in these expressions must depend on an underlying principle, represented by abstract structures. These structures are semanto-syntactic, which means that the semantic properties or bundles of properties are arranged not in a linear order but in a hierarchical order, analogous to a bracketing in a PS structure. The abstract structures we propose have, of course, a very tentative character. They can only be accepted as far as evidence for them can be furnished. 2. My second point has to do with the modal verb forms that were the object of the studies of most Indo-Europeanists. If in the innermost bracket of a semanto-syntactic structure two semantic properties or bundles of properties can be exchanged without any further change in the total structure, and if this change is correlated with a change in verbal mood forms and nothing else, then I think we are faced with a case where these forms can be said to have a meaning of their own. I shall also try to show how these meanings are to be understood as bundles of features rather than as unanalyzed terms. In my final remarks, I shall try to outline the bearing these views have on comparative IE linguistics.
We develop a model that endogenizes the manager's choice of firm risk and of inside debt investment strategy. Our model delivers two predictions. First, managers have an incentive to reduce the correlation between inside debt and company stock in bad times. Second, managers that reduce such a correlation take on more risk in bad times. Using a sample of U.S. public firms, we provide evidence consistent with the model's predictions. Our results suggest that the weaker link between inside debt and company stock in bad times does not translate into a mitigation of debt-equity conflicts.
The long-run consumption risk model provides a theoretically appealing explanation for prominent asset pricing puzzles, but its intricate structure presents a challenge for econometric analysis. This paper proposes a two-step indirect inference approach that disentangles the estimation of the model's macroeconomic dynamics and the investor's preference parameters. A Monte Carlo study explores the feasibility and efficiency of the estimation strategy. We apply the method to recent U.S. data and provide a critical re-assessment of the long-run risk model's ability to reconcile the real economy and financial markets. This two-step indirect inference approach is potentially useful for the econometric analysis of other prominent consumption-based asset pricing models that are equally difficult to estimate.
We model the motives for residents of a country to hold foreign assets, including the precautionary motive that has been omitted from much previous literature as intractable. Our model captures many of the principal insights from the existing specialized literature on the precautionary motive, deriving a convenient formula for the economy’s target value of assets. The target is the level of assets that balances impatience, prudence, risk, intertemporal substitution, and the rate of return. We use the model to shed light on two topical questions: the “upstream” flows of capital from developing countries to advanced countries, and the long-run impact of resorbing global financial imbalances.
We present a tractable model of the effects of nonfinancial risk on intertemporal choice. Our purpose is to provide a simple framework that can be adopted in fields like representative-agent macroeconomics, corporate finance, or political economy, where most modelers have chosen not to incorporate serious nonfinancial risk because available methods were too complex to yield transparent insights. Our model produces an intuitive analytical formula for target assets, and we show how to analyze transition dynamics using a familiar Ramsey-style phase diagram. Despite its starkness, our model captures most of the key implications of nonfinancial risk for intertemporal choice.
A theory of the boundaries of banks with implications for financial integration and regulation
(2015)
We offer a theory of the "boundary of the firm" that is tailored to banking, as it builds on a single inefficiency arising from risk-shifting and as it takes into account both interbank lending as an alternative to integration and the role of possibly insured deposit funding. Among other things, it explains both why deeper economic integration should also cause greater financial integration through both bank mergers and interbank lending, albeit this typically remains inefficiently incomplete, and why economic disintegration (or "desynchronization"), as currently witnessed in the European Union, should cause less interbank exposure. It also suggests that recent policy measures such as the preferential treatment of retail deposits, the extension of deposit insurance, or penalties on "connectedness" could all lead to substantial welfare losses.
The well-known proof of termination of reduction in simply typed calculi is adapted to a monomorphically typed lambda-calculus with case, constructors and recursive data types. The proof differs at several places from the standard proof. Perhaps it is useful and can also be extended to more complex calculi.
A tale of one exchange and two order books: effects of fragmentation in the absence of competition
(2018)
Exchanges nowadays routinely operate multiple, almost identically structured limit order markets for the same security. We study the effects of such fragmentation on market performance using a dynamic model where agents trade strategically across two identically-organized limit order books. We show that fragmented markets, in equilibrium, offer higher welfare to intermediaries at the expense of investors with intrinsic trading motives, and lower liquidity than consolidated markets. Consistent with our theory, we document improvements in liquidity and lower profits for liquidity providers when Euronext, in 2009, consolidated its order flow for stocks traded across two country-specific and identically-organized order books into a single order book. Our results suggest that competition in market design, not fragmentation, drives previously documented improvements in market quality when new trading venues emerge; in the absence of such competition, market fragmentation is harmful.
Did the Federal Reserve’s Quantitative Easing (QE) in the aftermath of the financial crisis have macroeconomic effects? To answer this question, the authors estimate a large-scale DSGE model over the sample from 1998 to 2020, including data on the Fed’s balance sheet. The authors allow for QE to affect the economy via multiple channels that arise from several financial frictions. Their nonlinear Bayesian likelihood approach fully accounts for the zero lower bound on nominal interest rates. They find that between 2009 and 2015, QE increased output by about 1.2 percent. This reflects a net increase in investment of nearly 9 percent, which was accompanied by a 0.7 percent drop in aggregate consumption. Both government bond and capital asset purchases were effective in improving financing conditions. Capital asset purchases in particular significantly facilitated new investment and increased the production capacity. Against the backdrop of a fall in consumption, supply-side effects dominated, which led to a mild disinflationary effect of about 0.25 percent annually.
A stochastic forward-looking model to assess the profitability and solvency of European insurers
(2016)
In this paper, we develop an analytical framework for conducting forward-looking assessments of profitability and solvency of the main euro area insurance sectors. We model the balance sheet of an insurance company encompassing both life and non-life business and we calibrate it using country-level data to make it representative of the major euro area insurance markets. Then, we project this representative balance sheet forward under stochastic capital markets, stochastic mortality developments and stochastic claims. The model highlights the potential threats to insurers' solvency and profitability stemming from a sustained period of low interest rates, particularly in those markets which are largely exposed to reinvestment risks due to the relatively high guarantees and generous profit participation schemes. The model also shows how the resilience of insurers to adverse financial developments heavily depends on the diversification of their business mix. Finally, the model identifies potential negative spillovers between life and non-life business through the redistribution of capital within groups.
In this paper we estimate a small model of the euro area to be used as a laboratory for evaluating the performance of alternative monetary policy strategies. We start with the relationship between output and inflation and investigate the fit of the nominal wage contracting model due to Taylor (1980) and three different versions of the relative real wage contracting model proposed by Buiter and Jewitt (1981) and estimated by Fuhrer and Moore (1995a) for the United States. While Fuhrer and Moore reject the nominal contracting model in favor of the relative contracting model, which induces more inflation persistence, we find that both models fit euro area data reasonably well. When considering France, Germany and Italy separately, however, we find that the nominal contracting model fits German data better, while the relative contracting model does quite well in countries which transitioned out of a high inflation regime such as France and Italy. We close the model by estimating an aggregate demand relationship and investigate the consequences of the different wage contracting specifications for the inflation-output variability tradeoff, when interest rates are set according to Taylor's rule.
A safe core mandate
(2023)
Central banks have vastly expanded their footprint on capital markets. At a time of extraordinary pressure from many sides, a simple benchmark for the scale and scope of their core mandate of price and financial stability may be useful.
We make a case for a narrow mandate to maintain and safeguard the border between safe and quasi-safe assets. This ex-ante definition minimizes ambiguity, discourages risk creation and limits panic runs, primarily by separating market demand for reliable liquidity from risk-intolerant, price-insensitive demand for a safe store of value. The central bank may occasionally be forced to intervene beyond the safe core but should not be bound by any such ex-ante mandate, unless directed to specific goals set by legislation with explicit fiscal support.
We review distinct features of liquidity and safety demand, seeking a definition of the safety border, and discuss LOLR support for borderline safe assets such as MMF or uninsured deposits.
A safe core formulation is close to the historical focus on regulated entities, collateralized lending and attention to the public debt market, but its specific framing offers some context on controversial issues such as the extent of LOLR responsibilities. It also justifies a persistently large scale for central bank liabilities (Greenwood, Hanson and Stein 2016), as safety demand is related to financial wealth rather than GDP. Finally, it is consistent with an active central bank role in supporting liquidity in government debt market trading and clearing (Duffie 2020, 2021).
This paper was presented at the workshop “Goods, Languages, and Cultures along the Silk Road” at Goethe University Frankfurt am Main, October 18 and 19, 2019. While many contributions to the workshop focused on recent developments in China’s current “New Silk Road” politics, on forms of communication, and on contemporary exchange of goods and ideas across so-called Silk Road countries in the Caucasus and Central Asia and with China, this short essay focuses on the history of the so-called Silk Road as an important transport connection. Although what is now called the “Silk Road” was not a pure East-West binary in antiquity but rather developed into a network that also led to the South and North, the focus here will be on describing the East-West connection.
I will start with a few brief remarks on the origins of the connection referred to as the Silk Road and will then introduce the different great empires that shaped this connection between antiquity and the Middle Ages through military campaigns and by using it as a trading route and network. But the Silk Road was by no means only of economic and military importance. Its significance for the exchange and dissemination of religions should also be mentioned. This paper does not detail the importance of the numerous individual religions in the area of the Silk Road but discusses the phenomenon of the spread of religions and the loss of some of their own distinguishing characteristics in this spread, a phenomenon that could be described as a “unity of opposites” (coincidentia oppositorum). Finally, the essay asks who, in the face of the regular replacement of powers, held sovereignty over the transport connection: the subject (in the form of the empires) or the object (in the form of the road).
Who were the main protagonists of and along the Silk Road in the course of history? Who were the people who became the great powers of the ancient Silk Road, building up the material route, governing parts of it, and organizing trade and relationships from the far East to the extreme West of the Eurasian continent?
This study examines the recent literature on the expectations, beliefs and perceptions of investors who incorporate Environmental, Social, Governance (ESG) considerations in investment decisions with the aim of generating superior performance and also making a societal impact. Through the lens of equilibrium models of agents with heterogeneous tastes for ESG investments, green assets are expected to generate lower returns in the long run than their non-ESG counterparts. In the short run, however, ESG investment can outperform non-ESG investment through various channels. Empirically, results on ESG outperformance are mixed. We find consensus in the literature that some investors have ESG preferences and that their actions can generate positive social impact. The shift towards more sustainable policies in firms is motivated by the increased market values and the lower cost of capital of green firms driven by investors’ choices.
This paper analyzes how on-the-job search (OJS) by an agent impacts the moral hazard problem in a repeated principal-agent relationship. OJS is found to constitute a source of agency costs because efficient search incentives require that the agent receives all gains from trade. Further, the optimal incentive contract with OJS matches the design of empirically observed compensation contracts more accurately than models that ignore OJS. In particular, the optimal contract entails excessive performance pay plus efficiency wages. Efficiency wages reduce the opportunity costs of work effort and hence serve as a complement to bonuses. Thus, the model offers a novel explanation for the use of efficiency wages. When allowing for renegotiation, the model generates wage and turnover dynamics that are consistent with empirical evidence. I argue that the model contributes to explaining the concomitant rise in the use of performance pay and in competition for high-skill workers during the last three decades.
High-frequency changes in interest rates around FOMC announcements are an important tool for identifying the effects of monetary policy on asset prices and the macroeconomy. However, some recent studies have questioned both the exogeneity and the relevance of these monetary policy surprises as instruments, especially for estimating the macroeconomic effects of monetary policy shocks. For example, monetary policy surprises are correlated with macroeconomic and financial data that is publicly available prior to the FOMC announcement. The authors address these concerns in two ways: First, they expand the set of monetary policy announcements to include speeches by the Fed Chair, which essentially doubles the number and importance of announcements in their dataset. Second, they explain the predictability of the monetary policy surprises in terms of the “Fed response to news” channel of Bauer and Swanson (2021) and account for it by orthogonalizing the surprises with respect to macroeconomic and financial data. Their subsequent reassessment of the effects of monetary policy yields two key results: First, estimates of the high-frequency effects on financial markets are largely unchanged. Second, estimates of the macroeconomic effects of monetary policy are substantially larger and more significant than what most previous empirical studies have found.
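The orthogonalization step described above can be pictured as a simple projection; the sketch below uses toy data and hypothetical predictors, not the authors' exact specification. The raw surprises are regressed on information available before each announcement, and the residuals serve as the cleaned instrument.

```python
# Orthogonalizing monetary policy surprises against pre-announcement data.
import numpy as np

rng = np.random.default_rng(2)
n = 300
X = rng.normal(size=(n, 3))   # stand-in pre-announcement macro/financial data
surprise = X @ np.array([0.2, -0.1, 0.05]) + rng.normal(0, 1, n)

X1 = np.column_stack([np.ones(n), X])            # add an intercept
beta_hat, *_ = np.linalg.lstsq(X1, surprise, rcond=None)
orthogonal_surprise = surprise - X1 @ beta_hat   # residual = cleaned instrument

# By construction the residual is (sample-)uncorrelated with the predictors:
print(np.round(X.T @ orthogonal_surprise / n, 6))
```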
This paper examines the interaction of G7 real exchange rates with real output and interest rate differentials. Using cointegration methods, we generally find a link between the real exchange rate and the real interest differential. This finding contrasts with the majority of the extant research on the real exchange rate - real interest rate link. We identify a new measure of the equilibrium exchange rate in terms of the permanent component of the real exchange rate that is consistent with the dynamic equilibrium given by the cointegration relation. Furthermore, the presence of cointegration also allows us to identify real, nominal and transitory disturbances with only minimal identifying restrictions. Our findings suggest that persistent deviations of real exchange rates from their equilibrium value can have feedback effects on the underlying fundamentals, hence altering the equilibrium exchange rate itself. This has important implications for the persistence measures of real exchange rates that are reported elsewhere in the literature.
A question of Mesorah?
(2009)
In the upcoming Krias Hatorah in Parshat Shoftim and Parshat Ki Savo there are a number of instances where the meaning of a phrase changes completely based on the pronunciation of a single word – םד – with either a Komatz or a Patah. Until recently, most Chumashim and Tikunim generally followed the famous Yaakov Ben Hayyim 1525 edition of Mikraot Gedolot, published in Venice, which printed a seemingly inconsistent pattern in the pronunciation of the different occurrences of this word.
Using a novel dataset, we develop a structural model of the Very Large Crude Carrier (VLCC) market between the Arabian Gulf and the Far East. We study how fluctuations in oil tanker rates, oil exports, shipowner profits, and bunker fuel prices are determined by shocks to the supply and demand for oil tankers, to the utilization of tankers, and to the cost of operating tankers, including bunker fuel costs. Our analysis shows that time charter rates are largely unresponsive to tanker cost shocks. In response to higher costs, voyage profits decline, as cost shocks are only partially passed on to round-trip voyage rates. Oil exports from the Arabian Gulf also decline, reflecting lower demand for VLCCs. Positive utilization shocks are associated with higher profits, a slight increase in time charter rates, and lower fuel prices and oil export volumes. Tanker supply and tanker demand shocks have persistent effects on time charter rates, round-trip voyage rates, the volume of oil exports, fuel prices, and profits with the expected sign.
The nineteenth century in Britain saw tumultuous changes that reshaped the fabric of society and altered the course of modernization. It also saw the rise of the novel to the height of its cultural power as the most important literary form of the period. This paper reports on a long-term experiment in tracing such macroscopic changes in the novel during this crucial period. Specifically, we present findings on two interrelated transformations in novelistic language that reveal a systemic concretization in language and fundamental change in the social spaces of the novel. We show how these shifts have consequences for setting, characterization, and narration as well as implications for the responsiveness of the novel to the dramatic changes in British society.
This paper also has a second strand: the project was simultaneously an experiment in developing quantitative and computational methods for tracing changes in literary language. We wanted to see how far quantifiable features such as word usage could be pushed toward the investigation of literary history. Could we leverage quantitative methods in ways that respect the nuance and complexity we value in the humanities? To this end, we present a second set of results: the techniques and methodological lessons gained in the course of designing and running this project.
Under a conventional policy rule, a central bank adjusts its policy rate linearly according to the gap between inflation and its target, and the gap between output and its potential. Under “the opportunistic approach to disinflation” a central bank controls inflation aggressively when inflation is far from its target, but concentrates more on output stabilization when inflation is close to its target, allowing supply shocks and unforeseen fluctuations in aggregate demand to move inflation within a certain band. We use stochastic simulations of a small-scale rational expectations model to contrast the behavior of output and inflation under opportunistic and linear rules. JEL Classification: E31, E52, E58, E61.
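A stylized sketch of the two rule types contrasted above; the coefficients and the width of the inflation band are illustrative placeholders, not the paper's calibration:

    def linear_rule(r_star, pi, pi_target, ygap, a_pi=0.5, a_y=0.5):
        # Conventional rule: respond linearly to both gaps.
        return r_star + pi + a_pi * (pi - pi_target) + a_y * ygap

    def opportunistic_rule(r_star, pi, pi_target, ygap, band=1.0, a_pi=1.0, a_y=0.5):
        # Opportunistic rule: inside the band, concentrate on output
        # stabilization; respond only to the part of the inflation gap
        # that exceeds the band.
        gap = pi - pi_target
        excess = max(abs(gap) - band, 0.0)
        return r_star + pi + a_pi * excess * (1.0 if gap >= 0 else -1.0) + a_y * ygap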
We raise some critical points against a naïve interpretation of “green finance” products and strategies. These critical insights are the background against which we take a closer look at instruments and policies that might allow green finance to become more impactful. In particular, we focus on the role of a taxonomy and investor activism. We also describe the interaction of government policies with green finance practice – an aspect that has been mostly neglected in policy debates but needs to be taken into account. Finally, the special case of green government bonds is discussed.
We create an alternative version of the present utility value formula to show explicitly that every store of value in the economy bears utility-interest (non-pecuniary income) for its holder, regardless of possible interest earnings from financial markets. In addition, we generalize the well-known welfare measures of consumer and producer surplus as present value concepts and apply them not only to the production and usage of consumer goods and durables but also to money and other financial assets. This helps us, inter alia, to formalize the circumstances under which even a producer of legal tender might become insolvent. We also develop a new measure of seigniorage and demonstrate why the well-established concept of monetary seigniorage is flawed. Our framework also allows us to formulate the conditions under which liability-issued money such as inside money, and financial instruments such as debt certificates, become – somewhat paradoxically – net wealth of the society.
[I]n its present form, the bibliography contains approximately 1100 entries. Bibliographical work is never complete, and the present one is still modest in a number of respects. It is not annotated, and it still contains a lot of mistakes and inconsistencies. It has nevertheless reached a stage which justifies considering the possibility of making it available to the public. The first step towards this is its pre-publication in the form of this working paper. […]
The bibliography is less complete for earlier years. For works before 1970, which have not been included here, the bibliographies of Firbas and Golkova (1975) and Tyl (1970) may be consulted.
The Russian war of aggression against Ukraine since 24 February 2022 has intensified the discussion of Europe’s reliance on energy imports from Russia. A ban on Russian imports of oil, natural gas and coal has already been imposed by the United States, while the United Kingdom plans to cease imports of oil and coal from Russia by the end of 2022. The German Federal Government is currently opposing an energy embargo against Russia. However, the Federal Ministry for Economic Affairs and Climate Action is working on a strategy to reduce energy imports from Russia. In this paper, the authors give an overview of the German and European reliance on energy imports from Russia with a focus on gas imports and discuss price effects, alternative suppliers of natural gas, and the potential for saving and replacing natural gas. They also provide an overview of estimates of the consequences for the economic outlook should the conflict intensify.
In this paper we consider the dynamics of spot and futures prices in the presence of arbitrage. We propose a partially linear error correction model in which the adjustment coefficient is allowed to depend non-linearly on the lagged price difference. We estimate our model using data on the DAX index and the DAX futures contract. We find that the adjustment is indeed nonlinear and that the linear alternative is rejected. The speed of price adjustment increases almost monotonically with the magnitude of the price difference.
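A crude illustration of the specification, assuming the nonlinear adjustment function can be approximated within quantile bins of the lagged spot-futures difference; the paper's partially linear estimator is more refined than this binned regression:

    import numpy as np

    def binned_adjustment(delta_spot, lagged_basis, n_bins=10):
        # Estimate an adjustment coefficient separately within quantile
        # bins of the lagged basis; a nonlinear adjustment shows up as
        # coefficients that grow with the magnitude of the basis.
        edges = np.quantile(lagged_basis, np.linspace(0.0, 1.0, n_bins + 1))
        coefs = []
        for lo, hi in zip(edges[:-1], edges[1:]):
            mask = (lagged_basis >= lo) & (lagged_basis <= hi)
            x, y = lagged_basis[mask], delta_spot[mask]
            coefs.append(float(x @ y) / float(x @ x))   # no-intercept OLS slope per bin
        return edges, np.array(coefs)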
A partial rehabilitation of side-effecting I/O: non-determinism in non-strict functional languages
(1996)
We investigate the extension of non-strict functional languages like Haskell or Clean by a non-deterministic interaction with the external world. Using call-by-need and a natural semantics that describes the reduction of graphs, this can be done such that the Church-Rosser Theorems 1 and 2 hold. Our operational semantics provides a basis for recognising which particular equivalences are preserved by program transformations. The amount of sequentialisation may be smaller than that enforced by other approaches, and the programming style is closer to the common style of side-effecting programming. However, not all program transformations used by an optimising compiler for Haskell remain correct in all contexts. Our result can be interpreted as a possibility to extend the current I/O mechanism by non-deterministic memoryless function calls. For example, this permits a call to a random number generator. Adding memoryless function calls to monadic I/O is possible and has the potential to extend the Haskell I/O system.
We build a novel leading indicator (LI) for EU industrial production (IP). Unlike previous studies, the technique developed in this paper produces an ex-ante LI that is immune to “overlapping information drawbacks”. In addition, the set of variables composing the LI relies on a dynamic and systematic criterion, which ensures that the choice of variables is not driven by subjective views. Our LI anticipates swings (including the 2007-2008 crisis) in EU industrial production by 2 to 3 months on average. The predictive power improves if the indicator is revised every five or ten years. In a forward-looking framework, via a general-to-specific procedure, we also show that our LI is the most informative variable for forming expectations about EU IP growth.
Riley's (1979) reactive equilibrium concept addresses problems of equilibrium existence in competitive markets with adverse selection. The game-theoretic interpretation of the reactive equilibrium concept in Engers and Fernandez (1987) yields the Rothschild-Stiglitz (1976)/Riley (1979) allocation as an equilibrium allocation; however, multiple equilibria emerge. In this note we embed the reactive equilibrium's logic in a dynamic market context with active consumers. We show that the Riley/Rothschild-Stiglitz contracts constitute the unique equilibrium allocation in any pure strategy subgame perfect Nash equilibrium.
This note argues that in a situation of inelastic natural gas supply, a restrictive monetary policy in the euro zone could reduce the energy bill and would therefore have additional merits. A more hawkish monetary policy may be able to indirectly exert monopsony power on the gas market. The welfare benefits of such a policy are diluted to the extent that some of the supply (approximately 10 percent) comes from within the euro zone, which may give rise to distributional concerns.
The Box-Cox quantile regression model using the two-stage method introduced by Chamberlain (1994) and Buchinsky (1995) provides an attractive extension of linear quantile regression techniques. However, a major numerical problem, which has so far not been addressed in the literature, arises when implementing this method. We suggest a simple solution that modifies the estimator slightly. This modification is easy to implement. The modified estimator is still √n-consistent and its asymptotic distribution can easily be derived. A simulation study confirms that the modified estimator works well.
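A hedged sketch of the estimator's ingredients, with a grid search over the Box-Cox parameter; the truncation inside inv_box_cox is a crude stand-in for the numerical problem (lambda * q + 1 can turn non-positive in finite samples), which the proposed modification is meant to handle properly:

    import numpy as np
    import statsmodels.api as sm
    from statsmodels.regression.quantile_regression import QuantReg

    def box_cox(y, lam):
        return np.log(y) if lam == 0 else (y ** lam - 1.0) / lam

    def inv_box_cox(q, lam):
        # lam * q + 1 must stay positive for the back-transformation; it is
        # truncated here, exactly the kind of ad hoc fix the modified
        # estimator avoids.
        return np.exp(q) if lam == 0 else np.maximum(lam * q + 1.0, 1e-12) ** (1.0 / lam)

    def check_loss(u, tau):
        return np.mean(u * (tau - (u < 0)))

    def fit_box_cox_quantreg(y, X, tau=0.5, lambdas=np.linspace(0.1, 1.0, 10)):
        X1 = sm.add_constant(X)
        best = None
        for lam in lambdas:
            res = QuantReg(box_cox(y, lam), X1).fit(q=tau)
            pred = inv_box_cox(X1 @ res.params, lam)   # back-transform to the scale of y
            loss = check_loss(y - pred, tau)
            if best is None or loss < best[0]:
                best = (loss, lam, res)
        return best   # (loss, lambda, fitted quantile regression)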
We extend the important idea of range-based volatility estimation to the multivariate case. In particular, we propose a range-based covariance estimator that is motivated by financial economic considerations (the absence of arbitrage), in addition to statistical considerations. We show that, unlike other univariate and multivariate volatility estimators, the range-based estimator is highly efficient yet robust to market microstructure noise arising from bid-ask bounce and asynchronous trading. Finally, we provide an empirical example illustrating the value of the high-frequency sample path information contained in the range-based estimates in a multivariate GARCH framework.
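A short sketch of the construction, assuming Parkinson-style range-based variances combined through the polarization identity, with the sum series obtained from a no-arbitrage cross rate; inputs would be daily highs and lows of log prices:

    import numpy as np

    def parkinson_var(high, low):
        # Range-based variance estimator: E[(high - low)^2] / (4 ln 2), Parkinson (1980).
        return np.mean((high - low) ** 2) / (4.0 * np.log(2.0))

    def range_based_cov(high_x, low_x, high_y, low_y, high_xy, low_xy):
        # Polarization identity: Cov(x, y) = [Var(x + y) - Var(x) - Var(y)] / 2,
        # where each variance is estimated from ranges.
        return 0.5 * (parkinson_var(high_xy, low_xy)
                      - parkinson_var(high_x, low_x)
                      - parkinson_var(high_y, low_y))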
We develop a utility-based model of fluctuations with nominal rigidities and unemployment. In doing so, we combine two strands of research: the New Keynesian model, with its focus on nominal rigidities, and the Diamond-Mortensen-Pissarides model, with its focus on labor market frictions and unemployment. In developing this model, we proceed in two steps. We first leave nominal rigidities aside. We show that, under a standard utility specification, productivity shocks have no effect on unemployment in the constrained efficient allocation. We then focus on the implications of alternative real wage setting mechanisms for fluctuations in unemployment. Next, we introduce nominal rigidities in the form of staggered price setting by firms. We derive the relation between inflation and unemployment and discuss how it is influenced by the presence of real wage rigidities. We show the nature of the tradeoff between inflation and unemployment stabilization, and we draw the implications for optimal monetary policy. JEL Classification: E32, E50.
A new governance architecture for European financial markets? Towards a European supervision of CCPs
(2018)
Does the new European outlook on financial markets, as voiced by the EU Commission since the beginning of the Capital Markets Union, imply a movement of the EU towards an alignment of market integration and direct supervision of common rules? This paper sets out to answer this question for the case of common supervision of Central Counterparties (CCPs) in the European Union. These entities gained crucial importance post-crisis due to new regulation that requires the mandatory clearing of standardized derivative contracts, transforming clearing houses into central nodes for cross-border financial transactions. While the EU-wide regulatory framework EMIR, enacted in 2012, stipulates common regulatory requirements, the framework still relies on home-country supervision of those rules, arguably leading to regulatory as well as supervisory arbitrage. The regulatory reform to stabilize the OTC derivatives market thus replicated at its center a governance flaw that had been identified as one of the major causes of the gravity of the financial crisis in the EU: the coupling of intense competition based on private risk management systems with national supervision of European rules. This paper traces the history of this problem awareness and inquires which factors account for the fact that only in 2017 did serious negotiations ensue at the EU level envisioning a common supervision of CCPs to fix the flawed system of governance. Analyzing this shift in the European governance architecture, we argue that Brexit has opened a window of opportunity for a centralization of supervision of CCPs. Brexit aligns the urgency of the problem with the material interests of crucial political stakeholders, in particular Germany and France, creating the possibility of a grand European bargain.
One of the dangers of the harmonisation and unification processes taking place within the framework of the EU is that they may result in the codification of the lowest common denominator. This is precisely what is threatening to happen in respect of assignment. Referring the transfer of receivables by way of assignment to the law of the assignor’s residence, as article 13 of the Proposal does, would be opting for the most conservative solution and would for many Member States be a step backward rather than forward. A conflict rule referring assignment to the law of the assignor's residence is too rigid to do justice to the dynamic nature of assignments in cross-border transactions, and it is unjustly one-sided. It offers no real advantages when compared to other conflict rules; it even has serious disadvantages which make it unsuitable for efficient assignment-based cross-border transactions. It is not inconceivable that this conflict rule would even be contrary to the fundamental freedoms of the EC Treaty. The Community legislators in particular should be careful not to needlessly adopt rules which create insurmountable obstacles for cross-border business where a choice of law by the parties would do perfectly well. The Community legislature has a special responsibility to create a smooth legal environment for single market transactions.
In the aftermath of the global financial crisis, the state of macroeconomic modeling and the use of macroeconomic models in policy analysis have come under heavy criticism. Macroeconomists in academia and policy institutions have been blamed for relying too much on a particular class of macroeconomic models. This paper proposes a comparative approach to macroeconomic policy analysis that is open to competing modeling paradigms. Macroeconomic model comparison projects have helped produce some very influential insights, such as the Taylor rule. However, they have been infrequent and costly, because they require the input of many teams of researchers and multiple meetings to obtain a limited set of comparative findings. This paper provides a new approach that enables individual researchers to conduct model comparisons easily, frequently, at low cost and on a large scale. Using this approach, a model archive is built that includes many well-known empirically estimated models that may be used for quantitative analysis of monetary and fiscal stabilization policies. A computational platform is created that allows straightforward comparisons of models’ implications. Its application is illustrated by comparing different monetary and fiscal policies across selected models. Researchers can easily include new models in the database and compare the effects of novel extensions to established benchmarks, thereby fostering a comparative instead of insular approach to model development.
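A purely generic sketch of the kind of comparison such a platform enables: apply the same one-off shock to several models behind a common interface and collect the impulse responses. The two toy AR(1) "models" below are placeholders, not models from the archive:

    import numpy as np

    def irf(model, shock=1.0, horizon=12):
        # Trace out the response of a one-variable model to a one-off shock.
        x, path = 0.0, []
        for t in range(horizon):
            x = model(x, shock if t == 0 else 0.0)
            path.append(x)
        return np.array(path)

    models = {
        "model_A": lambda x, e: 0.8 * x + e,   # toy persistent model
        "model_B": lambda x, e: 0.5 * x + e,   # toy less persistent model
    }
    responses = {name: irf(m) for name, m in models.items()}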
Central banks have faced a succession of crises over the past years, as well as a number of structural factors such as the transition to a greener economy, demographic developments, digitalisation and possibly increased onshoring. These suggest that the future inflation environment will be different from the one we know. Thus, uncertainty about important macroeconomic variables and, in particular, inflation dynamics will likely remain high.
This paper reviews social network analysis (SNA) as a method for biographical research, which is a novel contribution. We argue that applying SNA in biographical research, through standardized data collection as well as visualization of networks, can open up participants’ interpretations of relations throughout their lives and allow a creative and innovative way of data collection that is responsive to participants’ own meanings and associations while enabling systematic data analysis. The paper discusses the analytical potential of SNA in biographical research and critically assesses the efficacy and limitations of the method in this context.
We present an empirical study focusing on the estimation of a fundamental multi-factor model for a universe of European stocks. Following the approach of the BARRA model, we have adopted a cross-sectional methodology. The proportion of explained variance ranges from 7.3% to 66.3% in the weekly regressions, with a mean of 32.9%. For the individual factors we report the percentage of weeks in which they had a statistically significant influence on stock returns. The best explanatory power – apart from the dominant country factors – was found for the statistical constructs “success” and “variability in markets”.
We focus on the role of social media as a high-frequency, unfiltered mass information transmission channel and how its use for government communication affects the aggregate stock markets. To measure this effect, we concentrate on one of the most prominent Twitter users, the 45th President of the United States, Donald J. Trump. We analyze around 1,400 of his tweets related to the US economy and classify them by topic and textual sentiment using machine learning algorithms. We investigate whether the tweets contain relevant information for financial markets, i.e. whether they affect market returns, volatility, and trading volumes. Using high-frequency data, we find that Trump’s tweets are most often a reaction to pre-existing market trends and therefore do not provide material new information that would influence prices or trading. We show that past market information can help predict Trump’s decision to tweet about the economy.
This paper solves a dynamic model of households' mortgage decisions incorporating labor income, house price, inflation, and interest rate risk. It uses a zero-profit condition for mortgage lenders to solve for equilibrium mortgage rates given borrower characteristics and optimal decisions. The model quantifies the effects of adjustable vs. fixed mortgage rates, loan-to-value ratios, and mortgage affordability measures on mortgage premia and default. Heterogeneity in borrowers' labor income risk is important for explaining the higher default rates on adjustable-rate mortgages during the recent US housing downturn, and the variation in mortgage premia with the level of interest rates.
The Inuit inhabit a vast area of land that is, from a European point of view, most inhospitable, stretching from the northeastern tip of Asia to the east coast of Greenland. Inuit peoples have never been numerous, their settlements being scattered over enormous distances. Nevertheless, from an ethnological point of view, all Inuit peoples shared a distinct culture, featuring sea mammal and caribou hunting, sophisticated survival skills, and technical and social devices, including the sharing of essential goods and strategies for minimizing and controlling aggression.
On average, "young" people underestimate whereas "old" people overestimate their chances to survive into the future. We adopt a Bayesian learning model of ambiguous survival beliefs which replicates these patterns. The model is embedded within a non-expected utility model of life-cycle consumption and saving. Our analysis shows that agents with ambiguous survival beliefs (i) save less than originally planned, (ii) exhibit undersaving at younger ages, and (iii) hold larger amounts of assets in old age than their rational expectations counterparts who correctly assess their survival probabilities. Our ambiguity-driven model therefore simultaneously accounts for three important empirical findings on household saving behavior.
Based on a cognitive notion of neo-additive capacities reflecting likelihood insensitivity with respect to survival chances, we construct a Choquet Bayesian learning model over the life-cycle that generates a motivational notion of neo-additive survival beliefs expressing ambiguity attitudes. We embed these neo-additive survival beliefs as decision weights in a Choquet expected utility life-cycle consumption model and calibrate it with data on subjective survival beliefs from the Health and Retirement Study. Our quantitative analysis shows that agents with calibrated neo-additive survival beliefs (i) save less than originally planned, (ii) exhibit undersaving at younger ages, and (iii) hold larger amounts of assets in old age than their rational expectations counterparts who correctly assess their survival chances. Our neo-additive life-cycle model can therefore simultaneously accommodate three important empirical findings on household saving behavior.
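An illustrative reading of the resulting decision weights, w(p) = delta * lam + (1 - delta) * p for interior survival probabilities p, where delta captures likelihood insensitivity and lam optimism; the parameter values are placeholders, not the calibration from the Health and Retirement Study:

    def neo_additive_weight(p, delta=0.3, lam=0.6):
        # Certainty (p = 0 or 1) is weighted correctly; interior probabilities
        # are compressed toward the anchor delta * lam, so high survival chances
        # ("young") are under-weighted and low ones ("old") over-weighted.
        if p in (0.0, 1.0):
            return p
        return delta * lam + (1.0 - delta) * p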
We consider an imperfectly competitive loan market in which a local relationship lender has an information advantage vis-à-vis distant transaction lenders. Competitive pressure from the transaction lenders prevents the local lender from extracting the full surplus from projects, so that she inefficiently rejects marginally profitable projects. Collateral mitigates the inefficiency by increasing the local lender’s payoff from precisely those marginal projects that she inefficiently rejects. The model predicts that, controlling for observable borrower risk, collateralized loans are more likely to default ex post, which is consistent with the empirical evidence. The model also predicts that borrowers for whom local lenders have a relatively smaller information advantage face higher collateral requirements, and that technological innovations that narrow the information advantage of local lenders, such as small business credit scoring, lead to a greater use of collateral in lending relationships. JEL classification: D82; G21. Keywords: collateral; soft information; loan market competition; relationship lending.
As part of the Next Generation EU (NGEU) program, the European Commission has pledged to issue up to EUR 250 billion of the NGEU bonds as green bonds, in order to confirm its commitment to sustainable finance and to support the transition towards a greener Europe. Thereby, the EU is not only entering the green bond market but is also set to become one of the biggest green bond issuers. Consequently, financial market participants are eager to know what to expect from the EU as a new green bond issuer and whether a negative green bond premium, a so-called Greenium, can be expected for the NGEU green bonds. This research paper formulates an expectation with regard to a potential Greenium for the NGEU green bonds by conducting interviews with 15 sustainable finance experts and analyzing the public green bond market from September 2014 until June 2021 with respect to a potential green bond premium and its underlying drivers. The regression results confirm the existence of a significant Greenium (-0.7 bps) in the public green bond market and show that the Greenium increases for supranational issuers with AAA rating, such as the EU. Moreover, the green bond premium is influenced by issuer sector and credit rating, but issue size and modified duration have no significant effect. Overall, the evaluated expert interviews and regression analysis lead to an expected Greenium for the NGEU green bonds of up to -4 bps, with the potential to increase further in the secondary market.
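A minimal sketch of a Greenium regression on matched green/conventional bond pairs, with synthetic data standing in for the paper's sample; all column names and magnitudes are illustrative assumptions:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 300
    pairs = pd.DataFrame({
        "greenium_bps": rng.normal(-0.7, 2.0, n),   # green-minus-conventional yield spread (synthetic)
        "supranational": rng.integers(0, 2, n),
        "rating_aaa": rng.integers(0, 2, n),
        "log_issue_size": rng.normal(21.0, 1.0, n),
        "mod_duration": rng.uniform(1.0, 15.0, n),
    })
    fit = smf.ols("greenium_bps ~ supranational + rating_aaa + log_issue_size + mod_duration",
                  data=pairs).fit()
    print(fit.params)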
We examine how U.S. monetary policy affects the international activities of U.S. banks. We access a rarely studied U.S. bank-level dataset to assess at a quarterly frequency how changes in the U.S. Federal funds rate (before the crisis) and quantitative easing (after the onset of the crisis) affect changes in cross-border claims by U.S. banks across countries, maturities and sectors, as well as changes in claims by their foreign affiliates. We find robust evidence consistent with the existence of a potent global bank lending channel. In response to changes in U.S. monetary conditions, U.S. banks strongly adjust their cross-border claims in both the pre- and post-crisis periods. However, we also find that U.S. bank affiliate claims respond mainly to host country monetary conditions.
Futures markets are a potentially valuable source of information about market expectations. Exploiting this information has proved difficult in practice, because the presence of a time-varying risk premium often renders the futures price a poor measure of the market expectation of the price of the underlying asset. Even though the expectation in principle may be recovered by adjusting the futures price by the estimated risk premium, a common problem in applied work is that there are as many measures of market expectations as there are estimates of the risk premium. We propose a general solution to this problem that allows us to uniquely pin down the best possible estimate of the market expectation for any set of risk premium estimates. We illustrate this approach by solving the long-standing problem of how to recover the market expectation of the price of crude oil. We provide a new measure of oil price expectations that is considerably more accurate than the alternatives and more economically plausible. We discuss implications of our analysis for the estimation of economic models of energy-intensive durables, for the debate on speculation in oil markets, and for oil price forecasting.
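One plausible reading of the selection step, sketched under the assumption that candidate expectations F_t - rp_t are scored by their accuracy against realized prices; whether this matches the paper's exact criterion is an assumption of this illustration:

    import numpy as np

    def best_expectation(futures, realized, premium_candidates):
        # premium_candidates: dict mapping a label to an array of
        # risk-premium estimates aligned with the futures series.
        scores = {name: np.mean((futures - rp - realized) ** 2)
                  for name, rp in premium_candidates.items()}
        best = min(scores, key=scores.get)
        return best, futures - premium_candidates[best]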
The human mind may produce prototypization within virtually any realm of cognition and behavior. A "comparative prototype-typology" might prove to be an interesting field of study – perhaps a new subfield of semiotics. This, however, would presuppose a clear view of the similarities and differences of prototypization in these various fields. It seems realistic for the time being that the linguist first confine himself to describing prototypization within the realm of language proper. The literature on prototypes has grown steadily in the past ten years or so. I confine myself to mentioning the volume on Noun Classes and Categorization, edited by C. Craig (1986), which contains a wealth of factual information on the subject, along with some theoretical vistas. By and large, however, linguistic prototype research is still basically in a taxonomic stage – which, of course, represents the precondition for moving beyond it. The procedure is largely per ostensionem, and by accumulating examples of prototypes. We still lack a comprehensive prototype theory. The following pages are intended not to provide such a theory, but to take the first steps in this direction. Section 2 features some elements of a functional theory of prototypes, developed by this author within the frame of the UNITYP model of research on language universals and typology. Section 3 discusses prototypization with regard to selected phenomena from a wide range of levels of analysis: phonology, morphosyntax, speech acts, and the lexicon. Prototypization will finally be studied within one of the universal dimensions, that of APPREHENSION – the linguistic representation of the concepts of objects – as proposed by Seiler (1986).