Context unification is a variant of second-order unification. It can also be seen as a generalization of string unification to tree unification. It is currently not known whether context unification is decidable. A specialization of context unification is stratified context unification, which is decidable; however, the previously known algorithm has a very bad worst-case complexity. Recently it turned out that stratified context unification is equivalent to satisfiability of one-step rewrite constraints. This paper contains an optimized algorithm for stratified context unification exploiting sharing and power expressions. We prove that the complexity is determined mainly by the maximal depth of SO-cycles. Two observations are used: (i) for every ambiguous SO-cycle, there is a context variable that can be instantiated with a ground context of main depth O(c*d), where c is the number of context variables and d is the depth of the SO-cycle; (ii) the exponent of periodicity is $2^{O(n)}$, which means it has an O(n)-sized representation. From a practical point of view, these observations allow us to conclude that the unification algorithm is well-behaved if the maximal depth of SO-cycles does not grow too large.
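To make the flavor of the problem concrete, here is a small illustrative context equation; the example is mine, not taken from the paper.

```latex
% Illustrative context equation: X is a context variable,
% f a unary function symbol, a a constant.
\[
  X(f(a)) \doteq f(X(a)),
  \qquad\text{solved by } X \mapsto f^{\,n}([\cdot]) \text{ for every } n \ge 0 .
\]
% Both sides then equal f^{n+1}(a). Such solution families are why the
% exponent of periodicity matters: a minimal solution may repeat a
% ground context exponentially often, but a 2^{O(n)} bound means the
% repetition count has an O(n)-sized binary representation -- exactly
% what power expressions in the algorithm exploit.
```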
In this exploratory article, we consider the future of Deutsche Bank and Commerzbank and develop a new approach to the topic: instead of a merger of DB and CB, we propose to consider a partial merger of their IT and related back-office functions in order to create the basis for an Open Banking platform in Germany. Such a platform would act as a cross-institutional infrastructure company in which the participating banks develop a common data and IT platform (while respecting data protection regulations). Significant parts of the transaction processes would be pooled by the institutions and executed by the Open Banking platform. Moreover, the institutions would remain legally independent and compete with each other at the level of products and services that are developed and produced using just this common data and IT platform – "national champions" would not be created.
Such an Open Banking platform could even become the nucleus of a European banking platform able to compete with existing global data platforms from the USA and China, which already offer financial services and are likely to expand their offerings in the foreseeable future. The proposed model of an open data platform for banks prevents the emergence of national champions and supports the main goal of the banking union: the creation of a financial system in which individual banks can be resolved without provoking a systemic crisis and forcing taxpayers to finance bailouts.
Compliance with prevailing accounting standards is induced if the expected disadvantage from sanctions imposed when non-compliance is detected outweighs the advantage of non-compliant accounting choices. This expected disadvantage embodies the threat potential of sanctions imposed by an enforcement agency. The capital market mechanism develops an important threat potential if companies expect an adverse share price reaction following enforcement actions. Enforcement agencies in turn can make use of this capital-market-related sanction by releasing information on detected violations to the market after the settlement of an investigation. The present contribution analyses the capital market reaction to accounting standards enforcement activities of the British Financial Reporting Review Panel (FRRP). After a brief introduction to the legal basis and working procedure of the Panel, the analysis of its activities serves a dual purpose: firstly, the significance of capital-market-related sanctions for the overall enforcement regime is elaborated upon; secondly, the extent to which capital-market-related sanctions accomplish their function within the overall enforcement regime is assessed empirically. The results of the empirical analysis suggest that the capital-market-related sanctioning by the FRRP may not develop the threat potential that is a prerequisite for enhancing compliance.
The corporate convergence debate is usually presented in terms of competing efficiency and political claims. Convergence optimists assert that an economic logic will promote convergence on the most efficient form of economic organization, usually taken to be the public corporation governed under rules designed to maximize shareholder value. Convergence skeptics counterclaim that organizational diversity is possible, even probable, because of path dependent development of institutional complementarities whose abandonment is likely to be inefficient. The skeptics also assert that existing elites will use their political and economic advantages to block reform; the optimists counterclaim that the spread of shareholding will reshape politics.
Most systematic discussion of dyad morphemes has focussed on Australian languages, owing to a combination of their relative prevalence there and the development of a descriptive tradition that investigates them in some depth. In the course of researching this paper, however, I became aware of functionally and semantically similar morphemes in many other parts of the world, almost invariably described in isolation from any typological reference point. I have incorporated such data as far as I am aware of it, in the hope that a systematic study will encourage other investigators to identify, and investigate in detail, similar constructions in a range of languages. The current state of research, however, as well as some interesting geographical skewings that I discuss below (for instance, that outside Australia dyad constructions almost exclusively employ reciprocal morphology), means that most of this paper will focus on Australian languages.
This paper is the first to conduct an incentive-compatible experiment using real monetary payoffs to test the hypothesis of probabilistic insurance which states that willingness to pay for insurance decreases sharply in the presence of even small default probabilities as compared to a risk-free insurance contract. In our experiment, 181 participants state their willingness to pay for insurance contracts with different levels of default risk. We find that the willingness to pay sharply decreases with increasing default risk. Our results hence strongly support the hypothesis of probabilistic insurance. Furthermore, we study the impact of customer reaction to default risk on an insurer’s optimal solvency level using our experimentally obtained data on insurance demand. We show that an insurer should choose to be default-free rather than having even a very small default probability. This risk strategy is also optimal when assuming substantial transaction costs for risk management activities undertaken to achieve the maximum solvency level.
Broad, long-term financial and economic datasets are a scarce resource, particularly in the European context. In this paper, we present an approach for an extensible data model, i.e. one adaptable to future changes in technologies and sources, that may constitute a basis for digitized and structured long-term historical datasets. The data model covers specific peculiarities of historical financial and economic data and is flexible enough to accommodate data of different types (quantitative as well as qualitative) from different historical sources, hence achieving extensibility. Furthermore, based on historical German company and stock market data, we discuss a relational implementation of this approach.
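One generic way to obtain the kind of extensibility described above in a relational implementation is an entity-attribute-value (EAV) layout. The sketch below illustrates that common pattern only; it is not necessarily the paper's schema, and all table and column names are assumptions for illustration.

```python
# Hedged EAV sketch: new data types or sources only add rows, not schema changes.
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
CREATE TABLE entity    (id INTEGER PRIMARY KEY, kind TEXT, label TEXT);
CREATE TABLE attribute (id INTEGER PRIMARY KEY, name TEXT, datatype TEXT);
CREATE TABLE fact      (entity_id    INTEGER REFERENCES entity(id),
                        attribute_id INTEGER REFERENCES attribute(id),
                        observed_on  TEXT,   -- historical date of the record
                        source       TEXT,   -- provenance (archive, listing, ...)
                        val          TEXT);  -- quantitative or qualitative value
""")
con.execute("INSERT INTO entity VALUES (1, 'company', 'Beispiel AG')")
con.execute("INSERT INTO attribute VALUES (1, 'share_price', 'numeric')")
con.execute("INSERT INTO fact VALUES (1, 1, '1913-05-02', 'exchange list', '104.5')")

print(con.execute("""SELECT e.label, a.name, f.observed_on, f.val
                     FROM fact f JOIN entity e ON e.id = f.entity_id
                                 JOIN attribute a ON a.id = f.attribute_id""").fetchall())
```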
We study the behavioral underpinnings of adopting cash versus electronic payments in retail transactions. A novel theoretical and experimental framework is developed to primarily assess the impact of sellers’ service fees and buyers’ rewards from using electronic payments. Buyers and sellers face a coordination problem, independently choosing a payment method before trading. In the experiment, sellers readily adopt electronic payments but buyers do not. Eliminating service fees or introducing rewards significantly boosts the adoption of electronic payments. Hence, buyers’ incentives play a pivotal role in the diffusion of electronic payments but monetary incentives cannot fully explain their adoption choices. Findings from this experiment complement empirical findings based on surveys and field data.
On 14 September 2016, the European Commission proposed a Directive on “copyright in the Digital Single Market”. This proposal includes an Article 11 on the “protection of press publications concerning digital uses”, according to which “Member States shall provide publishers of press publications with the rights provided for in Article 2 and Article 3(2) of Directive 2001/29/EC for the digital use of their press publications.” Relying on the experiences and debates surrounding the German and Spanish laws in this area, this study presents a legal analysis of the proposal for an EU related right for press publishers (RRPP). After a brief overview of the general limits of the EU competence to introduce such a new related right, the study critically examines the purpose of an RRPP. On this basis, the next section distinguishes three versions of an RRPP with regard to its subject-matter and scope, and considers the practical and legal implications of these alternatives, in particular having regard to fundamental rights.
We consider unification of terms under the equational theory of two-sided distributivity D with the axioms x*(y+z) = x*y + x*z and (x+y)*z = x*z + y*z. The main result of this paper is that D-unification is decidable, shown by giving a non-deterministic transformation algorithm. The generated unification problems are: an AC1-problem with linear constant restrictions, and a second-order unification problem that can be transformed into a word-unification problem decidable by Makanin's algorithm. This solves an open problem in the field of unification. Furthermore, it is shown that the word problem for D can be decided in polynomial time; hence D-matching is NP-complete.
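A minimal illustrative D-unification instance may help fix ideas; the example is mine, not taken from the paper.

```latex
% x is a variable; a, b, c are free constants.
\[
  x * (a + b) \;\doteq^{?}_{D}\; c*a + c*b
\]
% Instantiating x \mapsto c and applying the axiom x*(y+z) = x*y + x*z
% gives c*(a+b) =_D c*a + c*b, so {x \mapsto c} is a D-unifier.
```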
We show how Sestoft’s abstract machine for lazy evaluation of purely functional programs can be extended to evaluate expressions of the calculus CHF, a process calculus that models Concurrent Haskell extended by imperative and implicit futures. The abstract machine is constructed modularly: we first add monadic IO-actions to the machine, and in a second step we add concurrency. Our main result is that the abstract machine coincides with the original operational semantics of CHF w.r.t. may- and should-convergence.
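For readers unfamiliar with the starting point, the following is a minimal Python sketch of the purely functional core of Sestoft's mark-1 machine for lazy evaluation; the paper's monadic IO-actions and concurrency extensions for CHF are deliberately not modelled here.

```python
# Sketch of Sestoft's mark-1 machine (functional core only).
# Expressions: ('var', x) | ('lam', x, body) | ('app', fun, argvar) | ('let', x, bound, body)
import itertools

fresh = (f"_v{i}" for i in itertools.count())   # fresh heap names

def subst(e, old, new):
    """Rename free occurrences of variable `old` to `new` in e."""
    tag = e[0]
    if tag == 'var':
        return ('var', new) if e[1] == old else e
    if tag == 'lam':
        return e if e[1] == old else ('lam', e[1], subst(e[2], old, new))
    if tag == 'app':
        return ('app', subst(e[1], old, new), new if e[2] == old else e[2])
    if e[1] == old:                  # recursive let: binder shadows `old`
        return e
    return ('let', e[1], subst(e[2], old, new), subst(e[3], old, new))

def run(e):
    """Reduce e to weak head normal form; returns (whnf, heap)."""
    heap, stack = {}, []
    while True:
        tag = e[0]
        if tag == 'app':                                  # unwind: push argument
            stack.append(('arg', e[2])); e = e[1]
        elif tag == 'lam' and stack and stack[-1][0] == 'arg':
            _, x = stack.pop(); e = subst(e[2], e[1], x)  # beta (args are heap names)
        elif tag == 'lam' and stack and stack[-1][0] == 'upd':
            _, x = stack.pop(); heap[x] = e               # update thunk with its value
        elif tag == 'var':                                # enter thunk (blackholes it)
            x = e[1]; e = heap.pop(x); stack.append(('upd', x))
        elif tag == 'let':                                # allocate under fresh name
            y = next(fresh)
            heap[y] = subst(e[2], e[1], y); e = subst(e[3], e[1], y)
        else:
            return e, heap                                # lambda, empty stack: WHNF

# let i = \x.x in (i i)  evaluates to  \x.x
prog = ('let', 'i', ('lam', 'x', ('var', 'x')), ('app', ('var', 'i'), 'i'))
print(run(prog)[0])   # -> ('lam', 'x', ('var', 'x'))
```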
Spatially dispersed transnational professional communities can be conceived of as cultural formations living in a global frame of reference, transgressing existing political and cultural boundaries. In their capacity as members of local technical and knowledge-based elites, they take part in circulating and connecting cultural meanings that are both locally produced and continuously re-working non-local flows. I argue that these elites can be described as actors at cultural interfaces, taking part in shaping and mediating social change. The aim is twofold: first, to point to mutually opposed tendencies and ambivalences in the framework of a "culture of change", and second, to look into the question of how such situations and groups can be approached methodologically.
We relate time-varying aggregate ambiguity (V-VSTOXX) to individual investor trading. We use the trading records of more than 100,000 individual investors from a large German online brokerage from March 2010 to December 2015. We find that an increase in ambiguity is associated with increased investor activity. It also leads to a reduction in risk-taking which does not reverse over the following days. When ambiguity is high, the effect of sentiment looms larger. Survey evidence reveals that ambiguity averse investors are more prone to ambiguity shocks. Our results are robust to alternative survey-, newspaper- or market-based ambiguity measures.
Motivated by tools for automated deduction on functional programming languages and programs, we propose a formalism to symbolically represent $\alpha$-renamings for meta-expressions. The formalism is an extension of the usual higher-order meta-syntax which allows all valid ground instances of a meta-expression to be $\alpha$-renamed so as to fulfill the distinct variable convention. The renaming mechanism may be helpful for several reasoning tasks in deduction systems. We present our approach for a meta-language which uses higher-order abstract syntax and a meta-notation for recursive let-bindings, contexts, and environments. It is used in the LRSX Tool -- a tool to reason about the correctness of program transformations in higher-order program calculi with respect to their operational semantics. Besides introducing a formalism to represent symbolic $\alpha$-renamings, we present and analyze algorithms for simplification of $\alpha$-renamings, matching, rewriting, and checking $\alpha$-equivalence of symbolically $\alpha$-renamed meta-expressions.
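As a point of reference for the ground case that the symbolic formalism generalizes, here is a minimal Python sketch of $\alpha$-equivalence checking for plain lambda terms; the paper's meta-variables, contexts, and environments are not modelled here.

```python
# Ground alpha-equivalence for the plain lambda calculus (sketch only).
# Terms: ('var', x) | ('lam', x, body) | ('app', f, a)

def alpha_eq(s, t, env_s=None, env_t=None, depth=0):
    """True iff s and t are alpha-equivalent; env_* map binders to binding levels."""
    env_s = env_s or {}; env_t = env_t or {}
    if s[0] != t[0]:
        return False
    if s[0] == 'var':
        # Bound variables must be bound at the same level; free ones must match by name.
        return env_s.get(s[1], s[1]) == env_t.get(t[1], t[1])
    if s[0] == 'lam':
        return alpha_eq(s[2], t[2],
                        {**env_s, s[1]: depth}, {**env_t, t[1]: depth},
                        depth + 1)
    return (alpha_eq(s[1], t[1], env_s, env_t, depth) and
            alpha_eq(s[2], t[2], env_s, env_t, depth))

# \x.\y.x y  ~alpha~  \a.\b.a b
s = ('lam', 'x', ('lam', 'y', ('app', ('var', 'x'), ('var', 'y'))))
t = ('lam', 'a', ('lam', 'b', ('app', ('var', 'a'), ('var', 'b'))))
print(alpha_eq(s, t))   # True
```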
In this study we investigate which economic ideas were prevalent in the post-crisis macroprudential discourse in order to understand the availability of ideas for reform-minded agents. We base our analysis on new findings in the field of ideational shifts and regulatory science, which posit that change agents engage with new ideas pragmatically and strategically in their effort to have their economic ideas institutionalized. We argue that in these epistemic battles over new regulation, scientific backing by academia is the key resource determining the outcome. We show that the present reforms implemented internationally follow this pattern. In our analysis we contrast the entire discourse on systemic risk and macroprudential regulation with Borio’s initial 2003 proposal for a macroprudential framework. We find that mostly cross-sectional measures targeted at increasing the resilience of the financial system, rather than inter-temporal measures dampening the financial cycle, have been implemented. We provide evidence for the lack of support for new macroprudential thinking within academia and argue that this is partially responsible for the lack of anti-cyclical macroprudential regulation. Most worryingly, the financial cycle is largely absent from the academic discourse and is only tacitly assumed rather than fully fleshed out in technocratic discourses, pointing to the possibility that no anti-cyclical measures will be forthcoming.
Algorithmic trading engines versus human traders – do they behave different in securities markets?
(2009)
After exchanges and alternative trading venues introduced electronic execution mechanisms worldwide, the focus of the securities trading industry shifted to the use of fully electronic trading engines by banks, brokers and their institutional customers. These Algorithmic Trading engines enable order submissions without human intervention, based on quantitative models applying historical and real-time market data. Although there is a widespread discussion on the pros and cons of Algorithmic Trading and on its impact on market volatility and market quality, little is known about how algorithms actually place their orders in the market and whether and in which respects this differs from other order submissions. Based on a dataset that – for the first time – includes a specific flag enabling the identification of orders submitted by Algorithmic Trading engines, the paper investigates the extent of Algorithmic Trading activity, and specifically its order placement strategies in comparison to human traders, in the Xetra trading system. It is shown that Algorithmic Trading has become a relevant part of overall market activity and that Algorithmic Trading engines fundamentally differ from human traders in their order submission, modification and deletion behavior, as they exploit real-time market data and the latest market movements.
Projected demographic changes in industrialized and developing countries vary in extent and timing but will reduce the share of the population of working age everywhere. Conventional wisdom suggests that this will increase capital intensity, with falling rates of return to capital and increasing wages. This decreases welfare for middle-aged, asset-rich households. This paper takes the perspective of the three demographically oldest European nations — France, Germany and Italy — to address three important adjustment channels to dampen these detrimental effects of aging in these countries: investing abroad, endogenous human capital formation and increasing the retirement age. Our quantitative finding is that endogenous human capital formation in combination with an increase in the retirement age has strong implications for economic aggregates and welfare, in particular in the open economy. These adjustments reduce the maximum welfare losses of demographic change for households alive in 2010 by about 2.2 percentage points in terms of a consumption equivalent variation.
The importance of agile methods has increased in recent years, not only to manage software development processes but also to establish flexible and adaptive organisational structures, which are essential to deal with disruptive changes and build successful digital business strategies. This paper takes an industry-specific perspective by analysing the dissemination, objectives and relative popularity of agile frameworks in the German banking sector. The data provides insights into expectations and experiences associated with agile methods and indicates possible implementation hurdles and success factors. Our research provides the first comprehensive analysis of agile methods in the German banking sector. The comparison with a selected number of fintechs has revealed some differences between banks and fintechs. We found that almost all banks and fintechs apply agile methods in IT-related projects. However, fintechs have relatively more experience with agile methods than banks and use them more intensively. Scrum is the most relevant framework used in practice. Scaled agile frameworks are so far negligible in the German banking sector. Acceleration of projects is apparently the most important objective of deploying agile methods. In addition, agile methods can contribute to cost savings and lead to improved quality and innovation performance, though for banks it is evidently more challenging to reach their respective targets than for fintechs. Overall, our findings suggest that German banks are still in a maturing process of becoming more agile and that there is room for an accelerated adoption of agile methods in general and scaled agile frameworks in particular.
We analyze the macroeconomic implications of increasing the top marginal income tax rate using a dynamic general equilibrium framework with heterogeneous agents and a fiscal structure resembling the actual U.S. tax system. The wealth and income distributions generated by our model replicate the empirical ones. In two policy experiments, we increase the statutory top marginal tax rate from 35 to 70 percent and redistribute the additional tax revenue among households, either by decreasing all other marginal tax rates or by paying out a lump-sum transfer to all households. We find that increasing the top marginal tax rate decreases inequality in both wealth and income but also leads to a contraction of the aggregate economy. This is primarily driven by the negative effects that the tax change has on top income earners. The aggregate gain in welfare is sizable in both experiments mainly due to a higher degree of distributional equality.
We investigate consumption patterns in Europe with supervised machine learning methods and reveal differences in age and wealth impact across countries. Using data from the third wave (2017) of the Eurosystem’s Household Finance and Consumption Survey (HFCS), we assess how age and (liquid) wealth affect the marginal propensity to consume (MPC) in the Netherlands, Germany, France, and Italy. Our regression analysis takes the specification by Christelis et al. (2019) as a starting point. Decision trees are used to suggest alternative variable splits to create categorical variables for customized regression specifications. The results suggest an impact of differing wealth distributions and retirement systems across the studied Eurozone members and are relevant to European policy makers due to joint Eurozone monetary policy and increasing supranational fiscal authority of the EU. The analysis is further substantiated by a supervised machine learning analysis using a random forest and XGBoost algorithm.
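The two-step idea described above can be sketched compactly: a shallow decision tree suggests split points in age and (log) liquid wealth, and the induced bins enter an OLS regression as categorical variables. This is a hedged illustration; the column names and simulated data are assumptions, not actual HFCS variables.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({'age': rng.integers(20, 80, n),
                   'liq_wealth': rng.lognormal(9, 1.2, n)})
df['mpc'] = (0.6 - 0.004 * df.age - 0.02 * np.log(df.liq_wealth)
             + rng.normal(0, 0.05, n))                    # synthetic MPC

def suggested_splits(x, y, max_leaf_nodes=4):
    """Fit a shallow tree on one covariate; return its split thresholds."""
    tree = DecisionTreeRegressor(max_leaf_nodes=max_leaf_nodes)
    tree.fit(np.asarray(x).reshape(-1, 1), y)
    t = tree.tree_
    return sorted(t.threshold[t.children_left != -1])     # internal nodes only

age_cuts = suggested_splits(df.age, df.mpc)
w_cuts = suggested_splits(np.log(df.liq_wealth), df.mpc)
df['age_bin'] = pd.cut(df.age, [-np.inf, *age_cuts, np.inf]).astype(str)
df['wealth_bin'] = pd.cut(np.log(df.liq_wealth),
                          [-np.inf, *w_cuts, np.inf]).astype(str)

# Customized regression specification with the tree-suggested splits.
print(smf.ols('mpc ~ C(age_bin) + C(wealth_bin)', data=df).fit().summary())
```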
What processes transform (im)mobile individuals into ‘migrants’ and geographic movements across political-territorial borders into ‘migration’? To address this question, the article develops the doing migration approach, which combines perspectives from social constructivism, praxeology and the sociologies of knowledge and culture. ‘Doing migration’ starts with the processes of social attribution that differentiate between ‘migrants’ and ‘non-migrants’. Embedded in institutional, organizational and interactional routines these attributions generate unique social orders of migration. By illustrating these conceptual ideas, the article provides insights into the elements of the contemporary European order of ‘migration’. Its institutional routines contribute to the emergence of a European migration regime that involves narratives of economization, securitization and humanitarization. The organizational routines of the European migration order involve surveillance and diversity management, which have disciplining effects on those defined as ‘migrants’. The routines of everyday face-to-face interactions produce various micro-forms of doing ‘migration’ through stigmatization and othering, but they also provide opportunities to resist a social attribution as ‘migrant’.
Event studies have become increasingly important in securities fraud litigation after the Supreme Court’s decision in Halliburton II. Litigants have used event study methodology, which empirically analyzes the relationship between the disclosure of corporate information and the issuer’s stock price, to provide evidence in the evaluation of key elements of federal securities fraud, including materiality, reliance, causation, and damages. As the use of event studies grows and they increasingly serve a gatekeeping function in determining whether litigation will proceed beyond a preliminary stage, it will be critical for courts to use them correctly.
This Article explores an array of considerations related to the use of event studies in securities fraud litigation. It starts by describing the basic function of the event study, namely to determine whether a highly unusual price movement has occurred, and the traditional statistical approach to making that determination. The Article goes on to identify special features of securities fraud litigation that distinguish the litigation setting from the scholarly context in which event studies were developed. The Article highlights the fact that the standard approach can lead to the wrong conclusion and describes the adjustments necessary to address the litigation context. We use the example of six dates in the Halliburton litigation to illustrate these points.
Finally, the Article highlights the limitations of event studies – what they can and cannot prove – and explains how those limitations relate to the legal issues for which they are introduced. These limitations bear upon important normative questions about the role event studies should play in securities fraud litigation.
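The "traditional statistical approach" referred to above can be summarized in a few lines: estimate a market model over a clean window, then test whether the event-date abnormal return is highly unusual. The sketch below uses simulated data for illustration; real litigation settings require the adjustments the Article discusses (multiple testing, volatility changes, thin estimation samples).

```python
import numpy as np

rng = np.random.default_rng(1)
mkt = rng.normal(0.0003, 0.01, 251)             # 250 estimation days + event day
stock = 0.0001 + 1.2 * mkt + rng.normal(0, 0.015, 251)
stock[-1] += -0.06                              # injected event-day shock

est_m, est_s = mkt[:-1], stock[:-1]
beta, alpha = np.polyfit(est_m, est_s, 1)       # market model: R = a + b*Rm + e
resid = est_s - (alpha + beta * est_m)
ar = stock[-1] - (alpha + beta * mkt[-1])       # event-day abnormal return
t_stat = ar / resid.std(ddof=2)                 # standardized abnormal return
print(f"AR = {ar:.4f}, t = {t_stat:.2f}")       # |t| > ~1.96 => unusual at 5%
```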
Advertising arbitrage
(2014)
Speculators often advertise arbitrage opportunities in order to persuade other investors and thus accelerate the correction of mispricing. We show that, in order to minimize the risk and the cost of arbitrage, an investor who identifies several mispriced assets optimally advertises only one of them and overweights it in his portfolio; a risk-neutral arbitrageur invests only in this asset. The choice of the asset to be advertised depends not only on mispricing but also on its "advertisability" and the accuracy of future news about it. When several arbitrageurs identify the same arbitrage opportunities, their decisions are strategic complements: they invest in the same asset and advertise it. Then multiple equilibria may arise, some of which are inefficient: arbitrageurs may correct small mispricings while failing to eliminate large ones. Finally, prices react more strongly to the ads of arbitrageurs with a successful track record, and reputation-building induces high-skill arbitrageurs to advertise more than others.
Advertising arbitrage
(2020)
Arbitrageurs with a short investment horizon gain from accelerating price discovery by advertising their private information. However, advertising many assets may overload investors' attention, reducing the number of informed traders per asset and slowing price discovery. So arbitrageurs optimally concentrate advertising on just a few assets, which they overweight in their portfolios. Unlike classic insiders, advertisers prefer assets with the least noise trading. If several arbitrageurs share information about the same assets, inefficient equilibria can arise, where investors' attention is overloaded and substantial mispricing persists. When they do not share, the overloading of investors' attention is maximal.
According to the present state of research, there seems to be no language which shows possessive classifiers and possessive verbs corresponding to English "to have" at the same time. In classifier languages predicative possession is expressed by verbless clauses, i.e. by existential clauses ("there is my possessed item"), equative clauses ("the possessed item is mine" "that is my possessed item") or by locative expressions ("the possessed item is near me"), in which the classifier in the case of non-inherent possession marks the nature of the relationship. While most Melanesian languages, as for instance Fijian, Lenakel, Pala and Tolai are classifier languages, Nguna, a Melanesian language spoken in Vanuatu, only shows traces of the Melanesian possessive classifier system, but, in contrast to the other Melanesian languages, it has a possessive verb, namely 'peani' "to have". In order to show how the Nguna possessive constructions deviate from the common Melanesian type, we shall start with a brief description of the Melanesian possessive constructions in general, and that of Fijian in particular.
We investigate methods and tools for analyzing translations between programming languages with respect to observational semantics. The behavior of programs is observed in terms of may- and must-convergence in arbitrary contexts, and adequacy of translations, i.e., the reflection of program equivalence, is taken to be the fundamental correctness condition. For compositional translations we propose a notion of convergence equivalence as a means for proving adequacy. This technique avoids explicit reasoning about contexts, and is able to deal with the subtle role of typing in implementations of language extensions.
Since 2015, 90 taxa of lichens and 18 lichenicolous fungi have been recorded from Turkey for the first time. A further 707 taxa are new to one or more provinces. In this paper 2 species are new to Turkey. A list of 82 published papers is also provided as a supplement to the bibliography of the 2017 checklist of Turkish lichens (John & Türk 2017).
This paper explores the consequences of consumer education on prices and welfare in retail financial markets when some consumers are naive about shrouded add-on prices and firms try to exploit this. Allowing for different information and pricing strategies, we show that education is unlikely to push firms to disclose prices to all consumers, which would be socially efficient. Instead, price discrimination emerges as a new equilibrium. Further, due to a feedback effect on prices, education that is good for consumers who become sophisticated may be bad for consumers who stay naive, and even for the group of all consumers as a whole.
This study examines the role of actual and perceived financial sophistication (i.e., financial literacy and confidence) for individuals' wealth accumulation. Using survey data from the German SAVE initiative, we find strong gender- and education-related differences in the distribution of the two variables and their effects on wealth: as financial literacy rises with formal education, whereas confidence increases with education for men but decreases for women, we observe that women become strongly underconfident with higher education, while men remain overconfident. Regarding wealth accumulation, we show that financial literacy has a positive effect that is stronger for women than for men and that is increasing (decreasing) in education for women (men). Confidence, however, supports only highly educated men's wealth. When considering different channels for wealth accumulation, we observe that financial literacy is more important for current financial market participation, whereas confidence is more strongly associated with future-oriented financial planning. Overall, we demonstrate that highly educated men's wealth levels benefit from their overconfidence via all financial decisions considered, but highly educated women's financial planning suffers from their underconfidence. This may impair their wealth levels in old age.
A number of recent studies have suggested that activist stabilization policy rules responding to inflation and the output gap can attain simultaneously a low and stable rate of inflation as well as a high degree of economic stability. The foremost example of such a strategy is the policy rule proposed by Taylor (1993). In this paper, I demonstrate that the policy settings that would have been suggested by this rule during the 1970s, based on real-time data published by the U.S. Commerce Department, do not greatly differ from actual policy during this period. To the extent macroeconomic outcomes during this period are considered unfavorable, this raises questions regarding the usefulness of this strategy for monetary policy. To the extent the Taylor rule is believed to provide a reasonable guide to monetary policy, this finding raises questions regarding earlier critiques of monetary policy during the 1970s.
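For reference, the rule in question is Taylor's (1993) original specification:

```latex
\[
  i_t = \pi_t + 0.5\,(\pi_t - 2) + 0.5\,y_t + 2,
\]
% where i_t is the federal funds rate (percent), \pi_t inflation over
% the previous four quarters, and y_t the percent output gap. The
% paper's exercise feeds real-time 1970s data into this formula and
% compares the implied settings with actual policy.
```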
Acquisition of aspect
(2003)
Acquiring foreign firms far away might be hazardous to your share price: evidence from Germany
(2007)
This paper examines shareholder wealth effects of cross-border acquisitions. In a sample of 155 large acquisitions by German corporations from 1985 to 2006, international transactions in total do not lead to significant announcement returns. Geography, however, makes a difference: shareholders of acquiring firms gain 6.5% in cross-border transactions into countries that share a border with Germany but lose 4.4% in other international transactions. We find proximity to be one of the most important success factors in cross-border mergers and acquisitions, even when we control for firm, deal and country characteristics.
A resampling method based on the bootstrap and a bias-correction step is developed for improving the Value-at-Risk (VaR) forecasting ability of the normal-GARCH model. Compared to the use of more sophisticated GARCH models, the new method is fast, easy to implement, numerically reliable, and, except for having to choose a window length L for the bias-correction step, fully data driven. The results for several different financial asset returns over a long out-of-sample forecasting period, as well as use of simulated data, strongly support use of the new method, and the performance is not sensitive to the choice of L. JEL classification: C22, C53, C63, G12
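The following Python sketch conveys the general idea only: a one-step normal-GARCH(1,1) VaR forecast plus a simple residual-bootstrap correction of the quantile. The paper's actual bias-correction step (tuned via the window length L) differs in detail, and all parameter values here are illustrative assumptions.

```python
import numpy as np

def garch_sigma2(r, omega, alpha, beta):
    """Conditional variance path of a GARCH(1,1), one step beyond the sample."""
    s2 = np.empty(len(r) + 1)
    s2[0] = r.var()
    for t in range(len(r)):
        s2[t + 1] = omega + alpha * r[t] ** 2 + beta * s2[t]
    return s2

rng = np.random.default_rng(2)
r = rng.standard_t(7, 1500) * 0.01          # fat-tailed stand-in for returns
omega, a, b = 1e-6, 0.08, 0.90              # parameters taken as given, not re-fit
s2 = garch_sigma2(r, omega, a, b)

level = 0.01
var_normal = np.sqrt(s2[-1]) * (-2.326)     # plain normal 1% quantile z_{0.01}

# Replace the normal quantile by the bootstrapped empirical quantile of the
# standardized residuals (a crude stand-in for the paper's bias correction).
z = r / np.sqrt(s2[:-1])
q = np.mean([np.quantile(rng.choice(z, z.size), level) for _ in range(500)])
var_boot = np.sqrt(s2[-1]) * q

print(f"normal-GARCH 1% VaR: {var_normal:.4f}   bootstrap-corrected: {var_boot:.4f}")
```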
This chapter outlines the conditions under which accounting-based smoothing can be beneficial for policyholders who hold with-profit or participating payout life annuities (PLAs). We use a realistically calibrated model of PLAs to explore how alternative accounting techniques influence policyholder welfare as well as insurer profitability and stability. We find that accounting smoothing of participating life annuities is favorable to consumers and insurers, as it mitigates the impact of short-term volatility and enhances the utility of these long-term annuity contracts.
Accounting for financial stability: Bank disclosure and loss recognition in the financial crisis
(2020)
This paper examines banks’ disclosures and loss recognition in the financial crisis and identifies several core issues for the link between accounting and financial stability. Our analysis suggests that, going into the financial crisis, banks’ disclosures about relevant risk exposures were relatively sparse. Such disclosures came later after major concerns about banks’ exposures had arisen in markets. Similarly, the recognition of loan losses was relatively slow and delayed relative to prevailing market expectations. Among the possible explanations for this evidence, our analysis suggests that banks’ reporting incentives played a key role, which has important implications for bank supervision and the new expected loss model for loan accounting. We also provide evidence that shielding regulatory capital from accounting losses through prudential filters can dampen banks’ incentives for corrective actions. Overall, our analysis reveals several important challenges if accounting and financial reporting are to contribute to financial stability.
This paper investigates what we can learn from the financial crisis about the link between accounting and financial stability. The picture that emerges ten years after the crisis is substantially different from the picture that dominated the accounting debate during and shortly after the crisis. Widespread claims about the role of fair-value (or mark-to-market) accounting in the crisis have been debunked. However, we identify several other core issues for the link between accounting and financial stability. Our analysis suggests that, going into the financial crisis, banks’ disclosures about relevant risk exposures were relatively sparse. Such disclosures came later after major concerns about banks’ exposures had arisen in markets. Similarly, banks delayed the recognition of loan losses. Banks’ incentives seem to drive this evidence, suggesting that reporting discretion and enforcement deserve careful consideration. In addition, bank regulation through its interlinkage with financial accounting may have dampened banks’ incentives for corrective actions. Our analysis illustrates that a number of serious challenges remain if accounting and financial reporting are to contribute to financial stability.
Accounting for financial instruments in the banking industry: conclusions from a simulation model
(2003)
The paper analyses the effects of three sets of accounting rules for financial instruments - Old IAS before IAS 39 became effective, Current IAS or US GAAP, and the Full Fair Value (FFV) model proposed by the Joint Working Group (JWG) - on the financial statements of banks. We develop a simulation model that captures the essential characteristics of a modern universal bank with investment banking and commercial banking activities. We run simulations for different strategies (fully hedged, partially hedged) using historical data from periods with rising and falling interest rates. We show that under Old IAS a fully hedged bank can portray its zero economic earnings in its financial statements. As Old IAS offer much discretion, this bank may also present income that is either positive or negative. We further show that because of the restrictive hedge accounting rules, banks cannot adequately portray their best practice risk management activities under Current IAS or US GAAP. We demonstrate that - contrary to assertions from the banking industry - mandatory FFV accounting adequately reflects the economics of banking activities. Our detailed analysis identifies, in addition, several critical issues of the accounting models that have not been covered in previous literature.
Recent changes in accounting regulation for financial instruments (SFAS 133, IAS 39) have been heavily criticized by representatives from the banking industry. They argue for retaining a historical cost based "mixed model" where accounting for financial instruments depends on their designation to either trading or nontrading activities. In order to demonstrate the impact of different accounting models for financial instruments on the financial statements of banks, we develop a bank simulation model capturing the essential characteristics of a modern universal bank with investment banking and commercial banking activities. In our simulations we look at different scenarios with periods of increasing/decreasing interest rates using historical data and with different banking strategies (fully hedged; partially hedged). The financial statements of our model bank are prepared under different accounting rules ("Old" IAS before implementation of IAS 39; current IAS) with and without hedge accounting as offered by the respective sets of rules. The paper identifies critical issues of applying the different accounting rules for financial instruments to the activities of a universal bank. It demonstrates important shortcomings of the "Old" IAS rules (before IAS 39), and of the current IAS rules. Under the current IAS rules the results of a fully hedged bank may have to show volatility in income statements due to changes in market interest rates. Accounting results of a partially hedged bank in the same scenario may be less affected even though there are economic gains or losses.
Returns to experience for U.S. workers have changed over the post-war period. This paper argues that a simple model goes a long way towards replicating these changes. The model features three well-known ingredients: (i) an aggregate production function with constant skill-biased technical change; (ii) cohort qualities that vary with average years of schooling; and crucially (iii) time-invariant age-efficiency profiles. The model quantitatively accounts for changes in longitudinal and cross-sectional returns to experience, as well as the differential evolution of the college wage premium for young and old workers.
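A standard functional form consistent with ingredient (i) is the generic Katz-Murphy CES aggregate, shown here only as an illustration and not necessarily the paper's exact specification:

```latex
\[
  Y_t \;=\; \Bigl[\lambda\,\bigl(A_t H_{s,t}\bigr)^{\rho}
            + (1-\lambda)\,H_{u,t}^{\rho}\Bigr]^{1/\rho},
  \qquad A_t = A_0\, e^{\gamma t},
\]
% with H_{s,t}, H_{u,t} efficiency units of skilled and unskilled labor
% and a constant growth rate \gamma of skill-biased technical change.
```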
From the late Middle Ages to early modern times (ca. 1200-1600), the Lübeck City Council was the most important court in the Baltic. About 100 cities and towns on its shores lived according to the law of Lübeck. The paper deals with the old theory that Imperial law, i.e. mainly the learned Ius commune, was generally rejected by the council on the grounds of its foreign nature. The paper rejects this view with the help of eight case studies. There are some rather spectacular statements against Imperial law, but a closer look reveals that they have to be seen in the light of a specific practical context. They should not be confused with general statements, in which the council had no interest. Its attitude towards Learned Law was flexible and purely pragmatic.
It is my intention to make two major points in this paper. 1. The first has to do with finding a frame within which the modal expressions of one particular ancient IE [Indo-European] language – I have chosen Classical Greek – can best be described. I shall try to point out that the regularities which we find in these expressions must depend on an underlying principle, represented by abstract structures. These structures are semanto-syntactic, which means that the semantic properties or bundles of properties are arranged not in a linear order but in a hierarchical order, analogous to a bracketing in a PS structure. The abstract structures we propose have, of course, a very tentative character. They can only be accepted as far as evidence for them can be furnished. 2. My second point has to do with the modal verb forms that were the object of the studies of most Indo-Europeanists. If in the innermost bracket of a semanto-syntactic structure two semantic properties or bundles of properties can be exchanged without any further change in the total structure, and if this change is correlated with a change in verbal mood forms and nothing else, then I think we are faced with a case where these forms can be said to have a meaning of their own. I shall also try to show how these meanings are to be understood as bundles of features rather than as unanalyzed terms. In my final remarks, I shall try to outline the bearing these views have on comparative IE linguistics.
We develop a model that endogenizes the manager's choice of firm risk and of inside debt investment strategy. Our model delivers two predictions. First, managers have an incentive to reduce the correlation between inside debt and company stock in bad times. Second, managers that reduce such a correlation take on more risk in bad times. Using a sample of U.S. public firms, we provide evidence consistent with the model's predictions. Our results suggest that the weaker link between inside debt and company stock in bad times does not translate into a mitigation of debt-equity conflicts.
The long-run consumption risk model provides a theoretically appealing explanation for prominent asset pricing puzzles, but its intricate structure presents a challenge for econometric analysis. This paper proposes a two-step indirect inference approach that disentangles the estimation of the model's macroeconomic dynamics and the investor's preference parameters. A Monte Carlo study explores the feasibility and efficiency of the estimation strategy. We apply the method to recent U.S. data and provide a critical re-assessment of the long-run risk model's ability to reconcile the real economy and financial markets. This two-step indirect inference approach is potentially useful for the econometric analysis of other prominent consumption-based asset pricing models that are equally difficult to estimate.
We model the motives for residents of a country to hold foreign assets, including the precautionary motive that has been omitted from much previous literature as intractable. Our model captures many of the principal insights from the existing specialized literature on the precautionary motive, deriving a convenient formula for the economy’s target value of assets. The target is the level of assets that balances impatience, prudence, risk, intertemporal substitution, and the rate of return. We use the model to shed light on two topical questions: the “upstream” flows of capital from developing countries to advanced countries, and the long-run impact of resorbing global financial imbalances.