Beyond Weltpolitik, self-containment and civilian power : United Germany's normalizing ambitions
(1999)
In many parts of the world, the centralized grid provides energy to the population only to a limited extent. Electrification rates in sub-Saharan African countries are the lowest in the world; the region accounts for half of the world's population without electricity. In recent years, however, rural areas in the Global South beyond the centralised grid have received increased attention, especially with respect to improved possibilities of solar power systems. The transition from one dominant form of energy provision to various alternatives includes different dimensions and depends on specific socio-spatial contexts. Energy systems are framed within systems of spatial practices, performed by a variety of involved actors: consumers, local suppliers, international for-profit companies, international development donors, as well as national and regional authorities. As such, power systems are always both cause and effect of socio-technical change. This study takes the example of Rwanda to analyse the marketization of decentralised energy systems. Based on empirical fieldwork with energy entrepreneurs, it combines Post-Colonial Theory with Science and Technology Studies to theorise the role of energy in the social production of space beyond the grid.
The article discusses the methodology adopted for a cross-linguistic synchronic and diachronic corpus study on indefinites. The study covered five indefinite expressions, each in a different language. The main goal of the study was to verify the distribution of these indefinites synchronically and to trace their historical development. The methodology we used is a form of functional labeling which combines context (syntax) and meaning (semantics), using Haspelmath’s (1997) functional map as a starting point. In the article we identify Haspelmath’s functions with logico-semantic interpretations and propose a binary branching decision tree assigning each instance of an indefinite exactly one function in the map.
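The labeling procedure described above can be pictured as a small decision routine that asks binary questions about the context until exactly one function remains. The feature names, branch order, and function labels below are illustrative assumptions, not the study's actual tree:

```python
# Sketch of a binary branching decision tree for functional labeling of
# indefinites. All feature names and the branch order are hypothetical;
# the real tree is built from Haspelmath's (1997) functional map.

def label_indefinite(ctx):
    """Assign exactly one function to an indefinite occurrence, given
    binary context features (syntax) that proxy for meaning (semantics)."""
    if ctx["in_scope_of_negation"]:
        return "direct negation"
    if ctx["in_question"]:
        return "question"
    if ctx["in_conditional"]:
        return "conditional protasis"
    if ctx["speaker_can_identify_referent"]:
        return "specific known"
    return "specific unknown"

example = {"in_scope_of_negation": False, "in_question": True,
           "in_conditional": False, "speaker_can_identify_referent": False}
print(label_indefinite(example))  # -> question
```

Because the branches are mutually exclusive and exhaustive, every corpus instance receives one and only one label, which is what makes the synchronic distributions and diachronic shifts directly comparable.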
Discussions regarding the planned European Deposit Insurance Scheme (EDIS), the missing third pillar of the European Banking Union, have been ongoing since the Commission published its initial legislative proposal in 2015. A breakthrough in negotiations has yet to be achieved. The gridlock on EDIS is most commonly attributed to moral hazard concerns over insufficient risk reduction harboured on the side of northern member states, particularly Germany, due to the weak state of some other member states’ banking sectors. While moral hazard based on uneven risk reduction is helpful for explaining divergent member-state preferences on the scope of necessary risk reduction, this does not explain preferences on the institutional design of EDIS. In this paper, we argue that contrary to persistent differences on necessary risk reduction, preferences regarding the institutional design of EDIS have become more closely aligned. We analyse how preferences on EDIS developed in the key member states of Germany, France, and Italy. In all sampled countries, we find path-dependent benefits connected to the current design of national Deposit Guarantee Schemes (DGS) that shifted preferences of the banking sector or significant subsectors in favour of retaining national DGSs. Overall, given that a compromise on risk-reduction can be accomplished, we argue that current preferences in these key member states provide an opportunity to implement EDIS in the form of a reinsurance system that maintains national DGSs in combination with a supranational fund.
What are the aggregate and distributional consequences of the relationship between an individual’s social network and financial decisions? Motivated by several well-documented facts about the influence of social connections on financial decisions, we build and calibrate a model of stock market participation with a social network that emphasizes the interplay between connectivity and network structure. Since connections to informed agents help spread information, there is a pivotal role for factors that determine sorting among agents. An increase in the average number of connections raises the average participation rate, mostly due to richer agents. A higher degree of sorting benefits richer agents by creating clusters where information spreads more efficiently. We show empirical evidence consistent with the importance of connectivity and sorting. We discuss several new avenues for future research into the aggregate impact of peer effects in finance.
Tying management compensation to profit measures plays an important role in aligning management decisions with the objectives of a firm's owners. This paper shows under which profit-measurement rules an agent is motivated to make optimal investment decisions when compensated on the basis of residual income. In particular, the paper addresses the question of whether, for the purpose of optimal investment incentives, finished goods should be valued at full cost or at variable cost. Against this background, different valuation approaches for receivables are also examined with respect to their incentive effects.
How might retirees consider deploying the retirement assets accumulated in a defined contribution pension plan? One possibility would be to purchase an immediate annuity. Another approach, called the "phased withdrawal" strategy in the literature, would have the retiree invest his funds and then withdraw some portion of the account annually. Using this second tactic, the withdrawal rate might be determined according to a fixed benefit level payable until the retiree dies or the funds run out, or it could be set using a variable formula, where the retiree withdraws funds according to a rule linked to life expectancy. Using a range of data consistent with the German experience, we evaluate several alternative designs for phased withdrawal strategies, allowing for endogenous asset allocation patterns, and also allowing the worker to make decisions both about when to retire and when to switch to an annuity. We show that one particular phased withdrawal rule is appealing since it offers relatively low expected shortfall risk, good expected payouts for the retiree during his life, and some bequest potential for the heirs. We also find that unisex mortality tables if used for annuity pricing can make women's expected shortfalls higher, expected benefits higher, and bequests lower under a phased withdrawal program. Finally, we show that delayed annuitization can be appealing since it provides higher expected benefits with lower expected shortfalls, at the cost of somewhat lower anticipated bequests. JEL Classification: G22, G23, J26, J32, H55. January 2004.
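The two withdrawal rules contrasted above differ in one mechanical detail: the fixed rule pays a constant amount until the fund is exhausted, while the variable rule pays a fraction of the current balance tied to remaining life expectancy. A minimal sketch, with illustrative numbers rather than the paper's German calibration:

```python
# Sketch of the two phased-withdrawal rules. Balance, benefit level, and
# life expectancy are illustrative assumptions, not the paper's inputs.

def fixed_withdrawal(balance, benefit):
    """Fixed-benefit rule: withdraw a constant amount until funds run out."""
    w = min(balance, benefit)
    return w, balance - w

def variable_withdrawal(balance, remaining_life_expectancy):
    """Variable rule: withdraw the fraction 1/E[T] of the current balance,
    where E[T] is remaining life expectancy in years."""
    w = balance / remaining_life_expectancy
    return w, balance - w

bal = 100_000.0
w1, bal1 = fixed_withdrawal(bal, 6_000.0)      # -> 6000.0, 94000.0
w2, bal2 = variable_withdrawal(bal, 20.0)      # E[T] = 20 -> withdraw 5%
print(w1, bal1)
print(w2, bal2)
```

The variable rule can never exhaust the account (each payout is a fraction of what remains), which is why it trades shortfall risk against a fluctuating benefit stream.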
Traditional tests of the CAPM following the Fama / MacBeth (1973) procedure are tests of the joint hypotheses that there is a relationship between beta and realized return and that the market risk premium is positive. The conditional test procedure developed by Pettengill / Sundaram / Mathur (1995) makes it possible to test the hypothesis of a relation between beta and realized returns independently. Monte Carlo simulations show that the conditional test reliably identifies this relation. In an empirical examination for the German stock market we find a significant relation between beta and return. Previous studies failed to identify this relationship, probably because the average market risk premium in the sample period was close to zero. Our results provide a justification for the use of betas estimated from historical return data by portfolio managers.
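The mechanics of the conditional test can be sketched in a few lines: each month, regress the cross-section of realized returns on betas, then average the slopes separately over up markets (positive market excess return) and down markets. The data below are simulated for illustration only:

```python
import numpy as np

# Sketch of a Pettengill/Sundaram/Mathur-style conditional test on
# simulated data (parameters are illustrative assumptions).

rng = np.random.default_rng(0)
n_assets, n_months = 50, 120
beta = rng.uniform(0.5, 1.5, n_assets)
mkt = rng.normal(0.0, 0.04, n_months)                 # market excess return
ret = beta[:, None] * mkt[None, :] + rng.normal(0, 0.05, (n_assets, n_months))

up_slopes, down_slopes = [], []
for t in range(n_months):
    slope = np.polyfit(beta, ret[:, t], 1)[0]         # cross-sectional slope on beta
    (up_slopes if mkt[t] > 0 else down_slopes).append(slope)

# A positive average slope in up markets and a negative one in down markets
# indicates a systematic beta-return relation even though the unconditional
# average premium is near zero here.
print(np.mean(up_slopes) > 0, np.mean(down_slopes) < 0)
```

This separation is exactly what lets the conditional test detect the beta-return relation in samples where the average market risk premium is close to zero, as in the German sample discussed above.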
Decisions under ambiguity depend on both the belief regarding possible scenarios and the attitude towards ambiguity. This paper exclusively investigates the belief formation and belief updating process under ambiguity, using laboratory experiments. The results show that half of the subjects tend to adopt a simple heuristic strategy when updating beliefs, while the other half appear to update in a partially Bayesian manner. We recover beliefs, represented by distributions of the priors/posteriors. The recovered initial priors mostly follow a uniform distribution. We also find that subjects on average demonstrate slight pessimism in an ambiguous environment.
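The Bayesian benchmark against which the heuristic updaters are compared is standard: reweight each scenario's prior by the likelihood of the observed signal and renormalize. A stylized two-scenario sketch (the urn compositions are hypothetical, not the experimental design):

```python
# Bayesian updating benchmark for beliefs over ambiguous scenarios.
# The prior and likelihood numbers are illustrative assumptions.

def bayes_update(prior, likelihoods):
    """Return the posterior over scenarios after one signal.
    likelihoods[i] is P(signal | scenario i)."""
    joint = [p * l for p, l in zip(prior, likelihoods)]
    total = sum(joint)
    return [j / total for j in joint]

# Uniform prior over two urn compositions; a red draw is twice as likely
# under scenario 2 (P = 0.6) as under scenario 1 (P = 0.3).
posterior = bayes_update([0.5, 0.5], [0.3, 0.6])
print(posterior)  # -> [0.333..., 0.666...]
```

A heuristic updater in the paper's sense would deviate from this posterior in a systematic way, for instance by moving the prior by a fixed step regardless of the likelihood ratio.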
If there is one thing to be learned from David Foster Wallace, it is that cultural transmission is a tricky game. This was a problem Wallace confronted as a literary professional, a university-based writer during what Mark McGurl has called the Program Era. But it was also a philosophical issue he grappled with on a deep level as he struggled to combat his own loneliness through writing. This fundamental concern with literature as a social, collaborative enterprise has also gained some popularity among scholars of contemporary American literature, particularly McGurl and James English: both critics explore the rules by which prestige or cultural distinction is awarded to authors (English; McGurl). Their approach requires a certain amount of empirical work, since these claims move beyond the individual experience of the text into forms of collective reading and cultural exchange influenced by social class, geographical location, education, ethnicity, and other factors. Yet McGurl and English's groundbreaking work is limited by the very forms of exclusivity they analyze: the protective bubble of creative writing programs in the academy and the elite economy of prestige surrounding literary prizes, respectively. To really study the problem of cultural transmission, we need to look beyond the symbolic markets of prestige to the real market, the site of mass literary consumption, where authors succeed or fail based on their ability to speak to that most diverse and complicated of readerships: the general public. Unless we study what I call the social lives of books, we make the mistake of keeping literature in the same ascetic laboratory that Wallace tried to break out of with his intense authorial focus on popular culture, mass media, and everyday life.
As 2021 draws to a close, Covid-19 continues to prevail worldwide. With the proverbial return to normalcy still appearing distant, there is now a tacit acceptance globally that at least for the foreseeable future, we must live with Covid-19. Given that Covid-19 is an infectious disease—which by definition is transmitted from person to person—the continued prevalence of Covid-19 has implications for how local authorities, communities, and individuals around the world will approach public spaces. While it may be premature to assume a so-called coronacene (see Higgins et al. 2020), going into the future our use of public spaces will be overshadowed by the possibility, even if remote, of illness or death by virtue of close proximity to other individuals.
Along with parks and squares, streets and avenues, bazaars constitute ubiquitous public spaces, including in countries of the developing world, such as Armenia and Georgia, our countries of discussion here. Although there is not a clear bifurcation between bazaars and other types of marketplaces, bazaars usually comprise a multitude of nonfranchised, self-owned, small businesses that are variously family-run or rely on family labor. They are usually perceived as chaotic places that lack hygiene (the purportedly unhygienic character of the bazaar was brought to the forefront with the pandemic, given how Covid-19’s origin is widely assumed to be a Wuhan wet market).
In Armenia and Georgia, and indeed, across the former Soviet Union, bazaars are a source of employment for the urban and peri-urban population; they also offer goods at price points attractive to a wide demographic. This working paper builds on the premise that the bazaar is an informal institution. Bazaar traders will typically assemble networks by themselves (with manufacturers and wholesalers, buyers and transporters). These networks will usually vary from one business to another. Also, ownership and rent structures are frequently opaque, and the majority of commercial transactions are in cash, which does not appear in state records. As a consequence, for the state, many small businesses do not exist (Fehlings and Karrar 2016, 2020).
For those of us researching bazaar trading, Covid-19 has given rise to a basic question: How have independent businesses been transformed by the pandemic? This working paper is an attempt to parse this question in light of developments in Armenia and Georgia. In this working paper, we suggest that the Covid-19 pandemic has deepened informality in the bazaar. That being said, we want to underscore that the present discussion is exploratory. Our ethnography remains limited, and we look forward to returning to the field as soon as it is safe to do so.
A novel spatial autoregressive model for panel data is introduced, which incorporates multilayer networks and accounts for time-varying relationships. Moreover, the proposed approach allows the structural variance to evolve smoothly over time and enables the analysis of shock propagation in terms of time-varying spillover effects.
The framework is applied to analyse the dynamics of international relationships among the G7 economies and their impact on stock market returns and volatilities. The findings underscore the substantial impact of cooperative interactions and highlight discernible disparities in network exposure across G7 nations, along with nuanced patterns in direct and indirect spillover effects.
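The spillover mechanism behind these findings is the standard spatial-autoregressive multiplier: if outcomes satisfy y = ρWy + ε, then y = (I − ρW)⁻¹ε, so a shock to one country propagates through the network weights W. A minimal sketch with an illustrative row-normalized three-country layer (not the estimated multilayer structure, and a hypothetical ρ):

```python
import numpy as np

# Spatial-AR spillover sketch: the Leontief-style multiplier distributes a
# unit shock across the network. W and rho are illustrative assumptions.

W = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])          # row-normalized network layer
rho = 0.4                                 # spatial dependence parameter
multiplier = np.linalg.inv(np.eye(3) - rho * W)

shock = np.array([1.0, 0.0, 0.0])         # unit shock to country 1
response = multiplier @ shock
print(response.round(3))  # direct effect on country 1, indirect on 2 and 3
```

In the time-varying version described above, both W and ρ move over time, so the same computation yields spillover effects that themselves evolve with the network.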
We estimate a Bayesian vector autoregression for the U.K. with drifting coefficients and stochastic volatilities. We use it to characterize posterior densities for several objects that are useful for designing and evaluating monetary policy, including local approximations to the mean, persistence, and volatility of inflation. We present diverse sources of uncertainty that impinge on the posterior predictive density for inflation, including model uncertainty, policy drift, structural shifts and other shocks. We use a recently developed minimum entropy method to bring outside information to bear on inflation forecasts. We compare our predictive densities with the Bank of England's fan charts.
In this paper we adapt the Hamiltonian Monte Carlo (HMC) estimator to DSGE models, a method presently used in various fields due to its superior sampling and diagnostic properties. We implement it in a state-of-the-art, freely available high-performance software package, Stan. We estimate a small scale textbook New-Keynesian model and the Smets-Wouters model using US data. Our results and sampling diagnostics confirm the parameter estimates available in existing literature. In addition, we find bimodality in the Smets-Wouters model even if we estimate the model using the original tight priors. Finally, we combine the HMC framework with the Sequential Monte Carlo (SMC) algorithm to create a powerful tool which permits the estimation of DSGE models with ill-behaved posterior densities.
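At the core of the method is the HMC transition itself: simulate Hamiltonian dynamics with a leapfrog integrator, then accept or reject with a Metropolis step. A minimal self-contained sketch for a standard-normal target (in the DSGE application, `logp` and `grad` would be the model's posterior density and its gradient, which Stan derives automatically):

```python
import numpy as np

# Minimal HMC sketch: leapfrog integration plus Metropolis correction.
# The target here is a standard normal; step size and path length are
# illustrative tuning choices.

def logp(x):
    return -0.5 * x * x        # log density up to an additive constant

def grad(x):
    return -x

def hmc_step(x, rng, eps=0.2, L=10):
    p0 = rng.normal()                       # resample momentum
    x_new = x
    p = p0 + 0.5 * eps * grad(x)            # initial half step (momentum)
    for i in range(L):
        x_new = x_new + eps * p             # full step (position)
        if i < L - 1:
            p = p + eps * grad(x_new)       # full step (momentum)
    p = p + 0.5 * eps * grad(x_new)         # final half step (momentum)
    log_alpha = (logp(x_new) - 0.5 * p * p) - (logp(x) - 0.5 * p0 * p0)
    return x_new if np.log(rng.uniform()) < log_alpha else x

rng = np.random.default_rng(0)
x, draws = 0.0, []
for _ in range(5000):
    x = hmc_step(x, rng)
    draws.append(x)
# Sample mean and standard deviation should be close to 0 and 1.
print(round(float(np.mean(draws)), 2), round(float(np.std(draws)), 2))
```

Because the leapfrog integrator uses gradient information, HMC explores high-dimensional posteriors far more efficiently than random-walk samplers, which is the property the paper exploits for DSGE estimation.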
In this paper we adopt the Hamiltonian Monte Carlo (HMC) estimator for DSGE models by implementing it in a state-of-the-art, freely available high-performance software package. We estimate a small scale textbook New-Keynesian model and the Smets-Wouters model on US data. Our results and sampling diagnostics confirm the parameter estimates available in existing literature. In addition, we combine the HMC framework with the Sequential Monte Carlo (SMC) algorithm, which permits the estimation of DSGE models with ill-behaved posterior densities.
We study the problem of a policymaker who seeks to set policy optimally in an economy where the true economic structure is unobserved, and policymakers optimally learn from their observations of the economy. This is a classic problem of learning and control, variants of which have been studied in the past, but rarely with the forward-looking variables that are a key component of modern policy-relevant models. As in most Bayesian learning problems, the optimal policy typically includes an experimentation component reflecting the endogeneity of information. We develop algorithms to solve numerically for the Bayesian optimal policy (BOP). However, the BOP is only feasible in relatively small models, and thus we also consider a simpler specification we term adaptive optimal policy (AOP), which allows policymakers to update their beliefs but shortcuts the experimentation motive. In our setting, the AOP is significantly easier to compute, and in many cases provides a good approximation to the BOP. We provide a simple example to illustrate the role of learning and experimentation in an MJLQ framework. JEL Classification: E42, E52, E58
Basel III and CEO compensation in banks: pay structures as a regulatory signal : [March 6, 2013]
(2013)
This paper proposes a new regulatory approach that implements capital requirements contingent on managerial compensation. We argue that excessive risk taking in the financial sector originates from the shareholder moral hazard created by government guarantees rather than from corporate governance failures within banks. The idea of the proposed regulation is to utilize the compensation scheme to drive a wedge between the interests of top management and shareholders to counteract shareholder risk-shifting incentives. The decisive advantage of this approach compared to existing regulation is that the regulator does not need to be able to properly measure the bank investment risk, which has been shown to be a difficult task during the 2008-2009 financial crisis.
Bargaining with a bank
(2018)
This paper examines bargaining as a mechanism to resolve information problems. To guide the analysis, I develop a parsimonious model of a credit negotiation between a bank and firms with varying levels of impatience. In equilibrium, impatient firms accept the bank’s offer immediately, while patient firms wait and negotiate price adjustments. I test the empirical predictions using a hand-collected dataset on credit line negotiations. Firms signing the bank’s offer right away draw down their line of credit after origination and default more than late signers. Late signers negotiate price adjustments more frequently, and, consistent with the model, these adjustments predict better ex post performance.
Since the 2008 financial crisis, Europe's largest banks' size and business models have largely remained unchallenged. Is that because of banks' continued structural power over States? This paper challenges the view that States are sheer hostages of banks' capacity to provide credit to the real economy – which is the conventional definition of structural power. Instead, it sheds light on the geo-economic dimension of banks' power: key public officials conceive the position of "their own" market-based banks in global financial markets as a crucial dimension of State power. State priorities towards banking thus result from political choices over what structurally matters the most for the State. Based on a discourse analysis of parliamentary debates in France, Germany and Spain between 2010 and 2020, as well as on a comparative analysis of the implementation of a special tax on banks in the early 2010s, this paper shows that States' finance ministries tend to prioritize geo-economic considerations over credit to firms. By contrast, Parliaments tend to prioritize investment. Power dynamics within the State thus largely shape political priorities towards banking at the domestic and international levels.
We provide an assessment of the Basel Committee on Banking Supervision (BCBS) proposal to restrict the internal ratings-based approach on bank risk and to introduce risk-weighted asset floors. If well enforced, risk-sensitive capital regulation results in a more efficient credit allocation compared to the standard approach. Thus, the internal ratings-based approach should be maintained. Further, the use of internal ratings-based output floors potentially results in unintended negative side effects. Input floors are likely a valuable tool to achieve risk-weighted assets comparability. Finally, the proposed measures have a potential detrimental impact for European banks as compared to others.
Banks' financial distress, lending supply and consumption expenditure : [version december 2013]
(2014)
The paper employs a unique identification strategy that links survey data on household consumption expenditure to bank level data in order to estimate the effects of bank financial distress on consumer credit and consumption expenditures. Specifically, we show that households whose banks were more exposed to funding shocks report significantly lower levels of non-mortgage liabilities compared to a matched sample of households. The reduced access to credit, however, does not result in lower levels of consumption. Instead, we show that households compensate by drawing down liquid assets. Only households without the ability to draw on liquid assets reduce consumption. The results are consistent with consumption smoothing in the face of a temporary adverse lending supply shock. The results contrast with recent evidence on the real effects of finance on firms' investment, where even temporary adverse credit supply shocks are associated with significant real effects.
The German corporate governance system has long been cited as the standard example of an insider-controlled and stakeholder-oriented system. We argue that despite important reforms and substantial changes of individual elements of the German corporate governance system the main characteristics of the traditional German system as a whole are still in place. However, in our opinion the changing role of the big universal banks in the governance undermines the stability of the corporate governance system in Germany. Therefore a breakdown of the traditional system leading to a control vacuum or a fundamental change to a capital market-based system could be in the offing.
The creation of the Banking Union is likely to come with substantial implications for the governance of Eurozone banks. The European Central Bank, in its capacity as supervisory authority for systemically important banks, as well as the Single Resolution Board, under the EU Regulations establishing the Single Supervisory Mechanism and the Single Resolution Mechanism, have been provided with a broad mandate and corresponding powers that allow for far-reaching interference with the relevant institutions’ organisational and business decisions. Starting with an overview of the relevant powers, the present paper explores how these could – and should – be exercised against the backdrop of the fundamental policy objectives of the Banking Union. The relevant aspects directly relate to a fundamental question associated with the reallocation of the supervisory landscape, namely: Will the centralisation of supervisory powers, over time, also lead to the streamlining of business models, corporate and group structures of banks across the Eurozone?
Corporate borrowers care about the overall riskiness of a bank’s operations as their continued access to credit may rely on the bank’s ability to roll over loans or to expand existing credit facilities. As we show, a key implication of this observation is that increasing competition among banks should have an asymmetric impact on banks’ incentives to take on risk: Banks that are already riskier will take on yet more risk, while their safer rivals will become even more prudent. Our results offer new guidance for bank supervision in an increasingly competitive environment and may help to explain existing, ambiguous findings on the relationship between competition and risk-taking in banking. Furthermore, our results stress the beneficial role that competition can have for financial stability as it turns a bank’s "prudence" into an important competitive advantage.
Banking and markets
(2001)
This paper integrates a number of recent themes in the literature on banking and asset markets–optimal risk sharing, limited market participation, asset-price volatility, market liquidity, and financial crises–in a general-equilibrium theory of the financial system. A complex financial system comprises both financial markets and financial institutions. Financial institutions can take the form of intermediaries or banks. Banks, unlike intermediaries, are subject to runs, but crises do not imply market failure. We show that a sophisticated financial system–a system with complete markets for aggregate risk and limited market participation–is incentive-efficient if the institutions take the form of intermediaries, or else constrained-efficient if they take the form of banks. We also consider an economy in which the markets for aggregate risks are incomplete. In this context, there is a role for prudential regulation: regulating liquidity can improve welfare.
We study the impact of higher capital requirements on banks’ balance sheets and its transmission to the real economy. The 2011 EBA capital exercise provides an almost ideal quasi-natural experiment, which allows us to identify the effect of higher capital requirements using a difference-in-differences matching estimator. We find that treated banks increase their capital ratios not by raising their levels of equity, but by reducing their credit supply. We also show that this reduction in credit supply results in lower firm-, investment-, and sales growth for firms which obtain a larger share of their bank credit from the treated banks.
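The difference-in-differences logic behind the identification can be sketched in a few lines: compare the pre-to-post change in an outcome for treated banks with the same change for matched controls, so that common trends cancel out. The numbers below are simulated for illustration (the paper's estimator additionally matches on pre-treatment characteristics):

```python
import numpy as np

# Stylized difference-in-differences sketch on simulated matched samples.
# All magnitudes (trend 0.5, treatment effect -2.0) are illustrative.

rng = np.random.default_rng(1)
n = 200
pre_t  = rng.normal(10.0, 1.0, n)                      # treated, pre-period
post_t = pre_t + 0.5 - 2.0 + rng.normal(0, 0.1, n)     # trend + effect of -2
pre_c  = rng.normal(10.0, 1.0, n)                      # matched controls
post_c = pre_c + 0.5 + rng.normal(0, 0.1, n)           # trend only

did = (post_t.mean() - pre_t.mean()) - (post_c.mean() - pre_c.mean())
print(round(did, 2))  # close to the true effect of -2.0
```

Subtracting the control group's change removes the common time trend (0.5 here), leaving an estimate of the treatment effect alone; this is why the EBA capital exercise, which assigned treatment by a threshold rule, approximates a natural experiment.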
We show that market discipline, defined as the extent to which firm specific risk characteristics are reflected in market prices, eroded during the recent financial crisis in 2008. We design a novel test of changes in market discipline based on the relation between firm specific risk characteristics and debt-to-equity hedge ratios. We find that market discipline already weakened after the rescue of Bear Stearns before disappearing almost entirely after the failure of Lehman Brothers. The effect is stronger for investment banks and large financial institutions, while there is no comparable effect for non-financial firms.
We analyze the impact of decreases in available lending resources on quantitative and qualitative dimensions of firms’ patenting activities. We thereby make use of the European Banking Authority’s capital exercise to carve out the causal effect of bank lending on firm innovation. In order to do so we combine various datasets to derive information on firms’ financials, their patenting behaviors, as well as their relationships with their lenders. Building on this self-generated dataset, we provide support for the “less finance, less innovation” view. At the same time, we show that lower available financial resources for firms lead to improvement in the qualitative dimensions of their patents. Hence, we carve out a “less finance, less but better innovation” pattern.
We present a network model of the interbank market in which optimizing risk averse banks lend to each other and invest in non-liquid assets. Market clearing takes place through a tâtonnement process which yields the equilibrium price, while traded quantities are determined by means of a matching algorithm. We compare three alternative matching algorithms: maximum entropy, closest matching and random matching. Contagion occurs through liquidity hoarding, interbank interlinkages and fire sale externalities. The resulting network configurations exhibit a core-periphery structure, dis-assortative behavior and low clustering coefficient. We measure systemic importance by means of network centrality and input-output metrics, and individual contributions to systemic risk by means of Shapley values. Within this framework we analyze the effects of prudential policies on the stability/efficiency trade-off. Liquidity requirements unequivocally decrease systemic risk but at the cost of lower efficiency (measured by aggregate investment in non-liquid assets); equity requirements tend to reduce risk (hence increase stability) without reducing significantly overall investment.
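Shapley values attribute a system-wide risk measure to individual banks by averaging each bank's marginal contribution over all possible orderings of the others. A self-contained sketch with a toy coalition risk function (in the paper the measure comes from the simulated network model, so `risk` below is purely illustrative):

```python
from itertools import combinations
from math import factorial

# Shapley-value attribution of a coalition risk measure to individual
# banks. The `risk` function and standalone risks are toy assumptions.

def shapley(players, risk):
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for k in range(n):
            for coal in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[p] += w * (risk(set(coal) | {p}) - risk(set(coal)))
    return phi

# Toy superadditive measure: standalone risks plus a joint-exposure
# surcharge once two or more banks are in the coalition.
standalone = {"A": 1.0, "B": 2.0, "C": 3.0}
def risk(coal):
    return sum(standalone[p] for p in coal) + (1.0 if len(coal) >= 2 else 0.0)

phi = shapley(list(standalone), risk)
print(phi)
print(round(sum(phi.values()), 6))  # efficiency: shares sum to risk({A,B,C})
```

The efficiency property, that individual shares sum exactly to the risk of the full system, is what makes Shapley values attractive for decomposing systemic risk across banks.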
We model the impact of bank mergers on loan competition, reserve holdings and aggregate liquidity. A merger changes the distribution of liquidity shocks and creates an internal money market, leading to financial cost efficiencies and more precise estimates of liquidity needs. The merged banks may increase their reserve holdings through an internalization effect or decrease them because of a diversification effect. The merger also affects loan market competition, which in turn modifies the distribution of bank sizes and aggregate liquidity needs. Mergers among large banks tend to increase aggregate liquidity needs and thus the public provision of liquidity through monetary operations of the central bank. JEL Classification: D43, G21, G28, L13
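The diversification effect mentioned above has a simple statistical core: when the merged bank pools independent liquidity shocks on an internal money market, the standard deviation of its net liquidity need grows with the square root of the number of branches, not linearly. A back-of-the-envelope sketch (the shock volatility is an illustrative assumption):

```python
import numpy as np

# Diversification effect sketch: pooling two independent liquidity shocks
# with standard deviation sigma yields a pooled sd of sqrt(2)*sigma,
# which is less than the 2*sigma the banks would provision for separately.

sigma = 1.0
separate = 2 * sigma              # two stand-alone banks, each holds for sigma
merged = np.sqrt(2) * sigma       # pooled shock sd after the merger
print(merged < separate)          # -> True
```

This is why the model allows reserves to fall after a merger: the pooled shock is less volatile per unit of size, even before the countervailing internalization effect is taken into account.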
The paper offers an innovative contribution to the investigation of the pricing of banking liabilities contracted by sovereign agents. To address fundamental issues of banking, the study focuses on the determinants of up-front fees (the up-front fee is a charge paid at the signature of the loan arrangement). The investigation is based on a uniquely extensive sample of bank loans contracted or guaranteed by the sovereigns of 58 less-developed countries between 1983 and 1997. The well-detailed reports allow the calculation of the equivalent yearly margin over the utilization period for each individual loan. The main findings suggest a significant impact of renegotiation and agency costs on front-end borrowing payments. Unlike the interest spread alone, the all-in interest margin better accounts for these costs. The model estimates suggest, however, that the non-linear pricing is hardly associated with an exogenous split-up intended by the borrower and his banker to cover up information. Instead, the up-front payment is a liquidity transfer, as described by Gorton and Kahn (2000), that compensates for renegotiation and monitoring costs. The second interesting result is that banks demand payment for all types of sovereign risk in the same manner as public debt holders do. The difference is that, unlike bond holders, bankers can charge an up-front fee to compensate for renegotiation costs. Hence, beyond information-related issues, the greater complexity of the pricing design makes bank loans optimal for lenders on sovereign capital markets, especially relative to public debt, and thus motivates their presence. The paper contributes to the expanding literature on loan syndication and related banking issues. The study is also relevant for the investigation of developing-country debt pricing.
Using novel monthly data for 226 euro-area banks from 2007 to 2015, we investigate the determinants of changes in banks’ sovereign exposures and their effects during and after the crisis. First, public, bailed out and poorly capitalized banks responded to sovereign stress by purchasing domestic public debt more than other banks, with public banks’ purchases growing especially in coincidence with the largest ECB liquidity injections. Second, bank exposures significantly amplified the transmission of risk from the sovereign and its impact on lending. This amplification of the impact on lending does not appear to arise from spurious correlation or reverse causality.
This paper studies the impact of banks’ dividend restrictions on the behavior of their institutional investors. Using an identification strategy that relies on within-investor variation and a difference-in-differences setup, I find that funds permanently decreased their ownership shares in treated banks during the 2020 dividend restrictions in the Eurozone and even exited treated banks’ stocks. Data from before the introduction of the ban reveal a positive relationship between fund ownership and banks’ dividend yield, again highlighting the importance of dividends for European banks’ fund investors. This reaction also has pricing implications, since there is a negative relationship between the cumulative abnormal returns on the dividend restriction announcement day and the percentage of fund owners per bank.
This paper argues that banks must be sufficiently levered to have first-best incentives to make new risky loans. This result, which is at odds with the notion that leverage invariably leads to excessive risk taking, derives from two key premises that focus squarely on the role of banks as informed lenders. First, banks finance projects that they do not own, which implies that they cannot extract all the profits. Second, banks conduct a credit risk analysis before making new loans. Our model may help understand why banks take on additional unsecured debt, such as unsecured deposits and subordinated loans, over and above their existing deposit base. It may also help understand why banks and finance companies have similar leverage ratios, even though the latter are not deposit takers and hence not subject to the same regulatory capital requirements as banks.
Do current levels of bank capital in Europe suffice to support a swift recovery from the COVID-19 crisis? Recent research shows that a well-capitalized banking sector is a major factor driving the speed and breadth of recoveries from economic downturns. In particular, loan supply is negatively affected by low levels of capital. We estimate a capital shortfall in European banks of up to 600 billion euro in a severe scenario, and around 143 billion euro in a moderate scenario. We propose a precautionary recapitalization on the European level that puts the European Stability Mechanism (ESM) center stage. This proposal would cut through the sovereign-bank nexus, safeguard financial stability, and position the Eurozone for a quick recovery from the pandemic.
We analyze the risk premium on bank bonds at origination, with a special focus on the role of implicit and explicit public guarantees and the systemic relevance of the issuing institutions. Looking at the asset swap spread on 5,500 bonds, we find that explicit guarantees and sovereign creditworthiness have a substantial effect on the risk premium. In addition, while large institutions still enjoy lower issuance costs linked to the too-big-to-fail (TBTF) framework, we find evidence of enhanced market discipline for systemically important banks, which have faced an increased premium on bond placements since the onset of the financial crisis.
This study examines the relation of bank loan terms, such as interest rates, collateral, and lines of credit, to borrower risk as defined by the banks' internal credit ratings. The analysis is not restricted to a static view: it also incorporates rating transitions and their implications for this relation. Money illusion and phenomena linked to relationship banking emerge as important factors. The results show that riskier borrowers pay higher loan rate premiums and rely more on bank finance. Housebanks obtain more collateral and provide more finance. Owing to money illusion, loan rate premiums are relatively small in times of high market interest rates, whereas in times of low market interest rates they are relatively high. There was no evidence of an appropriate adjustment of loan terms to rating changes. However, bank market power, represented by a weighted average of the credit rating before and after a rating transition, serves to compensate for low earlier profits caused by interest rate smoothing. Klassifikation: G21.
Euro area data show a positive connection between sovereign and bank risk, which increases with banks’ and the sovereign’s long-run fragility. We build a macro model with banks subject to incentive problems and liquidity risk (in the form of liquidity-based bank runs) which provides a link between endogenous bank capital and macro and policy risk. Our banks also invest in risky government bonds, used as a capital buffer to self-insure against liquidity risk. The model can replicate the positive connection between sovereign and bank risk observed in the data. Central bank liquidity policy, through a full allotment policy, is successful in stabilizing the spiraling feedback loops between bank and sovereign risk.
In this paper we put forward a legal argument in favour of granting more independence to BaFin, the German securities market supervisor. Following the Wirecard scandal, our reform proposal aims at strengthening the impartiality and credibility of the German supervisor and, as a consequence, at restoring capital market integrity. In order to achieve the necessary degree of democratic legitimacy for giving BaFin more independence and disassociating it from the Ministry of Finance, the paper sets out the necessary steps for a legal reform that creates accountability of BaFin vis-à-vis the Parliament, subjecting it to strict disclosure and reporting obligations.
This article provides a proposal to use IMF Article VIII, Section 2 (b) to establish a binding mechanism on private creditors for a sovereign debt standstill. The proposal builds on the original idea by Whitney Deveboise (1984). Using arguments brought forward by confidential IMF staff papers (1988, 1996) and the IMF General Counsel (1988), this paper shows how an authoritative interpretation of Article VIII, Section 2 (b) can provide protection from litigation to countries at risk of debt distress.
The envisaged mechanism presents several advantages over recent proposals for a binding standstill mechanism, such as the International Developing Country Debt Authority (IDCDA) proposed by UNCTAD and the Central Credit Facility (CCF) proposed by the Bolton Committee. First, this approach would not require the creation of new intergovernmental mechanisms or facilities. Second, the activation of the standstill mechanism can be set in motion by any IMF member country and does not require a modification of the Fund's Articles of Agreement. Third, debtor countries acting in good faith under an IMF program would be protected from aggressive litigation strategies by holdout creditors in numerous jurisdictions, including the US and the UK. Fourth, courts in key jurisdictions would avoid becoming overburdened by a cascade of sovereign debt litigation covering creditors and debtors across the globe. Fifth, private creditors would receive uniform treatment, ensuring intercreditor equality. Sixth and last, the mechanism would provide additional safeguards to protect emergency multilateral financing provided to tackle Covid-19.
Expectations of Sterling returning to gold have been disregarded in empirical work on the US dollar – Sterling exchange rate in the early 1920s. We incorporate such considerations in a PPP model of the exchange rate, letting the probability of a return to gold follow a logistic function. We draw several conclusions: (i) the PPP model works well from spring 1919 to spring 1925; (ii) wholesale prices outperform consumer prices; (iii) allowing for a return to gold leads to a higher speed of adjustment of the exchange rate to PPP; (iv) interest rate differentials and the relative monetary base are crucial determinants of the expected return to gold; (v) the probability of a return to gold peaked at about 72% in late 1924 but fell to about 60% in early 1925; and (vi) our preferred model does not support Keynes's view that Sterling was overvalued after the return to gold.
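The logistic specification for the probability of a return to gold can be sketched as follows; the coefficient values, signs, and inputs below are purely illustrative placeholders, not the paper's estimates:

```python
import math

def prob_return_to_gold(interest_diff: float, rel_monetary_base: float,
                        b0: float = -1.0, b1: float = 8.0, b2: float = -4.0) -> float:
    """Logistic probability of a return to gold, driven by the interest
    rate differential and the relative monetary base (illustrative only)."""
    index = b0 + b1 * interest_diff + b2 * rel_monetary_base
    return 1.0 / (1.0 + math.exp(-index))
```

By construction the probability stays strictly between 0 and 1 and moves smoothly with the fundamentals, which is what allows the modeled expectations to peak in late 1924 and recede in early 1925.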
Baby boomer retirement security: the roles of planning, financial literacy and housing wealth
(2006)
We compare wealth holdings across two cohorts of the Health and Retirement Study: the early Baby Boomers in 2004, and individuals in the same age group in 1992. Levels and patterns of total net worth have changed relatively little over time, though Boomers rely more on housing equity than their predecessors. Most important, planners in both cohorts arrive close to retirement with much higher wealth levels and display higher financial literacy than non-planners. Instrumental variables estimates show that planning behavior can explain the differences in savings and why some people arrive close to retirement with very little or no wealth. Klassifizierung: D91, E21
The paper documents lack of awareness of financial assets in the 1995 and 1998 Bank of Italy Surveys of Household Income and Wealth. It then explores the determinants of awareness, and finds that the probability that survey respondents are aware of stocks, mutual funds and investment accounts is positively correlated with education, household resources, long-term bank relations and proxies for social interaction. Lack of financial awareness has important implications for understanding the stockholding puzzle and for estimating stock market participation costs. Klassifikation: E2, D8, G1
Rating agencies state that they take a rating action only when it is unlikely to be reversed shortly afterwards. Based on a formal representation of the rating process, I show that such a policy provides a good explanation for the empirical evidence: Rating changes occur relatively seldom, exhibit serial dependence, and lag changes in the issuers’ default risk. In terms of informational losses, avoiding rating reversals can be more harmful than monitoring credit quality only twice per year.
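The reversal-averse policy can be illustrated with a toy hysteresis model, which is my own sketch and not the paper's formal representation: the agency changes the published rating only when the latent risk score exits the current rating bucket by a margin `delta`, so that a change is unlikely to be reversed shortly afterwards.

```python
import random

def latent_path(n=2000, start=2.5, step=0.05, seed=7, n_buckets=5):
    """Random-walk latent risk score, clamped to the rating scale."""
    rng = random.Random(seed)
    x, path = start, []
    for _ in range(n):
        x = min(max(x + rng.gauss(0, step), 0.0), n_buckets - 1e-9)
        path.append(x)
    return path

def rating_changes(path, delta):
    """Count published-rating changes under a hysteresis rule: update only
    when the score leaves the current integer bucket by more than delta.
    delta = 0 reproduces immediate, fully responsive updating."""
    published, changes = int(path[0]), 0
    for x in path[1:]:
        if x < published - delta or x >= published + 1 + delta:
            published = int(x)
            changes += 1
    return changes

path = latent_path()
```

On such a path, `rating_changes(path, 0.3)` is far smaller than `rating_changes(path, 0.0)`: the hysteresis rule produces rare, serially dependent rating changes that lag the latent risk, in line with the stylized facts described above.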
Our recently developed LRSX Tool implements a technique to automatically prove the correctness of program transformations in higher-order program calculi which may permit recursive let-bindings as they occur in functional programming languages. A program transformation is correct if it preserves the observational semantics of programs. In our tool the so-called diagram method is automated by combining unification, matching, and reasoning on alpha-renamings in the higher-order metalanguage, and by automating induction proofs via an encoding into termination problems of term rewrite systems. We explain the techniques, illustrate the usage of the tool, and report on experiments.
This paper deals with spelling normalization of historical texts with regard to further processing with modern part-of-speech taggers. Different methods for this task are presented and evaluated on a set of historical German texts from the 15th–18th century, and specific problems inherent to the processing of historical data are discussed. A chain combination using word-based and character-based techniques is shown to be best for normalization, while POS tagging of normalized data is shown to benefit from ignoring punctuation marks. Using these techniques, when 500 manually normalized tokens are used as training data for the normalization, the tagging accuracy of a manuscript from the 15th century can be raised from 28.65% to 76.27%.
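The word-based plus character-based chain can be sketched as a lexicon lookup with a rule fallback; the lexicon entries and rewrite rules below are invented examples, not the mappings learned in the paper:

```python
# Hypothetical two-stage normalizer: word-level lexicon lookup first,
# character-level rewrite rules as a fallback (all entries invented).
LEXICON = {"vnnd": "und", "jn": "in"}
CHAR_RULES = [("v", "u"), ("th", "t"), ("ey", "ei")]

def normalize(token: str) -> str:
    if token in LEXICON:              # word-based stage
        return LEXICON[token]
    for old, new in CHAR_RULES:       # character-based fallback
        token = token.replace(old, new)
    return token
```

In the paper's setup, the word-level stage would be trained on the 500 manually normalized tokens, and the normalized output is then passed to a modern POS tagger.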
This paper describes context analysis, an extension to strictness analysis for lazy functional languages. In particular it extends Wadler's four-point domain and permits infinitely many abstract values. A calculus is presented, based on abstract reduction, which, given the abstract values for the result, automatically finds the abstract values for the arguments. The results of the analysis are useful for verification purposes and can also be used in compilers which require strictness information.
The article is designed to introduce and analyze authoritarian constitutionalism as an important phenomenon in its own right, not merely a deficient or deviant version of liberal constitutionalism. Therefore it is not adequate to dismiss it as sham or window-dressing. Instead, its crucial features – participation as complicity, power as property and the cult of immediacy – are related to the basic assumption that authoritarian constitutions are texts with a purpose that warrant careful analysis of the domestic and transnational audience.
The papers in this volume were presented at the eleventh meeting of the Austronesian Formal Linguistics Association (AFLA 11), held from April 23-25 at the Zentrum für Allgemeine Sprachwissenschaft, Berlin, Germany. The conference was organized by Hans-Martin Gärtner, Joachim Sabel, and myself, as part of the research project Clause Structure and Adjuncts in Austronesian Languages. We gratefully acknowledge the financial support by the German Research Foundation (Deutsche Forschungsgemeinschaft). We would like to thank Wayan Arka, Agibail Cohn, Laura Downing, Silke Hamann, S J Hannahs, Ray Harlow, Nikolaus Himmelmann, Yuchua E. Hsiao, Lillian Huang, Ed Keenan, Glyne Piggott, Charles Randriamasimanana, Joszef Szakos, Barbara Stiebels, Jane Tang, Lisa Travis, Noami Tsukido, Sam Wang, Elizabeth Zeitoun, Kie Ross Zuraw, and Marzena Zygis for reviewing the abstracts. We are thankful to Mechthild Bernhard, Jenny Ehrhardt, Fabienne Fritzsche, Theódóra Torfadóttir and Tue Trinh for their help during the conference. I would like to thank Theódóra for providing essential editorial assistance.
This paper contributes to the ongoing debate on the relationship between austerity measures and economic growth. We propose a general equilibrium model where (i) agents have recursive preferences; (ii) economic growth is endogenously driven by investments in R&D; (iii) the government is committed to a zero-deficit policy and finances public expenditures by means of a combination of labor taxes and R&D taxes. We find that austerity measures that rely on reducing resources available to the R&D sector depress economic growth both in the short- and long-run. High debt EU members are currently implementing austerity measures based on higher taxes and/or lower investments in the R&D sector. This casts some doubts on the real ability of these countries to grow over the next years.
Against the background of the European debt crisis, the Research Center SAFE issued, in the fall of 2013, a call for papers on the topic “Austerity and Economic Growth: Concepts for Europe”, with the objective of soliciting research proposals on the nature of the relationship between austerity, debt sustainability and growth. Each of the five funded projects produced an academic paper and a shortened, non-technical policy brief. These policy papers are presented in this collection of policy letters, edited by Alfons Weichenrieder.
The first paper, by Alberto Alesina, Carlo Favero and Francesco Giavazzi, looks into the question of how fiscal consolidations influence the real economy. Harris Dellas and Dirk Niepelt emphasize that fiscal austerity is a signal that investors use to tell apart governments with high and low default costs, which accordingly have a high or low probability of repayment. The paper by Benjamin Born, Gernot Müller and Johannes Pfeiffer looks at the impact of austerity measures on government bond spreads. Oscar Jorda and Alan M. Taylor, in the fourth contribution, question whether the narrative records of fiscal consolidation plans are really exogenous. The final study, by Enrique Mendoza, Linda Tesar and Jing Zhang, suggests that fiscal consolidation should rely largely on expenditure cuts rather than on tax increases, which may fail when fiscal space is exhausted.
Austerity
(2014)
We shed light on the function, properties and optimal size of austerity using the standard sovereign model augmented to include incomplete information about credit risk. Austerity is defined as the shortfall of consumption from the level desired by a country and supported by its repayment capacity. We find that austerity serves as a tool for securing a more favorable loan package; that it is associated with over-investment even when investment does not create collateral; and that low-risk borrowers may prefer more to less severe austerity. These findings imply that the amount of fresh funds obtained by a sovereign is not a reliable measure of the austerity suffered, and that austerity may actually be associated with higher growth. Our analysis accommodates costly signalling for gaining credibility and also assigns a novel role to spending multipliers in the determination of optimal austerity.
This paper examines auditor liability rules under imperfect information, costly litigation and risk-averse auditors. A negligence rule fails in such a setting, because in equilibrium auditors will deviate with positive probability from any given standard. It is shown that strict liability outperforms negligence with respect to risk allocation and the probability that a desired level of care is met by the auditor if competitive liability insurance markets exist. Furthermore, our model explains the existence of insurance contracts containing obligations, a type of contract often observed in liability insurance markets.
Credit boom detection methodologies (such as the threshold method) lack robustness, as they are based on univariate detrending analysis and resort to ratios of credit to real activity. I propose a quantitative indicator that detects atypical behavior of credit from a multivariate system, a monetary VAR. This methodology explicitly accounts for endogenous interactions between credit, asset prices and real activity, and detects atypical credit expansions and contractions in the Euro Area, Japan and the U.S. robustly and in a timely manner. The analysis also proves useful in real time.
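The idea of flagging atypical credit behavior from a multivariate system can be sketched with a small VAR(1) on synthetic data; the variables, coefficients and the injected boom below are invented stand-ins for the paper's monetary VAR on actual data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for [credit, asset prices, real activity]
T, k = 300, 3
A = np.array([[0.5, 0.1, 0.2],
              [0.0, 0.6, 0.1],
              [0.1, 0.0, 0.4]])
shocks = rng.normal(0.0, 1.0, (T, k))
shocks[200:210, 0] += 3.0          # inject an atypical credit expansion
Y = np.zeros((T, k))
for t in range(1, T):
    Y[t] = A @ Y[t - 1] + shocks[t]

# Fit a VAR(1) by equation-wise OLS: Y[t] ~ Y[t-1] @ C
X, Z = Y[:-1], Y[1:]
C = np.linalg.lstsq(X, Z, rcond=None)[0]
resid = Z - X @ C

# Flag dates where the credit equation's one-step-ahead error is unusual
credit_resid = resid[:, 0]
flags = np.abs(credit_resid) > 2.0 * credit_resid.std()
```

In this toy version, dates where the credit equation's forecast error is large relative to its historical dispersion are flagged, which picks up the injected expansion around t = 200 while conditioning on the joint dynamics of all three variables.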
Asymmetric social norms
(2017)
Studies of cooperation in infinitely repeated matching games focus on homogeneous economies, where full cooperation is efficient and any defection is collectively sanctioned. Here we study heterogeneous economies where occasional defections are part of efficient play, and show how to support those outcomes through contagious punishments.
An asymmetric multivariate generalization of the recently proposed class of normal mixture GARCH models is developed. Issues of parametrization and estimation are discussed. Conditions for covariance stationarity and the existence of the fourth moment are derived, and expressions for the dynamic correlation structure of the process are provided. In an application to stock market returns, it is shown that the disaggregation of the conditional (co)variance process generated by the model provides substantial intuition. Moreover, the model exhibits a strong performance in calculating out–of–sample Value–at–Risk measures.
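A univariate two-component version of the model can be simulated to build intuition; the parameter values below are invented for the sketch, and the paper's model is the asymmetric multivariate generalization:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-component normal mixture GARCH(1,1), univariate, invented parameters.
T = 1000
lam = np.array([0.9, 0.1])            # mixing weights
omega = np.array([0.01, 0.20])        # variance intercepts per component
alpha = np.array([0.05, 0.10])        # ARCH coefficients
beta = np.array([0.90, 0.80])         # GARCH coefficients

sig2 = np.array([0.2, 2.0])           # component conditional variances
eps = np.zeros(T)
for t in range(T):
    comp = rng.choice(2, p=lam)       # draw the mixture component
    eps[t] = rng.normal(0.0, np.sqrt(sig2[comp]))
    sig2 = omega + alpha * eps[t] ** 2 + beta * sig2   # update both components
```

Each observation is drawn from one of two normal components, each with its own GARCH(1,1) variance recursion; occasional draws from the high-variance component generate the fat tails and volatility clustering that the mixture disaggregation is designed to capture.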
Based on a unique data set of driving behavior we find direct evidence that private information has significant effects on contract choice and risk in automobile insurance. The number of car rides and the relative distance driven on weekends are significant risk factors. While the number of car rides and average speeding are negatively related to the level of liability coverage, the number of car rides and the relative distance driven at night are positively related to the level of first-party coverage. These results indicate multiple and counteracting effects of private information based on risk preferences and driving behavior.
Until the late 1980s, asset securitisation was a US-American finance technique. Since then, the technique has also been used in some European countries, although to a much lesser extent. While some of them have adopted or developed their legal and regulatory frameworks, others remain at earlier stages. This may be due to a lack of economic incentives, but also to remaining regulatory or legal impediments. The following overview deals with the legal and regulatory environment in five selected European countries. It is structured as follows: first, the finance technique is outlined for the benefit of readers who may not be familiar with it. A further part reports recent developments and the underlying economic reasons driving them. The main part then deals with international aspects and gives an overview of some legal and regulatory issues in five European jurisdictions. Tax and accounting questions are, however, excluded. Concluding remarks follow.
We analyze the equilibrium in a two-tree (sector) economy with two regimes. The output of each tree is driven by a jump-diffusion process, and a downward jump in one sector of the economy can (but need not) trigger a shift to a regime where the likelihood of future jumps is generally higher. Furthermore, the true regime is unobservable, so that the representative Epstein-Zin investor has to extract the probability of being in a certain regime from the data. These two channels help us to match the stylized facts of countercyclical and excessive return volatilities and correlations between sectors. Moreover, the model reproduces the predictability of stock returns in the data without generating consumption growth predictability. The uncertainty about the state also reduces the slope of the term structure of equity. We document that heterogeneity between the two sectors with respect to shock propagation risk can lead to highly persistent aggregate price-dividend ratios. Finally, the possibility of jumps in one sector triggering higher overall jump probabilities boosts jump risk premia while uncertainty about the regime is the reason for sizeable diffusive risk premia.
This paper proposes a new approach for modeling investor fear after rare disasters. The key element is to take into account that investors’ information about fundamentals driving rare downward jumps in the dividend process is not perfect. Bayesian learning implies that beliefs about the likelihood of rare disasters drop to a much more pessimistic level once a disaster has occurred. Such a shift in beliefs can trigger massive declines in price-dividend ratios. Pessimistic beliefs persist for some time. Thus, belief dynamics are a source of apparent excess volatility relative to a rational expectations benchmark. Due to the low frequency of disasters, even an infinitely-lived investor will remain uncertain about the exact probability. Our analysis is conducted in continuous time and offers closed-form solutions for asset prices. We distinguish between rational and adaptive Bayesian learning. Rational learners account for the possibility of future changes in beliefs in determining their demand for risky assets, while adaptive learners take beliefs as given. Thus, risky assets tend to be lower-valued and price-dividend ratios vary less under adaptive versus rational learning for identical priors. Keywords: beliefs, Bayesian learning, controlled diffusions and jump processes, learning about jumps, adaptive learning, rational learning. JEL classification: D83, G11, C11, D91, E21, D81, C61
This paper analyzes how the combination of borrowing constraints and idiosyncratic risk affects the equity premium in an overlapping generations economy. I find that introducing a zero-borrowing constraint in an economy without idiosyncratic risk increases the equity premium by 70 percent, which means that the mechanism described in Constantinides, Donaldson, and Mehra (2002) is dampened because of the large number of generations and production. With social security the effect of the zero-borrowing constraint is a lot weaker. More surprisingly, when I introduce idiosyncratic labor income risk in an economy without a zero-borrowing constraint, the equity premium increases by 50 percent, even though the income shocks are independent of aggregate risk and are not permanent. The reason is that idiosyncratic risk makes the endogenous natural borrowing limits much tighter, so that they have a similar effect to an exogenously imposed zero-borrowing constraint. This intuition is confirmed when I add idiosyncratic risk in an economy with a zero-borrowing constraint: neither the equity premium nor the Sharpe ratio change, because the zero-borrowing constraint is already tighter than the natural borrowing limits that result when idiosyncratic risk is added.
We study consumption-portfolio and asset pricing frameworks with recursive preferences and unspanned risk. We show that in both cases, portfolio choice and asset pricing, the value function of the investor/representative agent can be characterized by a specific semilinear partial differential equation. To date, the solution to this equation has mostly been approximated by Campbell-Shiller techniques, without addressing general issues of existence and uniqueness. We develop a novel approach that rigorously constructs the solution by a fixed point argument. We prove that under regularity conditions a solution exists and establish a fast and accurate numerical method to solve consumption-portfolio and asset pricing problems with recursive preferences and unspanned risk. Our setting is not restricted to affine asset price dynamics. Numerical examples illustrate our approach.
In this paper, we study the effect of proportional transaction costs on consumption-portfolio decisions and asset prices in a dynamic general equilibrium economy with a financial market that has a single-period bond and two risky stocks, one of which incurs the transaction cost. Our model has multiple investors with stochastic labor income, heterogeneous beliefs, and heterogeneous Epstein-Zin-Weil utility functions. The transaction cost gives rise to endogenous variations in liquidity. We show how equilibrium in this incomplete-markets economy can be characterized and solved for in a recursive fashion. We have three main findings. One, costs for trading a stock lead to a substantial reduction in the trading volume of that stock, but have only a small effect on the trading volume of the other stock and the bond. Two, even in the presence of stochastic labor income and heterogeneous beliefs, transaction costs have only a small effect on the consumption decisions of investors, and hence, on equity risk premia and the liquidity premium. Three, the effects of transaction costs on quantities such as the liquidity premium are overestimated in partial equilibrium relative to general equilibrium.
This paper analyzes the relation between demographic structure and real asset returns on treasury bills, bonds and stocks for the G7 countries (United States, Canada, Japan, Italy, France, the United Kingdom and Germany). A macroeconomic multifactor model is used to examine a variety of demographic factors from 1951 to 2002. No robust relationship between shocks to demographic variables and asset returns was found in the framework of these models, which suggests that an asset meltdown is fiction rather than fact.
We study the life cycle of portfolio allocation by following, for 15 years, a large random sample of Norwegian households, using error-free data on all components of households’ investments drawn from the Tax Registry. Both participation in the stock market and the portfolio share in stocks have important life cycle patterns. Participation is limited at all ages but follows a hump-shaped profile which peaks around retirement; the share invested in stocks among participants is high and flat for the young, but investors start reducing it as retirement comes into sight. Our data suggest a double adjustment as people age: a rebalancing of the portfolio away from stocks as they approach retirement, and stock market exit after retirement. Existing calibrated life cycle models can account for the first behavior but not the second. We show that incorporating in these models a reasonable per-period participation cost can generate limited participation among the young but not enough exit from the stock market among the elderly. Also adding a small probability of a large loss when investing in stocks produces a joint pattern of participation and of the risky asset share that resembles the one observed in the data. A structural estimation of the relevant parameters that simultaneously targets the portfolio, participation and asset accumulation age profiles of the model reveals that the parameter combination that fits the data best is one with relatively large risk aversion, a small participation cost and a yearly large-loss probability in line with the frequency of stock market crashes in Norway.
Historical evidence like the global financial crisis from 2007-09 highlights that sector concentration risk can play an important role for the solvency of insurers. However, current microprudential frameworks like the US RBC framework and Solvency II consider only name concentration risk explicitly in their solvency capital requirements for asset concentration risk and neglect sector concentration risk. We show by means of US insurers’ asset holdings from 2009 to 2018 that substantial sectoral asset concentrations exist in the financial, public and real estate sector, and find indicative evidence for a sectoral search for yield behavior. Based on a theoretical solvency capital allocation scheme, we demonstrate that the current regulatory approaches can lead to inappropriate and biased levels of solvency capital for asset concentration risk, and should be revised. Our findings have also important implications on the ongoing discussion of asset concentration risk in the context of macroprudential insurance regulation.
This assessment concept paper provides a methodological approach for the formative assessment and summative assessment of GIZ’s International Water Stewardship Programme (IWaSP) and its component partnerships. IWaSP promotes partnerships between the private sector (corporations and SMEs), the public sector and the society to tackle shared water risks and to manage water equitably to meet competing demands. This evaluative assessment concept describes the generic approach of the assessment, the cycle for the assessment of partnerships, the country coordination and the programme.
The overall goal of the assessment is to provide evidence for taxpayers in the donor countries and for citizens in the partnership countries. It also aims to examine the relevance of the programme’s approach, its underlying assumptions, and the heterogeneity of stakeholders and their specific interests. Since the assessment is also formative feedback to GIZ and IWaSP stakeholders, it aims to guide the future implementation of the partnerships and the programme.
The assessment is guided by several generic principles: assessing for learning (formative assessment); assessment of learning (summative assessment); iteration; structuring complex problems; unblocking results; and conformity with the assessment criteria set out by the OECD Development Assistance Committee (DAC) and GIZ’s Capacity Works success factors (GTZ 2010).
These generic criteria are adapted to the three levels of the IWaSP structure. First, the assessment cycle for partnerships includes the validation of stakeholders (mapping), the analysis of secondary literature, face-to-face interviews and a process for feeding back the findings. Generic tools are provided to guide the assessment, such as a list of key documents and an interview guide. Partnerships will undergo a baseline, an interim assessment and a final assessment. As progress varies across individual IWaSP partnerships, the steps taken by each partnership to assess shared water risks and to prioritise and agree on interventions are expected to differ slightly. In response to these differences, the sequencing and content of the assessment may need to be adapted for the different partnerships.
Second, the country-level assessment considers issues such as the coordination of partnerships within a country, scoping strategies, and interaction between partnership and the programme. Information gathered during the partnership assessment feeds into the country-level assessment.
Third, the assessment cycle for the programme involves an analysis of documents and the monitoring plan, and reflection on the different perspectives of programme staff, country staff and external stakeholders.
The final section is concerned with reporting. Several annexes are provided relating to the organisation and preparation of the assessment, including question guidelines and analysis procedures.
Innovative automated execution strategies like algorithmic trading have gained significant market share on electronic market venues worldwide, although their impact on market outcomes has not yet been investigated in depth. In order to assess the impact of such concepts, e.g. effects on price formation or the volatility of prices, a simulation environment is presented that provides stylized implementations of algorithmic trading behavior and allows for modeling latency. As simulations allow for reproducing exactly the same basic situation, the impact of algorithmic trading models can be assessed by comparing simulation runs that include and exclude a trader implementing an algorithmic trading model. By this means the impact of algorithmic trading on different characteristics of market outcomes can be assessed. The results indicate that the larger the volume the algorithmic trader has to execute, the greater its impact on market prices. On the other hand, lower latency appears to lower market volatility.
We outline a procedure for consistent estimation of marginal and joint default risk in the euro area financial system. We interpret the latter risk as the intrinsic financial system fragility and derive several systemic fragility indicators for euro area banks and sovereigns, based on CDS prices. Our analysis documents that although the fragility of the euro area banking system had started to deteriorate before Lehman Brothers' bankruptcy filing, investors did not expect the crisis to affect euro area sovereigns' solvency until September 2008. Since then, and especially after November 2009, joint sovereign default risk has outpaced the rise of systemic risk within the banking system.
Financial markets embed expectations of central bank policy into asset prices. This paper compares two approaches that extract a probability density of market beliefs. The first is a simulated-moments estimator for option volatilities described in Mizrach (2002); the second is a new approach developed by Haas, Mittnik and Paolella (2004a) for fat-tailed conditionally heteroskedastic time series. In an application to the 1992-93 European Exchange Rate Mechanism crises, we find that both the options and the underlying exchange rates provide useful information for policy makers. JEL Classification: G12, G14, F31.
In more and more situations, artificially intelligent algorithms have to model the (social) preferences of the humans on whose behalf they increasingly make decisions. They can learn these preferences through repeated observation of human behavior in social encounters. In such a context, do individuals adjust the selfishness or prosociality of their behavior when it is common knowledge that their actions produce various externalities through the training of an algorithm? In an online experiment, we let participants’ choices in dictator games train an algorithm. Thereby, they create an externality on the future decision making of an intelligent system that affects future participants. We show that individuals who are aware of the consequences of their training for the payoffs of a future generation behave more prosocially, but only when they bear the risk of being harmed themselves by future algorithmic choices. In that case, the externality of artificial-intelligence training induces a significantly higher share of egalitarian decisions in the present.
With Big Data, decisions made by machine learning algorithms depend on training data generated by many individuals. In an experiment, we identify the effect of varying individual responsibility for the moral choices of an artificially intelligent algorithm. Across treatments, we manipulated the sources of training data and thus the impact of each individual’s decisions on the algorithm. Diffusing such individual pivotality for algorithmic choices increased the share of selfish decisions and weakened revealed prosocial preferences. This does not result from a change in the structure of incentives. Rather, our results show that Big Data offers an excuse for selfish behavior through lower responsibility for one’s and others’ fate.
This work investigates laryngeal and supralaryngeal correlates of the voicing contrast in alveolar obstruent production in German. It further studies the laryngeal-oral co-ordination observed for such productions. Three different positions of the obstruents are taken into account: the stressed syllable-initial position, the post-stressed intervocalic position, and the post-stressed word-final position. For the latter the phonological rule of final devoicing applies in German. The different positions are chosen in order to study the following hypotheses:
1. The presence/absence of glottal opening is not a consistent correlate of the voicing contrast in German.
2. Supralaryngeal correlates are also involved in the contrast.
3. Supralaryngeal correlates can compensate for the lack of distinction in laryngeal adjustment.
Including the word final position is motivated by the question whether neutralization in word final position would be complete or whether some articulatory residue of the contrast can be found.
Two experiments are carried out. The first experiment investigates glottal abduction in co-ordination with tongue-palate contact patterns by means of simultaneous recordings of transillumination, fiberoptic films and electropalatography (EPG). The second experiment focuses on supralaryngeal correlates of alveolar stops, studied by means of electromagnetic articulography (EMA) simultaneously with EPG. Three German native speakers participated in both recordings. Results of this study provide evidence that the first hypothesis holds true for alveolar stops when different positions are taken into account. It is also confirmed for fricative production, since both voiceless and voiced fricatives are realised with glottal abduction most of the time. Additionally, supralaryngeal correlates are involved in the voicing contrast in two respects. First, laryngeal and supralaryngeal movements are well synchronised in voiceless obstruent production, particularly in the stressed position. Second, supralaryngeal correlates occur especially in the post-stressed intervocalic position. Results are discussed with respect to the phonetics-phonology interface, the role of timing and its possible control, interarticulatory co-ordination, and stress as 'localised hyperarticulation'.
We provide a novel benefit of "Alternative Risk Transfer" (ART) products with parametric or index triggers. When a reinsurer has private information about his client's risk, outside reinsurers will price their reinsurance offer less aggressively. Outsiders are subject to adverse selection, as only a high-risk insurer might find it optimal to change reinsurers. This creates a hold-up problem that allows the incumbent to extract an information rent. An information-insensitive ART product with a parametric or index trigger is not subject to adverse selection. It can therefore be used to compete against an informed reinsurer, thereby reducing the premium that a low-risk insurer has to pay for the indemnity contract. However, ART products exhibit an interesting fate in our model, as they are useful, but not used in equilibrium because of basis risk. JEL Classification: D82, G22
Employing the art-collection records of Burton and Emily Hall Tremaine, we consider whether early-stage art investors can be understood as venture capitalists. Because the Tremaines bought artists’ work very close to an artwork’s creation, with 69% of the works in our study purchased within one year of their creation, their collecting practice can best be framed as venture-capital investment in art. The Tremaines also illustrate art collecting as social-impact investment, owing to their combined strategy of art sales and museum donations, for which the collectors received a tax credit under US rules. Because the Tremaines’ museum donations took place at a time when U.S. marginal tax rates ranged from 70% to 91%, donations achieved near “donation parity” with market sales, creating a parallel to ESG investment in the management of multiple forms of value.
This chapter analyzes the risk and return characteristics of investments in artists from the Middle East and Northern Africa (MENA) region over the sample period 2000 to 2012. With hedonic regression modeling we create an annual index that is based on 3,544 paintings created by 663 MENA artists. Our empirical results show that investing in such a hypothetical index provides strong financial returns. While the results show an exponential growth in sales since 2006, the geometric annual return of the MENA art index is a stable 13.9 percent over the whole period. We conclude that investing in MENA paintings would have been profitable, but also note that we examined the performance of an emerging art market that has so far only seen an upward trend without any correction.
The pressure on tax haven countries to engage in tax information exchange shows first effects on capital markets. Empirical research suggests that investors do react to information exchange and partially withdraw from previous secrecy jurisdictions that open up to information exchange. While some of the economic literature emphasizes possible positive effects of tax havens, the present paper argues that proponents of positive effects may have started from questionable premises, in particular when it comes to the effects that tax havens have for emerging markets like China and India.
The issuance of sustainability-linked loans (SLLs) has grown exponentially in recent years. Using a scoring methodology, we examine the underlying key performance indicators of a large sample of SLLs and analyze whether their design creates effective incentives for improving corporate sustainability performance. We demonstrate that the majority of loans fail to meet key requirements that would make them credible instruments for generating effective sustainability incentives. These findings call into question the actual sustainability impact that may be achieved through the issuance of ESG-linked debt.
Earlier studies of the seigniorage inflation model have found that the high-inflation steady state is not stable under adaptive learning. We reconsider this issue and analyze the full set of solutions for the linearized model. Our main focus is on stationary hyperinflationary paths near the high-inflation steady state. The hyperinflationary paths are stable under learning if agents can utilize contemporaneous data. However, in an economy populated by a mixture of agents, some of whom only have access to lagged data, stable inflationary paths emerge only if the proportion of agents with access to contemporaneous data is sufficiently high. JEL Classification: C62, D83, D84, E31
Are rules and boundaries sufficient to limit harmful central bank discretion? Lessons from Europe
(2014)
Marvin Goodfriend’s (2014) insightful, informative and provocative work explains concisely and convincingly why the Fed needs rules and boundaries. This paper reviews the broader institutional design problem regarding the effectiveness of the central bank in practice and confirms the need for rules and boundaries. The framework proposed for improving the Fed incorporates key elements that have already been adopted in the European Union. The case of ELA provision by the ECB and the Central Bank of Cyprus to Marfin-Laiki Bank during the crisis, however, suggests that the existence of rules and boundaries may not be enough to limit harmful discretion. During a crisis, novel interpretations of the legal authority of the central bank may be introduced to create a grey area that might be exploited to justify harmful discretionary decisions even in the presence of rules and boundaries. This raises the question of how to ensure that rules and boundaries are respected in practice.
This paper investigates whether preference interactions can explain why risk preferences change over time and across contexts. We conduct an experiment in which subjects accept or reject gambles involving real money gains and losses. We introduce within-subject variation by alternating subjectively liked and disliked music in the background. We find that favourite music increases risk-taking and disliked music suppresses risk-taking, compared to a baseline of no music. Several theories in psychology propose mechanisms by which mood affects risk-taking, but none of them fully explains our results. The results are, however, consistent with preference complementarities that extend to risk preference.
Are product spreads useful for forecasting? An empirical evaluation of the Verleger hypothesis
(2013)
Notwithstanding a resurgence in research on out-of-sample forecasts of the price of oil in recent years, there is one important approach to forecasting the real price of oil which has not been studied systematically to date. This approach is based on the premise that demand for crude oil derives from the demand for refined products such as gasoline or heating oil. Oil industry analysts such as Philip Verleger and financial analysts widely believe that there is predictive power in the product spread, defined as the difference between suitably weighted refined product market prices and the price of crude oil. Our objective is to evaluate this proposition. We derive from first principles a number of alternative forecasting model specifications involving product spreads and compare these models to the no-change forecast of the real price of oil. We show that not all product spread models are useful for out-of-sample forecasting, but some models are, even at horizons between one and two years. The most accurate model is a time-varying parameter model of gasoline and heating oil spot spreads that allows the marginal product market to change over time. We document MSPE reductions as high as 20% and directional accuracy as high as 63% at the two-year horizon, making product spread models a good complement to forecasting models based on economic fundamentals, which work best at short horizons.
The objective of this study is to determine whether specific industries across countries or within countries are more likely to reach a stage of profitability and make a successful exit. In particular, we assess whether firms in certain industries are more prone to exit via IPO, be acquired, or exit through a leveraged buy-out. We are also interested in analyzing whether substantial differences across industries and countries arise when looking separately at the success rate of firms which received venture funding at the early seed and start-up stages, vis-à-vis firms that received funding at later stages. Our results suggest that, inasmuch as some of the differences in performance can be explained by country-specific factors, there are also important idiosyncratic differences across industries: In particular, firms in the biotech and the medical / health / life science sectors tend to be significantly more likely to have a successful exit via IPO, while firms in the computer industry and communications and media are more prone to exit via merger or acquisition. Key differences across industries also emerge when considering infant versus mature firms, and their preferred exit. JEL Classification: G24, G3
This paper aims to analyze the impact of different types of venture capitalists on the performance of their portfolio firms around and after the IPO. We thereby investigate the hypothesis that the different governance structures, objectives and track records of different types of VCs have a significant impact on their respective IPOs. We explore this hypothesis by using a data set embracing all IPOs which occurred on Germany's Neuer Markt. Our main finding is that significant differences among the different VCs exist. Firms backed by independent VCs perform significantly better two years after the IPO compared to all other IPOs, and their share prices fluctuate less than those of their counterparts in this period of time. Obviously, independent VCs, which concentrated mainly on growth stocks (low book-to-market ratio) and large firms (high market value), were able to add value by leading to less post-IPO idiosyncratic risk and more return (after controlling for all other effects). On the contrary, firms backed by public VCs (being small and having a high book-to-market ratio) showed relative underperformance. JEL Classification: G10, G14, G24
This paper sets out to analyze the influence of different types of venture capitalists on the performance of their portfolio firms around and after IPO. We investigate the hypothesis that different governance structures, objectives, and track records of different types of VCs have a significant impact on their respective IPOs. We explore this hypothesis using a data set embracing all IPOs that have occurred on Germany's Neuer Markt. Our main finding is that significant differences among the different VCs exist. Firms backed by independent VCs perform significantly better two years after IPO as compared to all other IPOs, and their share prices fluctuate less than those of their counterparts in this period of time. On the contrary, firms backed by public VCs show relative underperformance. The fact that this could occur implies that market participants did not correctly assess the role played by different types of VCs.
It is well known that artificial neural nets can be used as approximators of any continuous function to any desired degree. Nevertheless, for a given application and a given network architecture the non-trivial task remains of determining the necessary number of neurons and the necessary accuracy (number of bits) per weight for satisfactory operation. In this paper the problem is treated by an information-theoretic approach. The values for the weights and thresholds in the approximator network are determined analytically. Furthermore, the accuracy of the weights and the number of neurons are seen as general system parameters which determine the maximal output information (i.e. the approximation error) through the absolute amount and the relative distribution of information contained in the network. A new principle of optimal information distribution is proposed and the conditions for the optimal system parameters are derived. For the simple, instructive example of a linear approximation of a non-linear, quadratic function, the principle of optimal information distribution gives the optimal system parameters, i.e. the number of neurons and the different resolutions of the variables.
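The trade-off the abstract describes can be made concrete with a toy analogue of its own example. The sketch below is not the paper's method: it simply quantizes the coefficients of a least-squares linear fit to f(x) = x² on [0, 1] and measures how the worst-case approximation error depends on the bit budget per weight; the quantization range [-2, 2] and the evaluation grid are arbitrary assumptions for illustration.

```python
# Toy analogue of the accuracy-per-weight trade-off: quantize the
# coefficients of a linear approximation to f(x) = x^2 on [0, 1] and
# measure the worst-case error as the per-weight bit budget varies.
# The range [-2, 2] and the grid are illustrative assumptions.

def quantize(w, bits, lo=-2.0, hi=2.0):
    """Uniformly quantize w onto a (2**bits)-level grid over [lo, hi]."""
    levels = 2 ** bits
    step = (hi - lo) / (levels - 1)
    return lo + round((w - lo) / step) * step

A, B = 1.0, -1.0 / 6.0            # continuous least-squares linear fit to x^2 on [0, 1]
XS = [i / 200 for i in range(201)]  # evaluation grid on [0, 1]

def max_error(bits):
    """Worst-case |a_q * x + b_q - x^2| over [0, 1] at the given weight resolution."""
    aq, bq = quantize(A, bits), quantize(B, bits)
    return max(abs(aq * x + bq - x * x) for x in XS)
```

With a generous bit budget the error settles at the intrinsic residual of the linear fit (about 1/6 at the interval endpoints), while a very small budget adds substantial quantization error on top; balancing such contributions is the kind of trade-off the proposed principle of optimal information distribution addresses.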
Approaching the grammar of adjuncts : proceedings of the Oslo conference, September 22 - 25, 1999
(2000)
This paper identifies some common errors that occur in comparative law, offers some guidelines to help avoid such errors, and provides a framework for entering into studies of the company laws of three major jurisdictions. The first section illustrates why a conscious approach to comparative company law is useful. Part I discusses some of the problems that can arise in comparative law and offers a few points of caution that can be useful for practical, theoretical and legislative comparative law. Part II discusses some relatively famous examples of comparative analysis gone astray in order to demonstrate the utility of heeding the outlined points of caution. The second section offers a framework for approaching comparative company law. Part III provides an example of using functional definition to demarcate the topic "company law", offering an "effects" test to determine whether a given provision of law should be considered as functionally part of the rules that govern the core characteristics of companies. It does this by presenting the relevant company law statutes and related topical laws of Germany, the United Kingdom and the United States, using Delaware as a proxy for the 50 states. On the basis of this definition, Part IV analyzes the system of legal functions that comprises "company law" in the United States and the European Union. It selects as the predominant factor for consideration the jurisdictions, sub-jurisdictions and rule-making entities that have legislative or rule-making competence in the relevant territorial unit, analyzes the extent of their power, presents the type of law (rules) they enact (issue), and discusses the concrete manner in which the laws and rules of the jurisdictions and sub-jurisdictions can legally interact. 
Part V looks at the way these jurisdictions do interact on the temporal axis of history, that is, their actual influence on each other, which in the relevant jurisdictions currently takes the form of regulatory competition and legislative harmonization. The method of the approach outlined in this paper borrows much from system theory. The analysis attempts to be detailed without losing track of the overall jurisdictional framework in the countries studied.
Motivated by the question whether sound and expressive applicative similarities for program calculi with should-convergence exist, this paper investigates expressive applicative similarities for the untyped call-by-value lambda calculus extended with McCarthy's ambiguous choice operator amb. Soundness of the applicative similarities w.r.t. contextual equivalence based on may- and should-convergence is proved by adapting Howe's method to should-convergence. As usual for nondeterministic calculi, similarity is not complete w.r.t. contextual equivalence, which requires a rather complex counterexample as a witness. The call-by-value lambda calculus with the weaker nondeterministic construct erratic choice is also analyzed and sound applicative similarities are provided. This justifies the expectation that sound and powerful similarities for should-convergence also exist for more expressive and call-by-need higher-order calculi.
Traditional least squares estimates of the responsiveness of gasoline consumption to changes in gasoline prices are biased toward zero, given the endogeneity of gasoline prices. A seemingly natural solution to this problem is to instrument for gasoline prices using gasoline taxes, but this approach tends to yield implausibly large price elasticities. We demonstrate that anticipatory behavior provides an important explanation for this result. We provide evidence that gasoline buyers increase gasoline purchases before tax increases and delay gasoline purchases before tax decreases. This intertemporal substitution renders the tax instrument endogenous, invalidating conventional IV analysis. We show that including suitable leads and lags in the regression restores the validity of the IV estimator, resulting in much lower and more plausible elasticity estimates. Our analysis has implications more broadly for the IV analysis of markets in which buyers may store purchases for future consumption.
Substantial research attention has been devoted to the pension accumulation process, whereby employees and those advising them work to accumulate funds for retirement. Until recently, less analysis has been devoted to the pension decumulation process – the process by which retirees finance their consumption during retirement. This gap has recently begun to be filled by an active group of researchers examining key aspects of the pension payout market. One of the areas of most interesting investigation has been in the area of annuities, which are financial products intended to cover the risk of retirees outliving their assets. This paper reviews and extends recent research examining the role of annuities in helping finance retirement consumption. We also examine key market and regulatory factors.
We investigate the relationship between anchoring and the emergence of bubbles in experimental asset markets. We show that setting a visual anchor at the fundamental value (FV) in the first period only is sufficient to eliminate or to significantly reduce bubbles in laboratory asset markets. If no FV-anchor is set, bubble-crash patterns emerge. Our results indicate that bubbles in laboratory environments are primarily sparked in the first period. If prices are initiated around the FV, they stay close to the FV over the entire trading horizon. Our insights can be related to initial public offerings and the interaction between prices set on pre-opening markets and subsequent intra-day price dynamics.
This paper constructs a dynamic model of health insurance to evaluate the short- and long run effects of policies that prevent firms from conditioning wages on health conditions of their workers, and that prevent health insurance companies from charging individuals with adverse health conditions higher insurance premia. Our study is motivated by recent US legislation that has tightened regulations on wage discrimination against workers with poorer health status (Americans with Disability Act of 2009, ADA, and ADA Amendments Act of 2008, ADAAA) and that will prohibit health insurance companies from charging different premiums for workers of different health status starting in 2014 (Patient Protection and Affordable Care Act, PPACA). In the model, a trade-off arises between the static gains from better insurance against poor health induced by these policies and their adverse dynamic incentive effects on household efforts to lead a healthy life. Using household panel data from the PSID we estimate and calibrate the model and then use it to evaluate the static and dynamic consequences of no-wage discrimination and no-prior conditions laws for the evolution of the cross-sectional health and consumption distribution of a cohort of households, as well as ex-ante lifetime utility of a typical member of this cohort. In our quantitative analysis we find that although a combination of both policies is effective in providing full consumption insurance period by period, it is suboptimal to introduce both policies jointly since such policy innovation induces a more rapid deterioration of the cohort health distribution over time. This is due to the fact that combination of both laws severely undermines the incentives to lead healthier lives. 
The resulting negative effects on health outcomes in society more than offset the static gains from better consumption insurance so that expected discounted lifetime utility is lower under both policies, relative to only implementing wage nondiscrimination legislation.
Analyzing interest rate risk: stochastic volatility in the term structure of government bond yields
(2009)
We propose a Nelson-Siegel type interest rate term structure model where the underlying yield factors follow autoregressive processes with stochastic volatility. The factor volatilities parsimoniously capture risk inherent to the term structure and are associated with the time-varying uncertainty of the yield curve’s level, slope and curvature. Estimating the model based on U.S. government bond yields applying Markov chain Monte Carlo techniques we find that the factor volatilities follow highly persistent processes. We show that slope and curvature risk have explanatory power for bond excess returns and illustrate that the yield and volatility factors are closely related to industrial capacity utilization, inflation, monetary policy and employment growth. JEL Classification: C5, E4, G1
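For readers unfamiliar with the level, slope and curvature factors mentioned above, the standard dynamic Nelson-Siegel decomposition (in the textbook Diebold-Li form, not necessarily the paper's exact parameterization) expresses the yield at maturity τ as:

```latex
% Dynamic Nelson-Siegel yield curve: level (beta_1), slope (beta_2),
% curvature (beta_3); lambda governs the decay of the factor loadings.
y_t(\tau) = \beta_{1t}
          + \beta_{2t}\,\frac{1 - e^{-\lambda \tau}}{\lambda \tau}
          + \beta_{3t}\left(\frac{1 - e^{-\lambda \tau}}{\lambda \tau} - e^{-\lambda \tau}\right)
```

The model in the abstract lets the β factors follow autoregressive processes whose innovation variances are themselves stochastic, which is what ties factor volatility to level, slope and curvature risk.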
In recent econometric work, most analyses of female labour supply consider married women, whereas results for unmarried women are provided rather as a by-product (Burtless/Greenberg, 1982, Johnson/Pencavel, 1984, Leu/Kugler, 1986, Merz, 1990). When the particular interest is focused on unmarried women, data from the seventies or rather simple econometric models are used (Keeley et al., 1978, Hausman, 1980, Coverman/Kemp, 1987). Often very specific populations are examined, like for example lone mothers in Blundell/Duncan/Meghir (1992), Jenkins (1992), Staat/Wagenhals (1993) or Laisney et al. (1993). In analysing the economic behaviour of unmarried women, one is confronted with the problem that the term ‘unmarried’ is not clearly defined. It includes single, divorced, separated and widowed women. They live in different types of households, like one-person households or family households, where they occupy different economic positions, for example head of the household or relative of the head. The present work considers unmarried female heads of household. We assume that the dominant economic position as head of household, voluntarily or involuntarily occupied, forces these women towards similar behaviour independent of their family status. Thus women from the different family statuses (single, divorced, separated and widowed) are taken together in the analysis. Being unmarried is often regarded as a temporary state, voluntary or involuntary, for example in the case of young women before marriage or in the case of divorced women after their separation. Nevertheless, the demographic development shows the increased importance of unmarried women in the population during the last decades. In the USA the share of female-headed households rose from 21.1% in 1970 to 26.2% in 1980 and 29.0% in 1992 (Statistical Abstracts of the United States, 1993; own calculations).
In the FRG, female-headed households constituted 26.4% of total households in 1970, 27.4% in 1980 and 30.1% in 1992 (Stat. Bundesamt, FS 1, Reihe 3, 1970, 1980, 1992). It therefore seems an interesting topic to analyse the labour supply behaviour of unmarried female heads. Of particular interest is the question of whether the labour supply of unmarried women resembles rather that of married women or that of prime-age males. Another purpose of this analysis is to apply modern econometric panel data models with special emphasis on the problem of unbalanced panel data. Most panel data analyses are carried out using balanced panel data, which is no problem if the selection process can be ignored and if enough cases are available to guarantee efficient estimation. Especially the last point was crucial for the present analysis of unmarried females. In the available panel data sets the unmarried female heads constitute only a rather small population. Therefore the estimation techniques were modified to take missing observations of the individuals into account. The paper is organized as follows: In section 2 the underlying theoretical model of intertemporal labour supply under uncertainty is briefly presented. Section 3 deals with the econometric specification and estimation techniques, where the use of unbalanced panel data is considered. Section 4 contains the data description with a particular look at the unbalancedness of the samples. In the last section 5 the empirical results are presented. We compare the estimated parameters for the unmarried women between the USA and the FRG and also analyse the differences between unmarried and married women. Moreover, a comparison between different samples of unmarried women is provided.
Analysis of Lambda and associative pion production in relativistic nucleus-nucleus collisions
(1984)
This paper proposes the Shannon entropy as an appropriate one-dimensional measure of behavioural trading patterns in financial markets. The concept is applied to the illustrative example of algorithmic vs. non-algorithmic trading and empirical data from Deutsche Börse's electronic cash equity trading system, Xetra. The results reveal pronounced differences between algorithmic and non-algorithmic traders. In particular, trading patterns of algorithmic traders exhibit a medium degree of regularity while non-algorithmic trading tends towards either very regular or very irregular trading patterns. JEL Classification: C40, D0, G14, G15, G20
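To make the proposed measure concrete: the Shannon entropy of an empirical distribution of trading events can be computed in a few lines. The event categories below (bucketed order types) are a hypothetical stand-in for whatever pattern features the paper extracts from Xetra data; they are not from the paper itself.

```python
import math
from collections import Counter

def shannon_entropy(events):
    """Shannon entropy (in bits) of the empirical distribution of a list of events."""
    counts = Counter(events)
    n = len(events)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical event sequences: a perfectly regular pattern has entropy 0,
# while a pattern spread evenly over k categories has entropy log2(k).
regular = ["limit"] * 8                               # entropy 0.0 bits
mixed   = ["limit", "market", "cancel", "amend"] * 2  # entropy 2.0 bits
```

In the paper's terms, very low entropy corresponds to the very regular patterns seen for some non-algorithmic traders, maximal entropy to very irregular ones, and the medium values to the pattern regularity attributed to algorithmic traders.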
A feature of the Northern Iroquoian languages is their especially rich inventory of particles. This paper is concerned with one particle in the Cayuga language which has a widespread distribution and performs a broad range of apparently unrelated functions. The particle ne:' is commonly translated as 'it is/that is', 'this' or 'that'. In other instances it is translated as predominant stress, or is simply omitted in the translation. The particle can occur in almost any syntactic or semantic environment, but it is not obligatory in any context. The various functions that have been suggested in the literature include indication of declarative mood and assertion, marking of emphasis, focus or contrast, and expression of predicative and deictic force. I argue that the particle ne:' can be described successfully if its distribution is considered from a wider perspective, taking into account discourse structure and variation in scope. Its analysis as a focus marker can account for the variety of apparently unrelated functions. The analysis is based on a detailed study of the particle's distribution in spoken language using a database of five Cayuga texts by four different speakers, including three narratives, one procedural text and a children's version of a ceremonial text.