This paper is the first to conduct an incentive-compatible experiment using real monetary payoffs to test the hypothesis of probabilistic insurance which states that willingness to pay for insurance decreases sharply in the presence of even small default probabilities as compared to a risk-free insurance contract. In our experiment, 181 participants state their willingness to pay for insurance contracts with different levels of default risk. We find that the willingness to pay sharply decreases with increasing default risk. Our results hence strongly support the hypothesis of probabilistic insurance. Furthermore, we study the impact of customer reaction to default risk on an insurer’s optimal solvency level using our experimentally obtained data on insurance demand. We show that an insurer should choose to be default-free rather than having even a very small default probability. This risk strategy is also optimal when assuming substantial transaction costs for risk management activities undertaken to achieve the maximum solvency level.
The main thesis of this dissertation is that Northern Sotho makes no obligatory use of grammatical means to mark focus, neither in syntax nor in prosody or morphology. Nevertheless, the language structures an utterance according to information-structural considerations. Constituents that are given in the discourse are either deleted, pronominalized, or moved to the right or left edge of the clause. These (morpho-)syntactic processes interact in such a way that the focused constituent often appears clause-finally. Although the final position is not a designated focus position, knowledge of this tendency is nevertheless crucial for understanding a morphological alternation that appears on the verb in Northern Sotho and that has been discussed in the literature in connection with focus.
Thus, although Northern Sotho lacks a direct grammatical expression of formal F(ocus)-marking, F-marking is nevertheless crucial for the grammar of the language: focused logical subjects cannot appear in the canonical preverbal position. Instead, they appear either postverbally or in a cleft sentence, depending on the valency of the verb. Although Northern Sotho shows a correspondence of complex form with complex meaning in the use of clefts with objects, this correspondence does not hold for logical subjects.
The present dissertation models these findings within the theoretical framework of Optimality Theory (OT). Syntactic in-situ focus and the absence of prosodic focus marking can be captured with uncontroversial constraints. For the ungrammaticality of focused logical subjects in preverbal position, the present work proposes a modification of a constraint from the literature that is of crucial importance in Northern Sotho. The form-meaning correspondence is treated, like other phenomena involving a pragmatic division of labour, within weakly bidirectional Optimality Theory.
Biodiversity loss poses a significant threat to the global economy and affects ecosystem services on which most large companies rely heavily. The severe financial implications of reduced species diversity have attracted the attention of companies and stakeholders, with numerous calls to increase corporate transparency. Using textual analysis, this study therefore investigates the current state of voluntary biodiversity reporting of 359 European blue-chip companies and assesses the extent to which it aligns with the upcoming disclosure framework of the Task Force on Nature-related Financial Disclosures (TNFD). The descriptive results suggest a substantial gap between current reporting practices and the proposed TNFD framework, with disclosures largely lacking quantification, detail and clear targets. In addition, the disclosures appear to be relatively unstandardized. Companies in sectors or regions exposed to higher nature-related risks, as well as larger companies, are more likely to report on aspects of biodiversity. This study contributes to the emerging literature on nature-related risks and provides detailed insights into the extent of the reporting gap in light of the upcoming standards.
To monitor one's speech means to check the speech plan for errors, both before and after talking. There are several theories as to how this process works. We give a short overview of the most influential theories and then focus on the most widely received one, the Perceptual Loop Theory of monitoring by Levelt (1983). One of the underlying assumptions of this theory is the existence of an Inner Loop, a monitoring device that checks for errors before speech is articulated. This paper collects evidence for the existence of such an internal monitoring device and asks how it might work. Levelt's theory argues that internal monitoring works by means of perception, but other empirical findings allow for the assumption that an Inner Loop could also use our speech production devices. Based on data from both experimental and aphasiological studies, we develop a model based on Levelt (1983) which shows that internal monitoring might in fact make use of both perception and production.
With free delivery of products virtually being a standard in E-commerce, product returns pose a major challenge for online retailers and society. For retailers, product returns involve significant transportation, labor, disposal, and administrative costs. From a societal perspective, product returns contribute to greenhouse gas emissions and packaging disposal and are often a waste of natural resources. Therefore, reducing product returns has become a key challenge. This paper develops and validates a novel smart green nudging approach to tackle the problem of product returns during customers’ online shopping processes. We combine a green nudge with a novel data enrichment strategy and a modern causal machine learning method. We first run a large-scale randomized field experiment in the online shop of a German fashion retailer to test the efficacy of a novel green nudge. Subsequently, we fuse the data from about 50,000 customers with publicly-available aggregate data to create what we call enriched digital footprints and train a causal machine learning system capable of optimizing the administration of the green nudge. We report two main findings: First, our field study shows that the large-scale deployment of a simple, low-cost green nudge can significantly reduce product returns while increasing retailer profits. Second, we show how a causal machine learning system trained on the enriched digital footprint can amplify the effectiveness of the green nudge by “smartly” administering it only to certain types of customers. Overall, this paper demonstrates how combining a low-cost marketing instrument, a privacy-preserving data enrichment strategy, and a causal machine learning method can create a win-win situation from both an environmental and economic perspective by simultaneously reducing product returns and increasing retailers’ profits.
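As a rough illustration of the targeting step described in the abstract above, the following sketch fits a simple two-model ("T-learner") uplift estimator on simulated data and administers the nudge only where the predicted effect is a reduction in returns. The variable names, the simulated data and the use of scikit-learn are assumptions made for illustration; the paper's actual causal machine learning system and enriched digital footprints are not reproduced here.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Hypothetical customer features (order value, basket size, enriched regional aggregates, ...)
X = rng.normal(size=(1000, 5))
# Random assignment of the green nudge, as in a randomized field experiment (1 = nudge shown)
nudge = rng.integers(0, 2, size=1000)
# Simulated return propensity: here the nudge only helps customers with X[:, 0] > 0
returns = 0.3 - 0.05 * nudge * (X[:, 0] > 0) + 0.1 * rng.normal(size=1000)

# T-learner: fit separate outcome models for nudged and non-nudged customers
m_treated = GradientBoostingRegressor().fit(X[nudge == 1], returns[nudge == 1])
m_control = GradientBoostingRegressor().fit(X[nudge == 0], returns[nudge == 0])

# Estimated individual effect of the nudge on return propensity;
# "smart" administration targets only customers with a predicted reduction
tau_hat = m_treated.predict(X) - m_control.predict(X)
target = tau_hat < 0
print(f"Share of customers targeted with the nudge: {target.mean():.2f}")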
By focusing on the cost conditions at issuance, I find not only that the effects of the Covid-19 pandemic differed across bonds and firms at different stages, but also that the market composition was significantly affected, collapsing onto investment-grade bonds, a segment in which the share of bonds eligible for the ECB corporate programmes strikingly increased from 15% to 40%. At the same time the high-yield segment shrank to almost disappear, at 4%. In addition to a market segmentation along the bond grade and eligibility for the ECB programmes, another source of risk detected in the pricing mechanism is weak resilience to the pandemic: the premium requested is around 30 basis points and started to be priced only after the early containment actions taken by the national authorities. By contrast, I do not find evidence supporting an increased risk for corporations headquartered in countries with reduced fiscal space, nor the existence of a premium in favour of green bonds, which should be the backbone of a possible “green recovery”.
We assess the degree of market fragmentation in the euro-area corporate bond market by disentangling the determinants of the risk premium paid on bonds at origination. By looking at over 2,400 bonds we are able to isolate country-specific effects, which are a suitable indicator of market fragmentation. We find that, after peaking during the sovereign debt crisis, fragmentation shrank in 2013 and receded to pre-crisis levels only in 2014. However, the low level of estimated market fragmentation is coupled with still high heterogeneity in actual bond yields, challenging the consistency of the new equilibrium.
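An illustrative specification consistent with this description (the notation is assumed here, not taken from the paper) regresses the spread at origination on bond characteristics and country dummies and reads the dispersion of the estimated country effects as the fragmentation indicator:

s_i = \alpha_{c(i)} + \beta' X_i + \varepsilon_i ,

where s_i is the risk premium of bond i at origination, X_i collects bond- and issuer-level controls, and \alpha_{c(i)} is the fixed effect of the issuer's country c(i); low cross-country dispersion of \hat{\alpha}_c indicates low fragmentation.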
We analyze the risk premium on bank bonds at origination with a special focus on the role of implicit and explicit public guarantees and the systemic relevance of the issuing institutions. By looking at the asset swap spread on 5,500 bonds, we find that explicit guarantees and sovereign creditworthiness have a substantial effect on the risk premium. In addition, while large institutions still enjoy lower issuance costs linked to the TBTF framework, we find evidence of enhanced market discipline for systemically important banks, which have faced, since the onset of the financial crisis, an increased premium on bond placements.
Unconventional green
(2023)
We analyze the effects of the PEPP (Pandemic Emergency Purchase Programme), the temporary quantitative easing implemented by the ECB immediately after the outbreak of the Covid-19 pandemic. We show that the differences in aim, size and flexibility with respect to the traditional Corporate Sector Purchase Programme (CSPP) were able to significantly involve, in addition to the directly targeted bonds, also the green bond segment. Via a standard difference-in-differences model we estimate that the yield on green bonds declined by more than 20 basis points after the PEPP. In order to take into account also the differences attributable to eligibility for the programme, we employ a triple difference estimator. Bonds that were at the same time green and eligible benefited from an additional premium of 39 basis points.
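A minimal sketch of the two estimators mentioned above, with assumed notation (the paper's exact controls and sample definitions are not reproduced):

y_{it} = \alpha_i + \lambda_t + \delta \,(Green_i \times Post_t) + \varepsilon_{it} ,

where \delta captures the post-PEPP change in green-bond yields relative to other bonds. The triple-difference version adds eligibility,

y_{it} = \alpha_i + \lambda_t + \delta \,(Green_i \times Post_t) + \theta \,(Green_i \times Eligible_i \times Post_t) + \gamma' D_{it} + \varepsilon_{it} ,

with D_{it} collecting the remaining two-way interactions and \theta measuring the additional effect for bonds that are both green and eligible.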
Chen and Zadrozny (1998) developed the linear extended Yule-Walker (XYW) method for determining the parameters of a vector autoregressive (VAR) model with available covariances of mixed-frequency observations on the variables of the model. If the parameters are determined uniquely for available population covariances, then, the VAR model is identified. The present paper extends the original XYW method to an extended XYW method for determining all ARMA parameters of a vector autoregressive moving-average (VARMA) model with available covariances of single- or mixed-frequency observations on the variables of the model. The paper proves that under conditions of stationarity, regularity, miniphaseness, controllability, observability, and diagonalizability on the parameters of the model, the parameters are determined uniquely with available population covariances of single- or mixed-frequency observations on the variables of the model, so that the VARMA model is identified with the single- or mixed-frequency covariances.
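For orientation, the classical Yule-Walker relations that the XYW and extended XYW methods generalize can be sketched for a VAR(1), y_t = A y_{t-1} + \varepsilon_t, with autocovariances \Gamma(k) = E[y_t y_{t-k}'] (this is background notation, not the paper's mixed-frequency formulation):

\Gamma(k) = A\,\Gamma(k-1), \quad k \ge 1, \qquad \text{so that} \qquad A = \Gamma(1)\,\Gamma(0)^{-1} .

The extended method instead works with covariances of single- or mixed-frequency observations and covers full VARMA dynamics.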
Linear rational-expectations models (LREMs) are conventionally "forwardly" estimated as follows. Structural coefficients are restricted by economic restrictions in terms of deep parameters. For given deep parameters, structural equations are solved for "rational-expectations solution" (RES) equations that determine endogenous variables. For given vector autoregressive (VAR) equations that determine exogenous variables, RES equations reduce to reduced-form VAR equations for endogenous variables with exogenous variables (VARX). The combined endogenous-VARX and exogenous-VAR equations comprise the reduced-form overall VAR (OVAR) equations of all variables in a LREM. The sequence of specified, solved, and combined equations defines a mapping from deep parameters to OVAR coefficients that is used to forwardly estimate a LREM in terms of deep parameters. Forwardly-estimated deep parameters determine forwardly-estimated RES equations that Lucas (1976) advocated for making policy predictions in his critique of policy predictions made with reduced-form equations.
Sims (1980) called economic identifying restrictions on deep parameters of forwardly-estimated LREMs "incredible", because he considered in-sample fits of forwardly-estimated OVAR equations inadequate and out-of-sample policy predictions of forwardly-estimated RES equations inaccurate. Sims (1980, 1986) instead advocated directly estimating OVAR equations restricted by statistical shrinkage restrictions and directly using the directly-estimated OVAR equations to make policy predictions. However, if assumed or predicted out-of-sample policy variables in directly-made policy predictions differ significantly from in-sample values, then, the out-of-sample policy predictions won't satisfy Lucas's critique.
If directly-estimated OVAR equations are reduced-form equations of underlying RES and LREM-structural equations, then, identification 2 derived in the paper can linearly "inversely" estimate the underlying RES equations from the directly-estimated OVAR equations and the inversely-estimated RES equations can be used to make policy predictions that satisfy Lucas's critique. If Sims considered directly-estimated OVAR equations to fit in-sample data adequately (credibly) and their inversely-estimated RES equations to make accurate (credible) out-of-sample policy predictions, then, he should consider the inversely-estimated RES equations to be credible. Thus, inversely-estimated RES equations by identification 2 can reconcile Lucas's advocacy for making policy predictions with RES equations and Sims's advocacy for directly estimating OVAR equations.
The paper also derives identification 1 of structural coefficients from RES coefficients that contributes mainly by showing that directly estimated reduced-form OVAR equations can have underlying LREM-structural equations.
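A compact sketch of the forward mapping described in the first paragraph above, in generic notation assumed here for illustration (not the paper's):

Structural LREM: \quad A_0 y_t = A_1 E_t y_{t+1} + A_2 y_{t-1} + B x_t ,
RES equations: \quad y_t = P y_{t-1} + Q x_t ,
Exogenous VAR: \quad x_t = R x_{t-1} + \varepsilon_t ,
OVAR (reduced form): \quad \begin{pmatrix} y_t \\ x_t \end{pmatrix} = \begin{pmatrix} P & QR \\ 0 & R \end{pmatrix} \begin{pmatrix} y_{t-1} \\ x_{t-1} \end{pmatrix} + \begin{pmatrix} Q \\ I \end{pmatrix} \varepsilon_t .

Forward estimation maps deep parameters into (P, Q, R); the inverse identification discussed above recovers the RES coefficients (P, Q) from directly estimated OVAR coefficients.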
Over the past few decades, changes in market conditions such as globalisation and deregulation of financial markets as well as product innovation and technical advancements have induced financial institutions to expand their business activities beyond their traditional boundaries and to engage in cross-sectoral operations. As combining different sectoral businesses offers opportunities for operational synergies and diversification benefits, financial groups comprising banks, insurance undertakings and/or investment firms, usually referred to as financial conglomerates, have rapidly emerged, providing a wide range of services and products in distinct financial sectors and oftentimes in different geographic locations. In the European Union (EU), financial conglomerates have become some of the biggest and most active financial market participants in recent years.
Financial conglomerates generally pose new problems for financial authorities as they can raise new risks and exacerbate existing ones. In particular, their cross-sectoral business activities can involve prudentially substantial risks such as the risk of regulatory arbitrage and contagion risk arising from intra-group transactions. Moreover, the generally large size of financial conglomerates as well as the high complexity and interconnectedness of their corporate structures and risk exposures can entail substantial systemic risk and can therefore threaten the stability of the financial system as a whole.
Until a few years ago, there was no supervisory framework in place which addressed a financial conglomerate in its entirety as a group. Instead, each group entity within a financial conglomerate was subject to the supervisory rules of its pertinent sector only. Such a silo supervisory approach had the drawback of not taking account of risks which arise or are aggravated at the group level. It also failed to consider how the risks from different business lines within the group interrelate with each other and affect the group as a whole. In order to address this lack of group-wide prudential supervision of financial conglomerates, the European legislator adopted the Financial Conglomerates Directive 2002/87/EC (‘FCD’) on 16 December 2002. The FCD was transposed into national law in the member states of the EU (‘Member States’) by 11 August 2004 for application to financial years beginning on 1 January 2005 and after. The FCD primarily aims at supplementing the existing sectoral directives to address the additional risks of concentration, contagion and complexity presented by financial conglomerates. It therefore provides for a supervisory framework which is applicable in addition to the sectoral supervision. Most importantly, the FCD has introduced additional capital requirements at the conglomerate level so as to prevent the multiple use of the same capital by different group entities.
This paper seeks to examine to what extent the FCD provides for an adequate capital regulation of financial conglomerates in the EU while taking into account the underlying sectoral capital requirements and the inherent risks associated with financial conglomerates. In Part 1, the definition and the basic corporate models of financial conglomerates will be presented (I), followed by an illustration of the core motives behind the phenomenon of financial conglomeration (II) and an overview of the development of the supervision over financial conglomerates in the EU (III).
Part 2 begins with a brief elaboration on the role of regulatory capital (I) and gives a general overview of the EU capital requirements applicable to banks and insurance undertakings respectively. A delineation of the commonalities and differences of the banking and the insurance capital requirements will be provided (II). It continues to further examine the need for a group-wide capital regulation of financial conglomerates and analyses the adequacy of the FCD capital requirements. In this context, the technical advice rendered by the Joint Committee on Financial Conglomerates (JCFC) as well as the currently ongoing legislative reforms at the EU level will be discussed (III). The paper finally closes with a conclusion and an outlook on remaining open issues (IV).
The financial services industry worldwide has undergone major transformation since the late 1970s. Technological advancements in information processing and communication facilitated financial innovation and narrowed traditional distinctions in financial products and services, allowing them to become close substitutes for one another. The deregulation process in many major economies prior to the recent financial crisis blurred the traditional lines of demarcation between the distinct types of financial institutions, exposing those firms to new competitors in their traditional business areas, while the increasing globalization of financial markets fostered the provision of financial services across national borders. Against this backdrop, a trend toward consolidation across financial sectors as well as across national borders has increasingly manifested itself since the 1990s. These developments in the financial markets further intensified competition in the financial services industry and induced financial institutions to redefine their business strategies in search of higher profitability and growth opportunities. Consolidation across distinct financial sectors, i.e. financial conglomeration, in particular became a popular business strategy in light of the potential operational synergies and diversification benefits it can offer. This trend spurred the growth of diversified financial groups, the so-called financial conglomerates, which commingle banking, securities, and insurance activities under one corporate umbrella. Still today, large, complex financial conglomerates are represented among the major players in the financial markets worldwide, whose activities not only cut across the traditional boundaries of the banking, securities, and insurance sectors but also across national borders.
Notwithstanding the economic benefits that conglomeration may produce as a business strategy, the emergence of financial conglomerates also exacerbated existing and created new prudential risks in the financial system. The mixing of a variety of financial products and services under one corporate roof and the generally large and complex group structure of financial conglomerates expose such organizations to specific group risks such as contagion and arbitrage risk as well as systemic risk. When realized, these risks may not only cause the failure of an entire financial group but threaten the stability of the financial system as a whole, as evidenced by the events during the recent financial crisis of 2007-2009...
I propose a dynamic stochastic general equilibrium model in which the leverage of borrowers as well as banks and housing finance play a crucial role in the model dynamics. The model is used to evaluate the relative effectiveness of a policy to inject capital into banks versus a policy to relieve households of mortgage debt. In normal times, when the economy is near the steady state and policy rates are set according to a Taylor-type rule, capital injections to banks are more effective in stimulating the economy in the long run. However, in the middle of a housing debt crisis, when households are highly leveraged, the short-run output effects of the debt relief are more substantial. When the zero lower bound (ZLB) is additionally considered, the debt relief policy can be much more powerful in boosting the economy both in the short run and in the long run. Moreover, the output effects of the debt relief become increasingly larger the longer the ZLB is binding.
The first part of the following paper deals with various points of criticism levelled against Ordoliberalism. Here, the aim is not to directly falsify each argument on its own; rather, the author tries to give a precise overview of the spectrum of critique. The second section picks out one line of criticism – namely that the ordoliberal concept of the state is somewhat elitist and reliant on intellectual experts. Based on the previous sections, the final part differentiates two kinds of genesis of norms: an evolutionary and an elitist one – both (latently) present within Ordoliberalism. In combination with the two-level differentiation between individual and regulatory ethics, the essay allows for a distinction between individual-ethical norms based on an evolutionary genesis of norms and regulatory-ethical norms based on an elitist understanding of norms. A by-product of the author’s argument is a (further) demarcation within neoliberalism.
Based on Foucault’s analysis of German Neoliberalism and his thesis of ambiguity, the following paper draws a two-level distinction between individual and regulatory ethics. The individual ethics level – which has received surprisingly little attention – contains the Christian foundation of values and the liberal-Kantian heritage of so-called Ordoliberalism – as one variety of neoliberalism. The regulatory or formal-institutional ethics level, by contrast, refers to the ordoliberal framework of a socio-economic order. By differentiating these two levels of ethics incorporated in German Neoliberalism, it is feasible to distinguish different varieties of neoliberalism and to link Ordoliberalism to modern economic ethics. Furthermore, it allows a revision of the dominant reception of Ordoliberalism, which focuses solely on the formal-institutional level while largely neglecting the individual ethics level.
June 4th, 2013 marks the formal launch of the third generation of the Equator Principles (EP III) and the tenth anniversary of the EPs – reason enough to evaluate the EPs initiative from an economic ethics and business ethics perspective. In particular, this essay deals with the following questions: What are the EPs and where are they going? What has been achieved so far by the EPs? What are the strengths and weaknesses of the EPs? Which necessary reform steps need to be adopted in order to further strengthen the EPs framework? Can the EPs be regarded as a role model in the field of sustainable finance and CSR? The paper is structured as follows: The first chapter defines the term EPs and introduces the keywords related to the EPs framework. The second chapter gives a brief overview of the history of the EPs. The third chapter discusses the Equator Principles Association, the governing, administering, and managing institution behind the EPs. The fourth chapter summarizes the main features and characteristics of the newly released third generation of the EPs. The fifth chapter critically evaluates the EP III from an economic ethics and business ethics perspective. The paper concludes with a summary of the main findings.
This paper analyzes liquidity in an order-driven market. We not only investigate the best limits in the limit order book, but also take into account the book behind these inside prices. When subsequent prices are close to the best ones and depth at them is substantial, larger orders can be executed without an extensive price impact and without deterring liquidity. We develop and estimate several econometric models, based on depth and prices in the book, as well as on the slopes of the limit order book. The dynamics of different dimensions of liquidity are analyzed: prices, depth at and beyond the best prices, as well as resiliency, i.e. how fast the different liquidity measures recover after a liquidity shock. Our results show a somewhat less favorable image of liquidity than often found in the literature. After a liquidity shock (in the spread or depth or in the book beyond the best limits), several dimensions of liquidity deteriorate at the same time. Not only does the inside spread increase and depth at the best prices decrease; the difference between subsequent bid and ask prices may also become larger, and the depth provided at them decreases. The impacts are both econometrically and economically significant. Also, our findings point to an interaction between different measures of liquidity, between liquidity at the best prices and beyond in the book, and between the ask and bid sides of the market.
Venture capital (VC) investment has long been conceptualized as a local business, in which the VC’s ability to source, syndicate, fund, monitor, and add value to portfolio firms critically depends on its access to knowledge obtained through its ties to the local (i.e., geographically proximate) network. Consistent with the view that local networks matter, existing research confirms that local and geographically distant portfolio firms are sourced, syndicated, funded, and monitored differently. Curiously, emerging research on VC investment practice within the United States finds that distant investments, as measured by “exits” (either initial public offering or merger & acquisition), out-perform local investments. These findings raise important questions about the assumed benefits of local network membership and proximity. To probe these questions more deeply, we contrast the deal structure of cross-border VC investment with domestic VC investment, and contrast the deal structure of cross-border VC investments that include a local partner with those that do not. Evidence from 139,892 rounds of venture capital financing in the period 1980-2009 suggests that cross-border investment practice, in terms of deal sourcing, syndication, and performance, indeed changes with proximity, but that monitoring practices do not. Further, we find that the inclusion of a local partner in the investment syndicate yields surprisingly few benefits. This evidence, we argue, raises important questions about VC investment practice as well as the ability of firms to capture and lever the presumed benefits of network membership.
We examine the dynamics of assets under management (AUM) and management fees at the portfolio manager level in the closed-end fund industry. We find that managers capitalize on good past performance and favorable investor perception about future performance, as reflected in fund premiums, through AUM expansions and fee increases. However, the penalties for poor performance or unfavorable investor perception are either insignificant, or substantially mitigated by manager tenure. Long tenure is generally associated with poor performance and high discounts. Our findings suggest substantial managerial power in capturing CEF rents. We also document significant diseconomies of scale at the manager level.
This paper considers the desirability of the observed tendency of central banks to adjust interest rates only gradually in response to changes in economic conditions. It shows, in the context of a simple model of optimizing private-sector behavior, that such inertial behavior on the part of the central bank may indeed be optimal, in the sense of minimizing a loss function that penalizes inflation variations, deviations of output from potential, and interest-rate variability. Sluggish adjustment characterizes an optimal policy commitment, even though no such inertia would be present in the case of a reputationless (Markovian) equilibrium under discretion. Optimal interest-rate feedback rules are also characterized, and shown to involve substantial positive coefficients on lagged interest rates. This provides a theoretical explanation for the numerical results obtained by Rotemberg and Woodford (1998) in their quantitative model of the U.S. economy.
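As an illustration of such an inertial feedback rule, a generic specification can be written with symbolic coefficients (these are not the values estimated in the paper):

i_t = \rho\, i_{t-1} + (1-\rho)\,\bigl[ r^{*} + \pi_t + \phi_{\pi}(\pi_t - \pi^{*}) + \phi_{x}\, x_t \bigr] ,

where a substantial positive coefficient \rho on the lagged interest rate generates the gradual, inertial adjustment that the abstract describes as optimal.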
The paper considers optimal monetary stabilization policy in a forward-looking model, when the central bank recognizes that private-sector expectations need not be precisely model-consistent, and wishes to choose a policy that will be as good as possible in the case of any beliefs that are close enough to model-consistency. It is found that commitment continues to be important for optimal policy, that the optimal long-run inflation target is unaffected by the degree of potential distortion of beliefs, and that optimal policy is even more history-dependent than if rational expectations are assumed. JEL Classification: E52, E58, E42
This paper investigates the accuracy of point and density forecasts of four DSGE models for inflation, output growth and the federal funds rate. Model parameters are estimated and forecasts are derived successively from historical U.S. data vintages synchronized with the Fed’s Greenbook projections. Point forecasts of some models are of similar accuracy as the forecasts of nonstructural large dataset methods. Despite their common underlying New Keynesian modeling philosophy, forecasts of different DSGE models turn out to be quite distinct. Weighted forecasts are more precise than forecasts from individual models. The accuracy of a simple average of DSGE model forecasts is comparable to Greenbook projections for medium term horizons. Comparing density forecasts of DSGE models with the actual distribution of observations shows that the models overestimate uncertainty around point forecasts.
The paper uses an example to illustrate the importance of consistency between the empirical measurement and the concept of variables in estimated macroeconomic models. Since standard New Keynesian models do not account for demographic trends and sectoral shifts, the authors propose adjusting the hours worked per capita used to estimate such models accordingly, to enhance the consistency between the data and the model. Without this adjustment, low-frequency shifts in hours lead to unreasonable trends in the output gap, caused by the close link between hours and the output gap in such models.
The retirement wave of baby boomers, for example, lowers U.S. aggregate hours per capita, which leads to erroneous permanently negative output gap estimates following the Great Recession. After correcting hours for changes in the age composition, the estimated output gap instead closes gradually in the years after the Great Recession.
This paper investigates the accuracy of forecasts from four DSGE models for inflation, output growth and the federal funds rate using a real-time dataset synchronized with the Fed’s Greenbook projections. Conditioning the model forecasts on the Greenbook nowcasts leads to forecasts that are as accurate as the Greenbook projections for output growth and the federal funds rate. Only for inflation are the model forecasts dominated by the Greenbook projections. A comparison with forecasts from Bayesian VARs shows that the economic structure of the DSGE models, which is useful for the interpretation of forecasts, does not lower the accuracy of forecasts. Combining forecasts of several DSGE models increases precision in comparison to individual model forecasts. Comparing density forecasts with the actual distribution of observations shows that DSGE models overestimate uncertainty around point forecasts.
Large companies are increasingly on trial. Over the last decade, many of the world’s biggest firms have been embroiled in legal disputes over corruption charges, financial fraud, environmental damage, taxation issues or sanction violations, ending in convictions or settlements of record-breaking fines, well above the billion-dollar mark. For critics of globalization, this turn towards corporate accountability is a welcome sea-change showing that multinational companies are no longer above the law. For legal experts, the trend is noteworthy because of the extraterritorial dimensions of law enforcement, as companies are increasingly held accountable for activities independent of their nationality or the place of the activities. Indeed, understanding the global trend requires understanding the evolution of corporate criminal law enforcement in the United States in particular, where authorities have skillfully expanded their effective jurisdiction beyond U.S. territory. This paper traces the evolution of corporate prosecutions in the United States. Analyzing federal prosecution data, it then shows that foreign firms are more likely to pay a fine than domestic firms, and that the fine they pay is on average 6.6 times larger.
One of the motivations for establishing a European banking union was the desire to break the ties between national regulators and domestic financial institutions in order to prevent regulatory capture. However, supervisory authority over the financial sector at the national level can also have valuable public benefits. The aim of this policy letter is to detail these public benefits in order to counter discussions that focus only on conflicts of interest. It is informed by an analysis of how financial institutions interacted with policy-makers in the design of national bank rescue schemes in response to the banking crisis of 2008. Using this information, it discusses the possible benefits of close cooperation between financial institutions and regulators and analyzes these in the wake of a European banking union.
Over the last three decades, countries across the Andean region have moved toward legal recognition of indigenous justice systems. This turn toward legal pluralism, however, has been and continues to be heavily contested. The working paper explores a theoretical perspective that aims at analyzing and making sense of this contentious process by assessing the interplay between conflict and (mis)trust. Based on a review of the existing scholarship on legal pluralism and indigenous justice in the Andean region, with a particular focus on the cases of Bolivia and Ecuador, it is argued that manifest conflict over the contested recognition of indigenous justice can be considered as helpful and even necessary for the deconstruction of mistrust of indigenous justice. Still, such conflict can also help reproduce and even reinforce mistrust, depending on the ways in which conflict is dealt with politically and socially. The exploratory paper suggests four propositions that specify the complex and contingent relationship between conflict and (mis)trust in the contested negotiation of pluralist justice systems in the Andean region.
This paper studies the long-run effects of credit market disruptions on real firm outcomes and how these effects depend on nominal wage rigidities at the firm level. I trace out the long-run investment and growth trajectories of firms which are more adversely affected by a transitory shock to aggregate credit supply. Affected firms exhibit a temporary investment gap for two years following the shock, resulting in a persistent accumulated growth gap. I show that affected firms with a higher degree of wage rigidity exhibit a steeper drop in investment and grow more slowly than affected firms with more flexible wages.
During the last years the relationship between financial development and economic growth has received widespread attention in the literature on growth and development. This paper summarises in its first part the results of this research, stressing the growth-enhancing effects of an increased interpersonal re-allocation of resources promoted by financial development. The second part of the paper seeks to identify the determinants of financial development based on Diamond's theory of financial intermediation as delegated monitoring. The analysis shows that the quality of corporate governance of banks is the key factor in financial system development. Accordingly, financial sector reforms in developing countries will only succeed if they strengthen the corporate governance of financial institutions. In this area, financial institution building has an important contribution to make. Paper presented at the First Annual Seminar on New Development Finance held at the Goethe University of Frankfurt, September 22 - October 3, 1997
In this paper we test previous claims concerning the universality of patterns of polysemy and semantic change in perception verbs. Implicit in such claims are two elements: firstly, that the sharing of two related senses A and B by a given form is cross-linguistically widespread, and matched by a complementary lack of some rival polysemy, and secondly that the explanation for the ubiquity of a given pattern of polysemy is ultimately rooted in our shared human cognitive make-up. However, in comparison to the vigorous testing of claimed universals that has occurred in phonology, syntax and even basic lexical meaning, there has been little attempt to test proposed universals of semantic extension against a detailed areal study of non-European languages. To address this problem we examine a broad range of Australian languages to evaluate two hypothesized universals: one by Viberg (1984), concerning patterns of semantic extension across sensory modalities within the domain of perception verbs (i.e. intra-field extensions), and the other by Sweetser (1990), concerning the mapping of perception to cognition (i.e. trans-field extensions). Testing against the Australian data allows one claimed universal to survive, but demolishes the other, even though both assign primacy to vision among the senses.
The speakers of the Paraná dialect of Kaingáng, from whom the data of this study were gathered, have lived in close contact with the Brazilians since before the turn of the century. Although many members of this group are still monolingual and Kaingáng is spoken in all the homes, the influence of Portuguese is making an impact on the language. This can be seen not only in isolated loan words; the contact is also slowly changing the time dimension of the language and the thinking of the Indians. The change seems to have come about first through loan words, but it is now also affecting the semantic structure of the language and is beginning to affect the grammatical structure as well. The study presented here deals with this change as it can be seen in relation to time expressions such as yesterday – today – tomorrow; units of time such as day – month – year; kinship terms; and finally aspect particles. In considering the time expressions, the meaning of various paradigms will be discussed. The paradigms are related to the time when events took place, to the sequence of events, and to the point of the action. No Brazilian influence can be observed here. In the discussion of the units of time, the semantic area of these units before and after Brazilian influence will be explored. Through Brazilian influence, vocabulary has been developed with which it is possible to pinpoint events in time accurately, which was not possible before. The time distinctions within the kinship system will be discussed, and how they change under the influence of Brazilian terms. A whole new generation distinction is added in the modified kinship system. Similarly, several new aspect particles are being created through contractions, which now contain a time element. The whole development shows an emphasis on fine distinctions in time depth which came about through the contact with Portuguese and which can be observed at several points in the structure of Kaingáng.
This paper investigates the accuracy and heterogeneity of output growth and inflation forecasts during the current and the four preceding NBER-dated U.S. recessions. We generate forecasts from six different models of the U.S. economy and compare them to professional forecasts from the Federal Reserve’s Greenbook and the Survey of Professional Forecasters (SPF). The model parameters and model forecasts are derived from historical data vintages so as to ensure comparability to historical forecasts by professionals. The mean model forecast comes surprisingly close to the mean SPF and Greenbook forecasts in terms of accuracy even though the models only make use of a small number of data series. Model forecasts compare particularly well to professional forecasts at a horizon of three to four quarters and during recoveries. The extent of forecast heterogeneity is similar for model and professional forecasts but varies substantially over time. Thus, forecast heterogeneity constitutes a potentially important source of economic fluctuations. While the particular reasons for diversity in professional forecasts are not observable, the diversity in model forecasts can be traced to different modeling assumptions, information sets and parameter estimates. JEL Classification: C53, D84, E31, E32, E37 Keywords: Forecasting, Business Cycles, Heterogeneous Beliefs, Forecast Distribution, Model Uncertainty, Bayesian Estimation
The recent decline in euro area inflation has triggered new calls for additional monetary stimulus by the ECB in order to counter the threat of a self-reinforcing deflation and recession spiral. This note reviews the available evidence on inflation expectations, output gaps and other factors driving current inflation through the lens of the Phillips curve. It also draws a comparison to the Japanese experience with deflation in the late 1990s and the evidence from Japan concerning the output-inflation nexus at low trend inflation. The note concludes from this evidence that the risk of a self-reinforcing deflation remains very small. Thus, the ECB had best await the impact of the long-term refinancing operations decided in June, which have the potential to induce substantial monetary accommodation once implemented for the first time in September.
In the aftermath of the global financial crisis, the state of macroeconomic modeling and the use of macroeconomic models in policy analysis has come under heavy criticism. Macroeconomists in academia and policy institutions have been blamed for relying too much on a particular class of macroeconomic models. This paper proposes a comparative approach to macroeconomic policy analysis that is open to competing modeling paradigms. Macroeconomic model comparison projects have helped produce some very influential insights such as the Taylor rule. However, they have been infrequent and costly, because they require the input of many teams of researchers and multiple meetings to obtain a limited set of comparative findings. This paper provides a new approach that enables individual researchers to conduct model comparisons easily, frequently, at low cost and on a large scale. Using this approach a model archive is built that includes many well-known empirically estimated models that may be used for quantitative analysis of monetary and fiscal stabilization policies. A computational platform is created that allows straightforward comparisons of models’ implications. Its application is illustrated by comparing different monetary and fiscal policies across selected models. Researchers can easily include new models in the data base and compare the effects of novel extensions to established benchmarks thereby fostering a comparative instead of insular approach to model development.
The global financial crisis and the ensuing criticism of macroeconomics have inspired researchers to explore new modeling approaches. There are many new models that deliver improved estimates of the transmission of macroeconomic policies and aim to better integrate the financial sector in business cycle analysis. Policy making institutions need to compare available models of policy transmission and evaluate the impact and interaction of policy instruments in order to design effective policy strategies. This paper reviews the literature on model comparison and presents a new approach for comparative analysis. Its computational implementation enables individual researchers to conduct systematic model comparisons and policy evaluations easily and at low cost. This approach also contributes to improving reproducibility of computational research in macroeconomic modeling. Several applications serve to illustrate the usefulness of model comparison and the new tools in the area of monetary and fiscal policy. They include an analysis of the impact of parameter shifts on the effects of fiscal policy, a comparison of monetary policy transmission across model generations and a cross-country comparison of the impact of changes in central bank rates in the United States and the euro area. Furthermore, the paper includes a large-scale comparison of the dynamics and policy implications of different macro-financial models. The models considered account for financial accelerator effects in investment financing, credit and house price booms and a role for bank capital. A final exercise illustrates how these models can be used to assess the benefits of leaning against credit growth in monetary policy.
This paper reviews the rationale for quantitative easing when central bank policy rates reach near zero levels in light of recent announcements regarding direct asset purchases by the Bank of England, the Bank of Japan, the U.S. Federal Reserve and the European Central Bank. Empirical evidence from the previous period of quantitative easing in Japan between 2001 and 2006 is presented. During this earlier period the Bank of Japan was able to expand the monetary base very quickly and significantly. Quantitative easing translated into a greater and more lasting expansion of M1 relative to nominal GDP. Deflation subsided by 2005. As soon as inflation appeared to stabilize near a rate of zero, the Bank of Japan rapidly reduced the monetary base as a share of nominal income as it had announced in 2001. The Bank was able to exit from extensive quantitative easing within less than a year. Some implications for the current situation in Europe and the United States are discussed.
Recent evaluations of the fiscal stimulus packages enacted in the United States and Europe, such as Cogan, Cwik, Taylor and Wieland (2009) and Cwik and Wieland (2009), suggest that the GDP effects will be modest due to crowding-out of private consumption and investment. Corsetti, Meier and Mueller (2009a,b) argue that spending shocks are typically followed by consolidations with substantive spending cuts, which enhance the short-run stimulus effect. This note investigates the implications of this argument for the estimated impact of recent stimulus packages and the case for discretionary fiscal policy.
This paper introduces adaptive learning and endogenous indexation in the New-Keynesian Phillips curve and studies disinflation under inflation targeting policies. The analysis is motivated by the disinflation performance of many inflation-targeting countries, in particular the gradual Chilean disinflation with temporary annual targets. At the start of the disinflation episode, price-setting firms expect inflation to be highly persistent and opt for backward-looking indexation. As the central bank acts to bring inflation under control, price-setting firms revise their estimates of the degree of persistence. Such adaptive learning lowers the cost of disinflation. This reduction can be exploited by a gradual approach to disinflation. Firms that choose the rate for indexation also re-assess the likelihood that announced inflation targets determine steady-state inflation and adjust indexation of contracts accordingly. A strategy of announcing and pursuing short-term targets for inflation is found to influence the likelihood that firms switch from backward-looking indexation to the central bank’s targets. As firms abandon backward-looking indexation, the costs of disinflation decline further. We show that an inflation targeting strategy that employs temporary targets can benefit from lower disinflation costs due to the reduction in backward-looking indexation.
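The mechanism can be illustrated with a hybrid New-Keynesian Phillips curve in which the weight on lagged inflation reflects backward-looking indexation (a generic textbook form with assumed notation, not the paper's exact specification):

\pi_t = \gamma_b\, \pi_{t-1} + \gamma_f\, E_t \pi_{t+1} + \kappa\, x_t ,

where \gamma_b increases with the share of contracts indexed to past inflation; as learning and credible short-term targets shift firms away from backward-looking indexation, \gamma_b falls and the output cost of disinflation declines.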
Inflation-targeting central banks have only imperfect knowledge about the effect of policy decisions on inflation. An important source of uncertainty is the relationship between inflation and unemployment. This paper studies optimal monetary policy in the presence of uncertainty about the natural unemployment rate, the short-run inflation-unemployment tradeoff and the degree of inflation persistence in a simple macroeconomic model, which incorporates rational learning by the central bank as well as private sector agents. Two conflicting motives drive the optimal policy. In the static version of the model, uncertainty provides a motive for the policymaker to move more cautiously than she would if she knew the true parameters. In the dynamic version, uncertainty also motivates an element of experimentation in policy. I find that the optimal policy that balances the cautionary and activist motives typically exhibits gradualism, that is, it still remains less aggressive than a policy that disregards parameter uncertainty. Exceptions occur when uncertainty is very high and inflation is close to target.
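The cautionary motive can be illustrated with a one-period Brainard-style example (a deliberately simplified static illustration with assumed notation, not the paper's dynamic learning model). Suppose inflation responds to the policy instrument i according to \pi = b\, i + \varepsilon, with uncertain slope b (mean \bar{b}, variance \sigma_b^2, independent of the mean-zero shock \varepsilon), and the policymaker minimizes E[(\pi - \pi^{*})^2]. Then

i^{*} = \frac{\bar{b}\, \pi^{*}}{\bar{b}^{2} + \sigma_b^{2}} \;<\; \frac{\pi^{*}}{\bar{b}} \quad (\text{for } \pi^{*}, \bar{b} > 0),

so parameter uncertainty attenuates the response relative to the certainty-equivalent policy \pi^{*}/\bar{b}; the dynamic experimentation motive discussed in the abstract works against this attenuation.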
This note argues that the European Central Bank should adjust its strategy in order to consider broader measures of inflation in its policy deliberations and communications. In particular, it points out that a broad measure of domestic goods and services price inflation such as the GDP deflator has increased along with the euro area recovery and the expansion of monetary policy since 2013, while HICP inflation has become more variable and, on average, has declined. Similarly, the cost of owner-occupied housing, which is excluded from the HICP, has risen during this period. Furthermore, it shows that optimal monetary policy at the effective lower bound on nominal interest rates aims to return inflation more slowly to the inflation target from below than in normal times because of uncertainty about the effects and potential side effects of quantitative easing.
While record-making prices at art auctions receive headline news coverage, artists typically do not receive any direct proceeds from those sales. Early-stage creative work in any field is perennially difficult to value, but the valuation, reward, and incentivization of artistic labor are particularly fraught. A core challenge in studying the real return on artists’ work is the extreme difficulty of accessing data from when an artwork was first sold. Galleries keep private records that are difficult to access and to match to public auction results. This paper, for the first time, uses archivally sourced primary market records for the artists Jasper Johns and Robert Rauschenberg. Although this approach restricts the size of the data set, this innovative method yields much more accurate returns on art than typical regression and hedonic models. We find that if Johns and Rauschenberg had retained 10% equity in their work when it was first sold, the returns to them when the work was resold at auction would have outperformed the US S&P 500 by between 2 and 986 times. The implications of this work open up vast policy recommendations with regard to secondary art market sales, entrepreneurial strategies using blockchain technology, and implications about how we compensate creative work.
Employing the art-collection records of Burton and Emily Hall Tremaine, we consider whether early-stage art investors can be understood as venture capitalists. Because the Tremaines bought artists’ work very close to an artwork’s creation, with 69% of works in our study purchased within one year of the year in which they were made, their collecting practice can best be framed as venture-capital investment in art. The Tremaines also illustrate art collecting as social-impact investment, owing to their combined strategy of art sales and museum donations for which the collectors received a tax credit under US rules. Because the Tremaines’ museum donations took place at a time when U.S. marginal tax rates ranged from 70% to 91%, donations achieved near “donation parity” with market sales, creating a parallel to ESG investment in the management of multiple forms of value.
With Council Regulation (EC) No. 1346/2000 of 29 May 2000 on insolvency proceedings, which came into effect on 31 May 2002, the European Union has introduced a legal framework for dealing with cross-border insolvency proceedings. In order to achieve the aim of improving the efficiency and effectiveness of insolvency proceedings having cross-border effects within the European Community, the provisions on jurisdiction, recognition and applicable law in this area are contained in a Regulation, a Community law measure which is binding and directly applicable in Member States. The goals of the Regulation, with 47 articles, are to enable cross-border insolvency proceedings to operate efficiently and effectively, to provide for co-ordination of the measures to be taken with regard to the debtor’s assets and to avoid forum shopping. The Insolvency Regulation, therefore, provides rules for the international jurisdiction of a court in a Member State for the opening of insolvency proceedings, the (automatic) recognition of these proceedings in other Member States and the powers of the ‘liquidator’ in the other Member States. The Regulation also deals with important choice-of-law (or: private international law) provisions. The Regulation is directly applicable in the Member States for all insolvency proceedings opened after 31 May 2002.
A version of this paper was originally written for a plenary session about "The Futures of Ethnography" at the 1998 EASA conference in Frankfurt/Main. In the preparation of the paper, I sent out some questions to my former fellow researchers by e-mail. I thank Douglas Anthony, Jan-Patrick Heiß, Alaine Hutson, Matthias Krings, and Brian Larkin for their answers.
Namibia is known to be the most arid country south of the Sahara. Average annual rainfall is not only relatively low in most parts of the country, it is also highly variable. Only 8 per cent of the country receives enough rain during a normal rainy season to practice rainfed cultivation. At the same time between 60 per cent and 70 per cent of the population depend on subsistence agro-pastoralism in non-freehold or communal areas. Against the background of rising unemployment, the livelihoods of the majority of these people are likely to depend on natural resources in the foreseeable future.
Natural resources generally are under considerable strain. As the rural population increases, so does the demand for natural resources, land and water specifically. Dependency on subsistence farming, which is the result of large-scale rural poverty, exacerbates the problem. Large parts of the country are stocked injudiciously, resulting in overgrazing, and water is frequently over-abstracted, leading to declining water tables (MET 2005: 2).
Unequal access to both land and water has prompted government to introduce reforms in these sectors. These reforms were guided by the desire to manage resources more sustainably while providing more equal access to them. In terms of NDP 2, sustainability means using natural resources in such a way as not to ‘compromise the ability of future generations to make use of these resources’ (NDP 2: 595).
Immediately after Independence, government started reform processes in the land and water sectors. However, these reforms have proceeded at different paces and largely independently of each other. Increasingly, policy makers and development practitioners have realised that land and water management needs to be integrated, as decisions about land management and land-use options have a direct impact on water resources. Conversely, the availability of water sets the parameters for what is possible in terms of agricultural production and other land uses. The north-central regions face a particular challenge in this regard, as they carry more livestock than they can sustain in the long run. At the same time, close to half the households do not own any livestock. Access to livestock would improve these households’ ability to cultivate their land more efficiently in order to feed themselves and thus reduce poverty levels.
But livestock are a major consumer of water. In 2000, livestock consumed more water than the domestic sector: 77 Mm³/a as against 67 Mm³/a (Urban et al. 2003, Annex 7: 2). This situation prompted a 2003 Project Progress Report on the Namibia Water Resources Management Review to conclude that, ‘Given the extreme water scarcity in most parts of the country, land and water issues are closely linked. It therefore seems indispensable to mutually adjust land- and water-sector reform processes’ (ibid.: 20).
This paper will briefly look at four institutions that are central to land and water management with a view to assessing the extent to which they interact. These are Communal Land Boards, Water Point Committees, Traditional Authorities and Regional Councils. A discussion of relevant policy documents and legislative instruments will investigate whether the existing policy framework provides for an integrated approach or not. Before doing this, it appears sensible to briefly situate these four institutions in the wider maze of institutions operating at regional and sub-regional level. All these institutions, important as they are in the quest to improve participation at the regional and sub-regional level, are competing for the time and input of small-scale farmers.
The unintended consequences of the debt ... will increased government expenditure hurt the economy?
(2011)
In 2008, governments in many countries embarked on large fiscal expenditure programmes with the intention of supporting the economy and preventing a more serious recession. In this study, the overall impact of a substantial increase in fiscal expenditure is considered by providing a novel analysis of the most relevant recent experience in similar circumstances, namely that of Japan in the 1990s. At the time, a weak economy with risk-averse banks seemed to require some of the largest peacetime fiscal stimulation programmes on record, albeit with disappointing results. The explanations provided by the literature and their unsatisfactory empirical record are reviewed. An alternative explanation, derived from early Keynesian models of the ineffectiveness of fiscal policy, is presented in the form of a modified Fisher equation, which incorporates recent findings from the credit view literature. The model postulates complete quantity crowding out. It is subjected to empirical tests, which prove supportive. Thus, evidence is found that fiscal policy, if not supported by suitable monetary policy, is likely to crowd out private sector demand, even in an environment of falling or near-zero interest rates. As a policy conclusion, it is pointed out that by changing the funding strategy, complete crowding out can be avoided and a positive net effect produced. The proposed framework creates common ground between proponents of Keynesian views (as held, among others, by Blinder and Solow), monetarist views (as held in particular by Milton Friedman) and those of leading contemporary macroeconomists (such as Mankiw).
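The abstract does not spell the equation out; as a rough, hedged sketch of the credit-view logic it invokes, one can write a disaggregated quantity equation in which only credit created for GDP transactions drives nominal GDP,

$\Delta(P_R Y) \approx V_R \,\Delta C_R, \qquad C = C_R + C_F,$

where $C_R$ is credit used for real-economy (GDP) transactions, $C_F$ credit used for financial transactions, and $V_R$ the velocity of real-economy credit. If bond-financed government spending leaves $\Delta C_R$ unchanged, aggregate nominal demand cannot rise, so additional public spending must be offset one-for-one by lower private spending (complete quantity crowding out). This is a paraphrase of the general idea, not necessarily the author's exact formulation.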
During the past decade, processes associated with what is popularly, though perhaps misleadingly, known as globalization have come within the purview of anthropology. Migration and mobility, and the footloose or even rootless social groups that they produce, as well as the worldwide diffusion of commodities, media images, political ideas and practices, technologies and scientific knowledge are today on anthropology's research agenda. As a consequence, received notions about the ways in which culture relates to territory have been abandoned. The term transnationalisation captures cultural processes that stream across the borders of nation states. Anthropologists have been forced to revise the notion that transnationalisation would inevitably bring about a culturally homogenized world. Instead, we are witnessing greatly increasing cultural diversity. New cultural forms grow out of historically situated articulations of the local and the global. Rather than left-over relics from traditional orders, these are decidedly modern, yet far from uniform. The essay engages the idea of the pluralization of modernities, explores its potential for interdisciplinary research agendas, and also inquires into problematic assumptions underlying this new theoretical concept.
The modern tontine: an innovative instrument for longevity risk management in an aging society
(2016)
Changing social, financial and regulatory conditions, such as an increasingly aging society, the current low interest rate environment and the implementation of Solvency II, have led to a search for new product forms for private pension provision. In order to address the various issues, these product forms should reduce or avoid investment guarantees and risks stemming from longevity, still provide reliable insurance benefits and simultaneously take account of the increasing financial resources required at very high ages. In this context, we examine whether a historical concept of insurance, the tontine, entails enough innovative potential to extend and improve the prevailing privately funded pension solutions in a modern way. The tontine basically generates an age-increasing cash flow, which can help to match the increasing financing needs at old ages. However, the tontine generates volatile cash flows, so that, especially in the context of an aging society, the insurance character of the tontine cannot be guaranteed in every situation. We show that a partial tontinization of retirement wealth can serve as a reliable supplement to existing pension products.
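As a minimal sketch of the age-increasing payout logic (an illustration under simplifying assumptions, not the paper's actuarial model), assume a closed pool whose invested capital earns a fixed rate and whose periodic proceeds are split equally among the expected survivors; as the pool thins out, the per-survivor payout rises.

    # Minimal tontine payout sketch: a fixed investment return split among survivors.
    # All parameters are invented for illustration.
    def tontine_payouts(n_members, contribution, rate, survival_probs):
        """Expected per-survivor payout in each period of a simple closed tontine."""
        pool = n_members * contribution          # pooled capital stays invested
        survivors = float(n_members)
        payouts = []
        for p in survival_probs:                 # one-period survival probabilities
            survivors *= p                       # expected number of remaining members
            payouts.append(pool * rate / survivors)
        return payouts

    # 1,000 members paying 10,000 each, a 3% return, survival falling over time
    probs = [0.98, 0.96, 0.94, 0.90, 0.85, 0.80]
    for t, x in enumerate(tontine_payouts(1000, 10_000, 0.03, probs), start=1):
        print(f"period {t}: expected payout per survivor = {x:,.0f}")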
A tontine provides a mortality-driven, age-increasing payout structure through the pooling of mortality. Because a tontine does not entail any guarantees, the payout structure of a tontine is determined by the pooling of the individual characteristics of tontinists. Therefore, the surrender decision of a single tontinist directly affects the remaining members' payouts. Nevertheless, the opportunity to surrender is crucial to the success of a tontine from a regulatory as well as a policyholder perspective. Therefore, this paper derives the fair surrender value of a tontine, first on the basis of expected values, and then incorporates the increasing payout volatility to determine an equitable surrender value. Results show that the surrender decision requires a discount on the fair surrender value as security for the remaining members. The discount intensifies with decreasing tontine size and increasing risk aversion. However, tontinists are less willing to surrender the smaller the tontine and the higher their risk aversion, creating a natural protection against tontine runs stemming from short-term liquidity shocks. Furthermore, we argue that a surrender decision based on private information requires a discount on the fair surrender value as well.
FIFO is the most prominent queueing strategy due to its simplicity and the fact that it works with local information only. Its analysis within adversarial queueing theory, however, has shown that there are networks that are not stable under the FIFO protocol, even at arbitrarily low injection rates. On the other hand, there are networks that are universally stable, i.e., they are stable under every greedy protocol at any rate r < 1. The question as to which networks are stable under the FIFO protocol arises naturally. We offer the first polynomial time algorithm for deciding FIFO stability and simple-path FIFO stability of a directed network, answering an open question posed in [1, 4]. It turns out that there are networks that are FIFO stable but not universally stable; hence FIFO is not a worst-case protocol in this sense. Our characterization of FIFO stability is constructive and disproves an open characterization in [4].
Central banks have faced a succession of crises over the past years, as well as a number of structural factors such as the transition to a greener economy, demographic developments, digitalisation and possibly increased onshoring. These suggest that the future inflation environment will be different from the one we know. Thus, uncertainty about important macroeconomic variables and, in particular, inflation dynamics will likely remain high.
The paper uses fiscal reaction functions for a panel of euro-area countries to investigate whether euro membership has reduced the responsiveness of countries to shocks in the level of inherited debt compared to the period prior to accession to the euro. While we find some evidence for such a loss in prudence, the results are not robust to changes in the specification, such as the exclusion of Greece from the panel. This suggests that the current debt problems may result to a large extent from pre-existing debt levels prior to entry or from a larger need for fiscal prudence in a common currency, while an adverse change in the fiscal reaction functions does not appear to apply for most countries.
The pressure on tax haven countries to engage in tax information exchange shows first effects on capital markets. Empirical research suggests that investors do react to information exchange and partially withdraw from previous secrecy jurisdictions that open up to information exchange. While some of the economic literature emphasizes possible positive effects of tax havens, the present paper argues that proponents of positive effects may have started from questionable premises, in particular when it comes to the effects that tax havens have for emerging markets like China and India.
This paper studies the distributional consequences of a systematic variation in expenditure shares and prices. Using European Union Household Budget Surveys and Harmonized Index of Consumer Prices data, we construct household-specific price indices and reveal the existence of pro-rich inflation in Europe. In particular, over the period 2001-15, the consumption bundles of the poorest deciles in 25 European countries have, on average, become 10.5 percentage points more expensive than those of the richest decile. We find that ignoring differential inflation across the distribution understates the change in the Gini coefficient (based on consumption expenditure) by up to 0.03 points. Cross-country heterogeneity in this change is large enough to alter the inequality ranking of numerous countries. The average inflation effect we detect is almost as large as the change in the standard Gini measure over the period of interest.
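The household-specific index described here can be thought of as an expenditure-share-weighted average of category-level price changes. The snippet below illustrates the idea with made-up budget shares and category inflation rates (not the paper's data): a household that spends more on fast-inflating categories faces higher personal inflation.

    # Sketch of a household-specific inflation rate: weight category inflation
    # by each household's own budget shares. All numbers are invented.
    def household_inflation(budget_shares, category_inflation):
        """Expenditure-share-weighted inflation for one household."""
        assert abs(sum(budget_shares.values()) - 1.0) < 1e-9
        return sum(w * category_inflation[c] for c, w in budget_shares.items())

    category_inflation = {"food": 0.030, "energy": 0.045, "rent": 0.020, "recreation": 0.010}

    poor_household = {"food": 0.35, "energy": 0.20, "rent": 0.35, "recreation": 0.10}
    rich_household = {"food": 0.15, "energy": 0.10, "rent": 0.30, "recreation": 0.45}

    print(household_inflation(poor_household, category_inflation))  # higher, about 0.028
    print(household_inflation(rich_household, category_inflation))  # lower, about 0.020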
Digitalization expands the scope for corporations to reduce taxes, mainly, but not exclusively, by facilitating tax planning that shifts profits. Against this background, the European Commission and several countries emphatically demand and design new tax instruments. However, a selective turning away from internationally accepted principles of international taxation will raise more questions than it solves. While there are good reasons to think about a fundamental regime switch in international corporate taxation, there are also good arguments against turning to ad hoc measures that selectively target the relatively small market of Google and Facebook and raise only negligible tax revenues.
Greece: threatening recovery
(2015)
Despite the catastrophic phase between 2008 and the end of 2014, much of a previously unsustainable development has been corrected in Greece, and there are clear signs that the deterioration came to a halt in 2014. But what is publicly known about the priorities of the newly elected Syriza government suggests that it may be heading largely in the wrong direction.
This policy letter collects elementary economic statistics and provides a very basic look at Russian public finances (i) to inform the reader’s opinion on a possible planning process behind the war against Ukraine and (ii) to discuss the prospects of an energy embargo and its capability to affect the stability of the Russian economy.
This note argues that in a situation of an inelastic natural gas supply a restrictive monetary policy in the euro zone could reduce the energy bill and therefore has additional merits. A more hawkish monetary policy may be able to indirectly use monopsony power on the gas market. The welfare benefits of such a policy are diluted to the extent that some of the supply (approximately 10 percent) comes from within the euro zone, which may give rise to distributional concerns.
Using a unique data set of regional inflation rates, we examine the extent and dynamics of inflation dispersion in major EMU countries before and after the introduction of the euro. For both periods, we find strong evidence in favor of mean reversion (β-convergence) in inflation rates. However, half-lives to convergence are considerable and seem to have increased after 1999. The results indicate that the convergence process is nonlinear in the sense that its speed decreases the further convergence has proceeded. An examination of the dynamics of overall inflation dispersion (σ-convergence) shows that there was a decline in dispersion in the first half of the 1990s. For the second half of the 1990s, no further decline can be observed. At the end of the sample period, dispersion has even increased. The existence of large persistence in European inflation rates is confirmed when distribution dynamics methodology is applied. At the end of the paper, we present evidence on the sustainability of the ECB's inflation target of an EMU-wide average inflation rate of less than but close to 2%. Klassifikation: E31, E52, E58
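For readers unfamiliar with the half-life metric: in a schematic β-convergence regression of the change in a region's inflation differential on its lagged level (stylized notation, not necessarily the paper's exact specification),

$\Delta\pi_{i,t} = \alpha + \beta\,\pi_{i,t-1} + \varepsilon_{i,t}, \qquad -1 < \beta < 0,$

the differential shrinks by the factor $(1+\beta)$ each period, so the implied half-life is $h = \ln(0.5)/\ln(1+\beta)$; estimates of $\beta$ close to zero therefore translate into long half-lives.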
We use consumer price data for 205 cities/regions in 21 countries to study PPP deviations before, during and after the major currency crises of the 1990s. We combine data from industrialized nations in North America (United States, Canada and Mexico), Europe (Germany, Italy, Spain and Portugal), Asia (Japan and South Korea), and Oceania (Australia and New Zealand) with corresponding data from emerging market economies in South America (Argentina, Bolivia, Brazil, Colombia) and Asia (India, Indonesia, Malaysia, Philippines, Taiwan, Thailand). By doing so, we confirm previous results that both distance and borders explain a significant amount of relative price variation across different locations. We also find that currency attacks had major disintegration effects by considerably increasing these border effects and by raising within-country relative price dispersion in emerging market economies. These effects are found to be quite persistent, since relative price volatility across emerging markets today is still significantly larger than a decade ago.
We use consumer price data for 81 European cities (in Germany, Austria, Finland, Italy, Spain, Portugal and Switzerland) to study the impact of the introduction of the euro on goods market integration. Employing both aggregated and disaggregated consumer price index (CPI) data we confirm previous results which showed that the distance between European cities explains a significant amount of the variation in the prices of similar goods in different locations. We also find that the variation of relative prices is much higher for two cities located in different countries than for two equidistant cities in the same country. Under the EMU, the elimination of nominal exchange rate volatility has largely reduced these border effects, but distance and border still matter for intra-European relative price volatility.
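Both of these studies rest on the familiar approach of regressing a volatility measure of city-pair relative prices on distance and a border indicator; a schematic version of such a specification (a paraphrase, not the exact equation used) is

$V(q_{ij}) = \alpha + \beta_1 \ln d_{ij} + \beta_2\,\mathrm{Border}_{ij} + \varepsilon_{ij},$

where $q_{ij}$ is the log relative price of a good between locations $i$ and $j$, $V(\cdot)$ a volatility measure such as the standard deviation of its changes, $d_{ij}$ the distance between the locations, and $\mathrm{Border}_{ij}$ an indicator equal to one if they lie in different countries; a positive estimate of $\beta_2$ is the "border effect".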
European scholars, colonial administrators, missionaries, bibliophiles and others were the main collectors of Malay books in the nineteenth century, in both manuscript and printed form. Among these persons were many well-known names in the field of Malay literature and culture, such as Raffles, Marsden, Crawfurd, Klinkert, van der Tuuk, von Dewall, Roorda, Favre, Maxwell, Overbeck, Wilkinson and Skeat, to name only a few. Their collections were often handed over to public libraries, where they form an important part of the relevant Oriental or Southeast Asian manuscript collections.
Knowledge of the intellectual culture of the Malay Peninsula and the Malay world in general has therefore depended very much on these manuscripts and printed books, often collected by chance or in a rather unsystematic way. The collections strongly reflect the interests of their administrative or philologist collectors: court histories, genealogies of aristocratic lineages, law collections (adat-istiadat as well as undang-undang) and prose belles-lettres make up the vast bulk of these collections, while Islamic religious texts and poetic forms popular in the nineteenth century (especially syair) are fairly underrepresented. Malay manuscripts and books located in religious institutions such as mosques or pondok/pesantren schools have not been searched for; to this day there are more or less no systematic studies of these collections. Since in some statistics religious texts account for about 20% of all existing Malay manuscripts, their neglect by European scholars leads to a distorted view of the literary culture in the Malay language.
The predictive likelihood is of particular relevance in a Bayesian setting when the purpose is to rank models in a forecast comparison exercise. This paper discusses how the predictive likelihood can be estimated for any subset of the observable variables in linear Gaussian state-space models with Bayesian methods, and proposes to utilize a missing-observations-consistent Kalman filter in the process of achieving this objective. As an empirical application, we analyze euro area data and compare the density forecast performance of a DSGE model to DSGE-VARs and reduced-form linear Gaussian models.
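The computational device mentioned, a Kalman filter that remains consistent when observations are missing, can be sketched by retaining at each date only the rows of the measurement equation that are actually observed. The stylized code below shows this selection step for a generic linear Gaussian state-space model (a textbook-style illustration, not the paper's implementation).

    import numpy as np

    def kalman_loglike_missing(y, Z, H, T, Q, a0, P0):
        """Gaussian log-likelihood for a linear state-space model
            y_t = Z a_t + e_t,  e_t ~ N(0, H)
            a_t = T a_{t-1} + u_t,  u_t ~ N(0, Q)
        where y may contain NaNs; missing rows are dropped from the update
        at that date (the missing-observations-consistent step)."""
        a, P, loglike = a0, P0, 0.0
        for y_t in y:
            # prediction step
            a, P = T @ a, T @ P @ T.T + Q
            obs = ~np.isnan(y_t)
            if obs.any():
                # keep only the observed rows of the measurement equation
                Z_t, H_t = Z[obs], H[np.ix_(obs, obs)]
                v = y_t[obs] - Z_t @ a                   # prediction error
                F = Z_t @ P @ Z_t.T + H_t                # its covariance
                K = P @ Z_t.T @ np.linalg.inv(F)         # Kalman gain
                a, P = a + K @ v, P - K @ Z_t @ P
                loglike += -0.5 * (len(v) * np.log(2 * np.pi)
                                   + np.linalg.slogdet(F)[1]
                                   + v @ np.linalg.solve(F, v))
        return loglike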
We study the returns to venture capital and private equity investment using data from 221 venture capital and private equity funds that are part of 72 venture capital and private equity firms, covering 5,040 entrepreneurial firms (3,826 venture capital and 1,214 private equity) and spanning 32 years (1971-2003) and 39 countries in North and South America, Europe and Asia. We make use of four main categories of variables to proxy for the value-added activities and risks that explain venture capital and private equity returns: market and legal environment, VC characteristics, entrepreneurial firm characteristics, and the characteristics and structure of the investment. We show that Heckman sample selection issues with regard to both unrealized and partially realized investments are important to consider when analysing the determinants of realized returns. We further compare the actual unrealized returns, as reported to investment managers, to the predicted unrealized returns based on the estimates of realized returns from the sample selection models. We show that there exist significant systematic biases in the reporting of unrealized investments to institutional investors, depending on the level of the earnings aggressiveness and disclosure indices in a country, as well as on proxies for the degree of information asymmetry between investment managers and venture capital and private equity fund managers. Klassifikation: G24, G28, G31, G32, G35
European households face tremendous obstacles when intending to open a savings account outside their home country. The shortage of deposits has become a major reason for banks’ declining loan supply and ultimately is responsible for a substantial part of the investment weakness and GDP decline in affected European countries.
Policy makers have made important efforts to promote European deposit market integration and to stimulate cross-border flows of savings within the European Union. But these efforts will only yield the intended benefits if a number of additional non-tariff trade barriers are removed. Currently, these barriers prevent households in surplus countries from transferring their savings to banks in deficit countries, where their deposits are most urgently needed.
New provisioning rules introduced by IFRS 9 are expected to reduce the procyclicality of provisioning. Heterogeneity among banks in the procyclicality of provisioning may not only reflect the formal accounting rules, but also variation in discretionary provisioning policies. This paper presents empirical evidence on the heterogeneity of provisioning procyclicality among significant banks that are directly supervised by the ECB. In particular, this paper finds that provisioning is relatively procyclical at banks that have i) high loans-to-assets ratios, ii) high shares of non-interest income in total operating income, iii) low capitalization rates, and iv) low total assets. Supervisory guidance provided to banks on how to implement IFRS 9 has mostly been of a qualitative nature, and may prove inadequate to prevent an undesirably wide future variation in provisioning among EU banks.
This paper was provided at the request of the Committee on Economic and Monetary Affairs of the European Parliament and commissioned and drafted under the responsibility of the Economic Governance Support Unit (EGOV) of the European Parliament. It was originally published on the European Parliament’s webpage.
The paper examines the importance of international labour standards for ESG reporting. International labour standards exist today for almost all working conditions. There are many reasons why ESG criteria should be based on these standards. This is already happening to some extent. However, the references to international labour standards should be expanded and the existing references deepened.
The European low-carbon transition began in the last few decades and is accelerating in order to achieve net-zero emissions by 2050. This paper examines how the climate-related transition indicators of large European corporate firms relate to their CDS-implied credit risk across various time horizons. Findings show that firms with higher GHG emissions have higher CDS spreads at all tenors, including the 30-year horizon, particularly after the 2015 Paris Agreement, and in prominent industries such as Electricity, Gas, and Mining. The results suggest that the European CDS market currently prices, to some extent, albeit small, a firm's exposure to transition risk across different time horizons. However, it fails to account for a company’s efforts to manage transition risks and for its exposure to the EU Emissions Trading Scheme. CDS market participants seem to find it challenging to risk-differentiate ETS-participating firms from other firms.
Central banks have recently introduced new policy initiatives, including a policy called ‘Quantitative Easing’ (QE). Since it has been argued by the Bank of England that “Standard economic models are of limited use in these unusual circumstances, and the empirical evidence is extremely limited” (Bank of England, 2009b), we take an entirely empirical approach and focus on the QE experience for which substantial data is available, namely that of Japan (2001-2006). Recent literature on the effectiveness of QE has neglected any reference to final policy goals. In this paper, we adopt the view that effectiveness will ultimately be measured by whether QE is able to “boost spending” (Bank of England, 2009b) and “will ultimately be judged by their impact on the wider macroeconomy” (Bank of England, 2010). In line with a widely held view among leading macroeconomists of various persuasions, while attempting to stay agnostic and open-minded on the distribution of demand changes between real output and inflation, we identify nominal GDP growth as the key final policy goal of monetary policy. The empirical research finds that the policy conducted by the Bank of Japan between 2001 and 2006 made little empirical difference, while an alternative policy targeting credit creation (the original definition of QE) would likely have been more successful.
Projected demographic changes in industrialized and developing countries vary in extent and timing, but will reduce the share of the population of working age everywhere. Conventional wisdom suggests that this will increase capital intensity, with falling rates of return to capital and increasing wages, which decreases welfare for middle-aged, asset-rich households. This paper takes the perspective of the three demographically oldest European nations, France, Germany and Italy, to address three important adjustment channels that may dampen these detrimental effects of aging in these countries: investing abroad, endogenous human capital formation and increasing the retirement age. Our quantitative finding is that endogenous human capital formation in combination with an increase in the retirement age has strong implications for economic aggregates and welfare, in particular in the open economy. These adjustments reduce the maximum welfare losses of demographic change for households alive in 2010 by about 2.2 percentage points in terms of a consumption equivalent variation.
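For reference, a consumption equivalent variation (CEV) of the kind used as the welfare metric here is commonly defined as the uniform percentage change $\lambda$ in baseline consumption that makes a household indifferent between the baseline and the alternative scenario (generic notation, not necessarily the paper's):

$\mathbb{E}\sum_t \beta^t\, u\big((1+\lambda)\,c_t^{\text{baseline}},\, \ell_t^{\text{baseline}}\big) \;=\; \mathbb{E}\sum_t \beta^t\, u\big(c_t^{\text{alternative}},\, \ell_t^{\text{alternative}}\big),$

so the statement that welfare losses are reduced "by about 2.2 percentage points" compares the values of $\lambda$ implied by the scenarios with and without the adjustment channels.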
Motivated by the recent discussion of the declining importance of deposits as banks' major source of funding, we investigate which factors determine funding costs at local banks. Using a panel data set of more than 800 German local savings and cooperative banks for the period from 1998 to 2004, we show that funding costs are driven not only by the relative share of comparatively cheap deposits in a bank's liabilities but, among other factors, especially by the size of the bank. In our empirical analysis we find strong and robust evidence that, ceteris paribus, smaller banks exhibit lower funding costs than larger banks, suggesting that small banks are able to attract deposits more cheaply than their larger counterparts. We argue that this is the case because smaller banks interact more personally with customers, operate in customers' geographic proximity and have longer and stronger relationships than larger banks and, hence, are able to charge higher prices for their services. Our finding of a strong influence of bank size on funding costs is also of great interest in an international context, as mergers among small local banks, the key driver of bank growth, are a recent phenomenon not only in European banking and are expected to continue in the future. At the same time, net interest income remains by far the most important source of revenue for most local banks, accounting for approximately 70% of total operating revenues in the case of German local banks. The influence of size on funding costs is of strong economic relevance: our results suggest that an increase in size by 50%, for example from EUR 500 million in total assets to EUR 750 million (exemplary for M&A transactions among local banks), increases funding costs, ceteris paribus, by approximately 18 basis points, which corresponds to roughly 7% of banks' average net interest margin.
This paper is one of the first to analyse political influence on state-owned savings banks in a developed country with an established financial market: Germany. Combining a large dataset with financial and operating figures for all 457 German savings banks from 1994 to 2006 and information on over 1,250 local elections during this period, we investigate changes in business behavior around elections. We find strong indications of political influence: the probability that savings banks close branches, lay off employees or engage in merger activities is significantly reduced around elections. At the same time, they tend to increase their extraordinary spending, which includes support for social and cultural events in the area, on average by over 15%. Finally, we find that savings banks extend significantly more loans to their corporate and private customers in the run-up to an election. In further analyses, we show that the magnitude of political influence depends on bank-specific, economic and political circumstances in the city or county: political influence seems to be facilitated by weak political majorities and profitable banks. Banks in economically weak areas seem to be less prone to political influence.
We study whether the prices of traded options contain information about future extreme market events. Our option-implied conditional expectation of market loss due to tail events, or tail loss measure, predicts future market returns as well as the magnitude and probability of market crashes, beyond and above other option-implied variables. The stock-specific tail loss measure predicts individual expected returns and the magnitude of realized stock-specific crashes in the cross-section of stocks. An investor who cares about the left tail of her wealth distribution benefits from using the tail loss measure as an information variable to construct managed portfolios of a risk-free asset and the market index.
In this speech (given at the CFS research conference on the Implementation of Price Stability held at the Bundesbank, Frankfurt am Main, 10-12 September 1998), John Vickers discusses theoretical and practical issues relating to inflation targeting as used in the United Kingdom during the past six years. After outlining the role of the Bank's Monetary Policy Committee, he considers the Committee's task from a theoretical perspective, before discussing the concept and measurement of domestically generated inflation.
Motivated by the U.S. events of the 2000s, we address whether a too low for too long interest rate policy may generate a boom-bust cycle. We simulate anticipated and unanticipated monetary policies in state-of-the-art DSGE models and in a model with bond financing via a shadow banking system, in which the bond spread is calibrated for normal and optimistic times. Our results suggest that the U.S. boom-bust was caused by the combination of (i) too low for too long interest rates, (ii) excessive optimism and (iii) a failure of agents to anticipate the extent of the abnormally favorable conditions.
In this paper, I introduce lumpy micro-level capital adjustment into a sticky information general equilibrium model. Lumpy adjustment arises because of inattentiveness in capital investment decisions instead of the more common assumption of non-convex adjustment costs. The model features inattentiveness as the only source of stickiness. I find that the model with lumpy investment yields business cycle dynamics which differ substantially from those of an otherwise identical model with frictionless investment and are much more consistent with the empirical evidence. These results therefore strengthen the case in favour of the relevance of microeconomic investment lumpiness for the business cycle.
One of the dangers of the harmonisation and unification processes taking place within the framework of the EU is that they may result in the codification of the lowest common denominator. This is precisely what is threatening to happen in respect of assignment. Referring the transfer of receivables by way of assignment to the law of the assignor's residence, as Article 13 of the Proposal does, would be opting for the most conservative solution and would for many Member States be a step backward rather than forward. A conflict rule referring assignment to the law of the assignor's residence is too rigid to do justice to the dynamic nature of assignments in cross-border transactions, and it is unjustly one-sided. It offers no real advantages when compared to other conflict rules; it even has serious disadvantages which make it unsuitable for efficient assignment-based cross-border transactions. It is not inconceivable that this conflict rule would even be contrary to the fundamental freedoms of the EC Treaty. The Community legislators in particular should be careful not to needlessly adopt rules which create insurmountable obstacles for cross-border business where choice of law by the parties would do perfectly well. Community legislation has a special responsibility to create a smooth legal environment for single market transactions.
Using two datasets containing demographically representative samples of the Dutch population, I study how lifetime experiences of aggregate labor market conditions affect personality. Three sets of findings are reported. First, experienced aggregate unemployment is negatively correlated with the levels of all Big Five personality traits, except for conscientiousness (no significant correlation). Second, in panel data models with individual fixed effects, I find that changes in experienced aggregate unemployment cause changes in emotional stability and agreeableness for men, and in conscientiousness for women. The correlation is positive, and the effects are economically large. Third, I report suggestive evidence that the main driver is experienced aggregate unemployment rather than other macroeconomic variables such as experienced GDP, stock market returns or inflation. Taken together, these findings suggest that changes in Big Five personality traits are systematically related to experienced aggregate labor market conditions.
Do household inflation expectations affect consumption-savings decisions? We link survey data on quantitative inflation expectations to administrative data on income and wealth. We document that households with higher inflation expectations save less. Estimating panel data models with year and household fixed effects, we find that a one percentage point increase in a household's inflation expectation over time is associated with a 250-400 euro reduction in the household's change in net worth per year on average. We also document that households with higher inflation expectations are more likely to acquire a car and acquire higher-value cars. In addition, we provide a quantitative model of household-level inflation expectations.
A number of recent studies have concluded that consumer spending patterns over the month are closely linked to the timing of income receipt. This correlation is interpreted as evidence of hyperbolic discounting. I re-examine patterns of spending in the diary sample of the U.S. Consumer Expenditure Survey, incorporating information on the timing of the main consumption commitment for most households: their monthly rent or mortgage payment. I find that non-durable and food spending increase by 30-48% on the day housing payments are made, with smaller increases in the days after. Moreover, households with weekly, biweekly and monthly income streams but the same timing of rent/mortgage payments have very similar consumption patterns. Exploiting variation in income, I find that households with extra liquidity decrease non-durable spending around housing payments, especially those households with a large budget share of housing.
This paper presents evidence that spillovers through shifts in bank lending can help explain the pattern of contagion. To test the role of bank lending in transmitting currency crises, we examine a panel of data on capital flows to 30 emerging markets disaggregated by 11 banking centers. In addition, we study a cross-section of emerging markets for which we construct a number of measures of competition for bank funds. For the Mexican and Asian crises, we find that the degree to which countries compete for funds from common bank lenders is a fairly robust predictor of both disaggregated bank flows and the incidence of a currency crisis. In the Russian crisis, the common bank lender helps to predict the incidence of contagion, but there is also evidence of a generalized outflow from all emerging markets. We test extensively for robustness to sample, specification and definition of the common bank lender effect. Overall, our findings suggest that spillovers through banking centers may be more important in explaining contagion than similarities in macroeconomic fundamentals and even than trade linkages.
This paper considers a trading game in which sequentially arriving liquidity traders either opt for a market order or for a limit order. One class of traders is considered to have an extended trading horizon, implying that their impatience is linked to their trading orientation. More specifically, sellers are considered to have a trading horizon of two periods, whereas buyers only have a single-period trading scope (the extended buyer-horizon case is completely symmetric). Clearly, as the life span of their submitted limit orders is longer, this setting implies that sellers are granted a natural advantage in supplying liquidity. This benefit is hampered, however, by the direct competition arising between consecutively arriving sellers. Closed-form characterizations of the order submission strategies are obtained when solving for the equilibrium of this dynamic game. These allow us to examine how these forces affect traders' order placement decisions. Further, the analysis yields insight into the dynamic process of price formation and into the market clearing process of a non-intermediated, order-driven market.
In an earlier paper, I proposed a system for evaluating the relative descriptivity of lexical items in a consistent manner in terms of the interrelations of three metrics. The first of these, including five possible degrees of descriptivity, is based on the premise that the sum of the meaningful parts of a given form is or is not equal to the meaning of the whole. The second, also composed of five degrees, is based on paraphrase-term relations in which the logical quantifiers all, some and no are applied to the terms of the paraphrase in one test and to the meaningful parts of the term (linguistic form) in the reversibility test. Both tests are applied in the form of logical propositions. The third metric, with three degrees, deals with the relative explicitness of the meaningful parts of a given form: explicit, implicit or neither. […] This system was then tested in a pilot study involving the fairly limited and semantically homogeneous lexical domain of body-part terms in a specific language, Finnish. The purpose of the present paper is to subject comparable data from other languages to the same kind of analysis and to compare the results in order to ascertain whether the generalizations arrived at with the Finnish data also hold for the other languages or, more specifically, which of these generalizations are more or less universal and which are language- or language-type-specific. The additional languages examined here are French, German, Ewe, Maasai and Swahili.
Three quantificational approaches to the measurement of lexical descriptivity are proposed, based on (i) whether the semantic sum of the parts of a lexeme is equal to the whole, (ii) paraphrase-term and term-paraphrase congruence, and (iii) the explicitness of the semantic elements of a construction. Combining all possible values into tripartite sets and then into equipollent groups results in a system composed of 12 grades. This system was tested on a semantic domain of the Finnish lexicon: body-part terms. The descriptivity indices for each lexical item were correlated with natural divisions of the body, construction-motivation types (form, function, location), grammatical construction types (endo- and exocentric compounds, derived forms, metaphors), and loanwords. These comparisons yield a number of grade profiles whereby specific descriptivity grades are characteristically associated with one or more types of body section, construction motivation, and grammatical construction. Diachronic and synchronic evidence points overwhelmingly to a process of semantic narrowing in the development of descriptive words and labels from phrases or sentences.
Self-control failure is among the major pathologies affecting individual investment decisions (Baumeister et al. (1994)), yet it has hardly been measurable in empirical research. We use cigarette addiction, identified from checking account transactions, as a proxy for low self-control and compare over 5,000 smokers to 14,000 nonsmokers. Smokers who self-direct their investments trade more frequently, exhibit more biases and achieve lower portfolio returns. We also find that smokers, some of whom might be aware of their limited levels of self-control, exhibit a higher propensity than nonsmokers to delegate decision making to professional advisors and fund managers. We document that such precommitments work successfully.
Discussions regarding the planned European Deposit Insurance Scheme (EDIS), the missing third pillar of the European Banking Union, have been ongoing since the Commission published its initial legislative proposal in 2015. A breakthrough in negotiations has yet to be achieved. The gridlock on EDIS is most commonly attributed to moral hazard concerns over insufficient risk reduction harboured on the side of northern member states, particularly Germany, due to the weak state of some other member states’ banking sectors. While moral hazard based on uneven risk reduction is helpful for explaining divergent member-state preferences on the scope of necessary risk reduction, this does not explain preferences on the institutional design of EDIS. In this paper, we argue that contrary to persistent differences on necessary risk reduction, preferences regarding the institutional design of EDIS have become more closely aligned. We analyse how preferences on EDIS developed in the key member states of Germany, France, and Italy. In all sampled countries, we find path-dependent benefits connected to the current design of national Deposit Guarantee Schemes (DGS) that shifted preferences of the banking sector or significant subsectors in favour of retaining national DGSs. Overall, given that a compromise on risk reduction can be accomplished, we argue that current preferences in these key member states provide an opportunity to implement EDIS in the form of a reinsurance system that maintains national DGSs in combination with a supranational fund.
Much has been written on the success of the Indian software industry, enumerating systemic factors like first-class higher education and research institutions, both public and private, low labour costs, stimulating (state) policies etc. However, although most studies analyzing the 'Indian' software industry essentially cover the South (and West) Indian clusters, this regional dimension has not been tackled explicitly. This paper supplements the economic geography explanations mentioned above with the additional factor of social capital, which is important not only within the region but also in the transnational (ethnic) networks linking Indian software clusters with Silicon Valley. In other words, spatial proximity is complemented with cultural proximity, thereby extending the system of innovation. The main hypothesis is that some Indian regions are more apt for economic development and innovation due to their higher affinity to education and learning, as well as their more general openness, which has been a main finding of my interviews. In addition, the transnational networks of Silicon Valley Indians seem to be dominated by South Indians, thus corroborating the regional clustering of the Indian software industry. JEL Classifications: O30, R12, Z13, L86
This paper aims to analyze the impact of different types of venture capitalists on the performance of their portfolio firms around and after the IPO. We thereby investigate the hypothesis that the different governance structures, objectives and track records of different types of VCs have a significant impact on their respective IPOs. We explore this hypothesis by using a data set embracing all IPOs which occurred on Germany's Neuer Markt. Our main finding is that significant differences among the different VCs exist. Firms backed by independent VCs perform significantly better two years after the IPO compared to all other IPOs, and their share prices fluctuate less than those of their counterparts in this period of time. Obviously, independent VCs, which concentrated mainly on growth stocks (low book-to-market ratio) and large firms (high market value), were able to add value by leading to less post-IPO idiosyncratic risk and more return (after controlling for all other effects). On the contrary, firms backed by public VCs (being small and having a high book-to-market ratio) showed relative underperformance. Klassifikation: G10, G14, G24. 29th January 2004.
This paper sets out to analyze the influence of different types of venture capitalists on the performance of their portfolio firms around and after IPO. We investigate the hypothesis that different governance structures, objectives, and track records of different types of VCs have a significant impact on their respective IPOs. We explore this hypothesis using a data set embracing all IPOs that have occurred on Germany's Neuer Markt. Our main finding is that significant differences among the different VCs exist. Firms backed by independent VCs perform significantly better two years after IPO as compared to all other IPOs, and their share prices fluctuate less than those of their counterparts in this period of time. On the contrary, firms backed by public VCs show relative underperformance. The fact that this could occur implies that market participants did not correctly assess the role played by different types of VCs.
Using a unique, hand-collected database of all venture-backed firms listed on Germany's Neuer Markt, we analyze the history of venture capital financing of these firms before the IPO and the behavior of venture capitalists at the IPO. We can detect significant differences in the behavior and characteristics of German vs. foreign venture capital firms. The discrepancy in the investment and divestment strategies may be explained by the grandstanding phenomenon, the value-added hypothesis and certification issues. German venture capitalists are typically younger and smaller than their counterparts from abroad. They syndicate less. The sectoral structure of their portfolios differs from that of foreign venture capital firms. We also find that German venture capitalists typically take companies with lower offering volumes to the market. They usually finance firms at a later stage, carry through fewer investment rounds and take their portfolio firms public earlier. In companies where a German firm is the lead venture capitalist, the fraction of equity held by the group of venture capitalists is lower, their selling intensity at the IPO is higher and the committed lock-up period is longer.
We analyze the venture capitalist's decision on the timing of the IPO, the offer price and the fraction of shares he sells in the course of the IPO. A venture capitalist may decide to take a company public or to liquidate it after one or two financing periods. A longer participation by the venture capitalist in a firm (later IPO) may increase its value while also increasing costs for the venture capitalist. Due to his active involvement, the venture capitalist knows the type of firm and the kind of project he finances before potential new investors do. This information asymmetry is resolved at the end of the second period. Under certain assumptions about the parameters and the structure of the model, we obtain a single equilibrium in which high-quality firms separate from low-quality firms. The latter are liquidated after the first period, while the former go public either after having been financed by the venture capitalist for two periods or after one financing period using a lock-up. Whether a strategy of one or two financing periods is chosen depends on the consulting intensity of the project and/or on the experience of the venture capitalist. In the separating equilibrium, the offer price corresponds to the true value of the firm. An earlier version of this paper appeared as: The Decision of Venture Capitalists on Timing and Extent of IPOs (ZEW Discussion Paper No. 03-12). This version July 2003.
This paper analyses the long-term effects of improved small-scale lending, often provided by microfinance institutions set up with the support of development aid. The analysis shows that some common assumptions about microfinance are not true at all: first, the impact on income will accrue not to the microenterprises themselves, but rather to the consumers of their products. Second, microfinance will have a significant positive effect on the wage levels of employees in the informal sector. Third, microfinance will cause high growth rates in the informal production sector, whereas the trade sector will either contract or at best grow very little.
The theoretical derivation of credit market segmentation as the result of a free market process
(2003)
Information asymmetries make it difficult for banks to assess accurately whether specific entrepreneurs are able and/or willing to repay their loans. This leads to implicit interest rate ceilings, i.e. banks "refuse" to increase their interest rates beyond this ceiling, as doing so would lower their net returns. Although the maximum interest rate increases as the size of enterprises decreases, such ceilings nonetheless constrain the banks’ ability to set interest rates at a level that would enable them to cover costs. If transaction costs are high, the total costs associated with granting small and medium-sized loans will exceed the maximum average return which the banks can earn by issuing such loans. For this reason, banks do not lend to small and medium-sized enterprises, and, as a consequence, these businesses have no access to formal sector loans. Because micro and small enterprises have a very high RoI, it is worthwhile for them to rely on expensive informal loans to finance their operations, at least until they reach a certain size. Once they have reached this size, however, it does not make economic sense for them to continue taking out informal credits, and thus they face a growth constraint imposed by the credit market. Medium-sized enterprises earn a lower RoI than small ones, which is why borrowing in the informal credit market is not a worthwhile option for them. Moreover, they do not have access to credit from formal financial institutions, and are thus excluded from obtaining any kind of financing in either of the two credit markets. As a result of free, unregulated market forces, we get a stable equilibrium in which the credit market is segmented into an informal (small loan) segment, a formal (large loan) segment and, in between, a "non-market" (medium loan) segment.
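The mechanism can be illustrated numerically: if the default probability rises with the interest rate, the bank's expected net return per unit lent has an interior maximum (the implicit ceiling), and once a fixed transaction cost per loan is spread over a small loan amount, total costs exceed that maximum return and the loan is not granted. A toy sketch with invented parameters, not taken from the paper:

    # Toy illustration of an implicit interest rate ceiling and loan-size segmentation.
    # Parameters are invented; the paper develops the argument analytically.
    def expected_gross_return(r, default_prob):
        """Expected repayment per unit lent when a fraction of borrowers defaults."""
        return (1 - default_prob(r)) * (1 + r)

    def best_rate(default_prob, grid):
        return max(grid, key=lambda r: expected_gross_return(r, default_prob))

    default_prob = lambda r: min(1.0, 3 * r**2)     # default risk rises with the rate
    grid = [i / 1000 for i in range(0, 601)]

    r_star = best_rate(default_prob, grid)                            # the implicit ceiling
    max_return = expected_gross_return(r_star, default_prob) - 1.0    # net of principal

    fixed_cost_per_loan, refinancing_cost = 300.0, 0.05
    for loan_size in (2_000, 20_000, 200_000):
        total_cost_rate = refinancing_cost + fixed_cost_per_loan / loan_size
        lends = max_return >= total_cost_rate
        print(f"loan {loan_size:>7}: max net return {max_return:.3f}, "
              f"cost rate {total_cost_rate:.3f}, bank lends: {lends}")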
The extension of long-term loans, e.g. to finance housing, is adversely affected by inflation. For one thing, the higher nominal interest rates charged by the banks in response to inflation mean that borrowers have to make (nominally) higher interest payments, which unnecessarily reduces their borrowing capacity. For another, long-term loans with variable interest rates increase the probability that borrowers will become unable to meet their payment obligations. The present paper examines these two assertions in detail. At the same time, it presents a concept for substantially reducing the weaknesses of conventional lending methodologies. We start by investigating the consequences of a stable inflation rate on the borrowing capacity of credit clients, then go on to analyze the impact of fluctuating inflation rates on the risk of default.
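The first assertion is the familiar front-loading, or tilt, effect: with a level nominal annuity, a higher nominal rate driven by inflation raises the real burden of the early installments even if the real rate is unchanged, which caps the loan size a borrower with a given current income can service. A small sketch with invented numbers:

    # Tilt-effect sketch: first installment of a nominal annuity under different
    # inflation rates, holding the real interest rate constant. Numbers are illustrative.
    def annuity_payment(principal, nominal_rate, n_periods):
        """Level nominal installment of an annuity loan."""
        if nominal_rate == 0:
            return principal / n_periods
        q = 1 + nominal_rate
        return principal * nominal_rate * q**n_periods / (q**n_periods - 1)

    principal, years, real_rate = 100_000, 20, 0.05
    for inflation in (0.00, 0.10, 0.30):
        nominal_rate = (1 + real_rate) * (1 + inflation) - 1   # Fisher relation
        first_installment = annuity_payment(principal, nominal_rate, years)
        print(f"inflation {inflation:.0%}: nominal rate {nominal_rate:.1%}, "
              f"first-year installment {first_installment:,.0f}")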