Contagious stablecoins?
(2023)
Can competing stablecoins produce efficient and stable outcomes? We study competition among stablecoins pegged to a stable currency. They are backed by interest-bearing safe assets and can be redeemed with the issuer or traded in a secondary market. If an issuer sticks to an appropriate investment and redemption rule, its stablecoin is invulnerable to runs. However, since an issuer must pay interest on its stablecoin whenever other issuers do, competing interest-bearing stablecoins are contagious and can render the economy inefficient and unstable. The efficient allocation is uniquely implemented when regulation prevents interest payments on stablecoins.
The Eurosystem and the Deutsche Bundesbank will incur substantial losses in 2023 that are likely to persist for several years. Due to the massive purchases of securities over the last 10 years, especially of government bonds, the banks' excess reserves have risen sharply. The resulting high interest payments to the banks since the turnaround in monetary policy, combined with little income from the large-scale securities holdings, have led to massive criticism. The banks were said to be making "unfair" profits as a result, while the fiscal authorities had to forego the previously customary transfers of central bank profits. Populist demands to limit bank profits by, for example, drastically increasing the minimum reserve ratios in the Eurosystem to reduce excess reserves would create severe new problems and are neither justified nor helpful. Ultimately, the EU member states have benefited for a very long time from historically low interest rates because of the Eurosystem's extraordinarily loose monetary policy and must now bear the flip side: the consequences of the massive expansion of central bank balance sheets during the necessary period of monetary policy normalisation.
This cumulative dissertation contains four self-contained chapters on stochastic games and learning in intertemporal choice.
Chapter 1 presents an experiment on value learning in a setting where actions have both immediate and delayed consequences. Subjects make a series of choices between abstract options, with values that have to be learned by sampling. Each option is associated with two payoff components: one is revealed immediately after the choice, the other with a delay of one round. Objectively, both payoff components are equally important, but most subjects systematically underreact to the delayed consequences. The resulting behavior appears impatient or myopic. However, there is no inherent reason to discount: all rewards are paid simultaneously, after the experiment. Elicited beliefs on the value of options are in accordance with choice behavior. These results demonstrate that revealed impatience may arise from frictions in learning, and that discounting does not necessarily reflect deep time preferences. In a treatment variation, subjects first learn passively from the evidence generated by others, before then making a series of own choices. Here, the underweighting of delayed consequences is attenuated, in particular for the earliest own decisions. Active decision making thus seems to play an important role in the emergence of the observed bias.
Chapter 2 introduces and proves existence of Markov quantal response equilibrium (QRE), an application of QRE to finite discounted stochastic games. We then study a specific case, logit Markov QRE, which arises when players react to total discounted payoffs using the logit choice rule with precision parameter λ. We show that the set of logit Markov QRE always contains a smooth path that leads from the unique QRE at λ = 0 to a stationary equilibrium of the game as λ goes to infinity. Following this path makes it possible to solve arbitrary finite discounted stochastic games numerically; an implementation of this algorithm is publicly available as part of the package sgamesolver. We further show that all logit Markov QRE are ε-equilibria, with a bound for ε that is independent of the payoff function of the game and decreases hyperbolically in λ. Finally, we establish a link to reinforcement learning by characterizing logit Markov QRE as the stationary points of a game dynamic that arises when all players follow the well-established reinforcement learning algorithm expected SARSA.
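The logit choice rule at the heart of this chapter can be illustrated in a far simpler setting than a stochastic game. The sketch below is a toy example of my own (not code from sgamesolver): it computes a logit QRE of a one-shot 2x2 game by damped fixed-point iteration.

```python
import numpy as np

def logit_qre(payoffs_a, payoffs_b, lam, iters=2000):
    """Damped fixed-point iteration for a logit QRE of a 2x2 bimatrix game.

    payoffs_a[i, j] is player A's payoff when A plays i and B plays j
    (analogously for payoffs_b). lam is the logit precision: lam = 0 gives
    uniform play; as lam grows, play approaches a Nash equilibrium.
    """
    p = np.full(2, 0.5)  # A's mixed strategy
    q = np.full(2, 0.5)  # B's mixed strategy
    for _ in range(iters):
        ua = payoffs_a @ q  # A's expected payoff per own action
        ub = p @ payoffs_b  # B's expected payoff per own action
        new_p = np.exp(lam * (ua - ua.max())); new_p /= new_p.sum()
        new_q = np.exp(lam * (ub - ub.max())); new_q /= new_q.sum()
        p, q = 0.5 * p + 0.5 * new_p, 0.5 * q + 0.5 * new_q
    return p, q

# Symmetric prisoner's dilemma (actions: cooperate, defect). At lam = 1 the
# logit QRE puts most, but not all, weight on the dominant action defect.
PD = np.array([[3.0, 0.0], [5.0, 1.0]])
p, q = logit_qre(PD, PD.T, lam=1.0)
```

As lam is increased, the defect probability rises towards one, tracing out the kind of path in λ the chapter studies.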
Chapter 3 introduces the logarithmic stochastic tracing procedure, a homotopy method to compute stationary equilibria for finite and discounted stochastic games. We build on the linear stochastic tracing procedure (Herings and Peeters 2004), but introduce logarithmic penalty terms as a regularization device, which brings two major improvements. First, the scope of the method is extended: it now has a convergence guarantee for all games of this class, rather than just generic ones. Second, by ensuring a smooth and interior solution path, computational performance is increased significantly. A ready-to-use implementation is publicly available. As demonstrated here, its speed compares quite favorably to other available algorithms, and it makes it possible to solve games of considerable size in reasonable time. Because the method involves the gradual transformation of a prior into equilibrium strategies, it is possible to search the prior space and uncover potentially multiple equilibria and their respective basins of attraction. This also connects the method to the established theory of equilibrium selection.
Chapter 4 introduces sgamesolver, a Python package that uses the homotopy method to compute stationary equilibria of finite discounted stochastic games. A short user guide is complemented by a discussion of the homotopy method, the two implemented homotopy functions, logit Markov QRE and logarithmic tracing, and the predictor-corrector procedure and its implementation in sgamesolver. Basic and advanced use cases are demonstrated using several example games. Finally, we discuss the topic of symmetries in stochastic games.
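The predictor-corrector idea can be illustrated on a scalar toy problem (an illustrative sketch only; sgamesolver's homotopy functions and path-following are far more involved, and all names below are mine). We deform an easy equation whose solution we know into the target equation and follow the solution along the way.

```python
def solve_by_homotopy(f, df, x0, steps=100, newton_iters=20):
    """Trace H(x, t) = (1 - t)*(x - x0) + t*f(x) = 0 from t = 0 (where the
    solution is x0) to t = 1 (where it solves f(x) = 0), using an Euler
    predictor along the path and Newton corrector steps at each t."""
    x = x0
    for k in range(steps):
        t0, t1 = k / steps, (k + 1) / steps
        # Predictor: dx/dt = -H_t / H_x from implicit differentiation of H = 0
        H_t = -(x - x0) + f(x)
        H_x = (1 - t0) + t0 * df(x)
        x = x + (t1 - t0) * (-H_t / H_x)
        # Corrector: Newton iterations on H(., t1) = 0
        for _ in range(newton_iters):
            H = (1 - t1) * (x - x0) + t1 * f(x)
            x -= H / ((1 - t1) + t1 * df(x))
    return x

# Follow the path from x0 = 1 to the real root of x^3 - 2 = 0.
root = solve_by_homotopy(lambda x: x**3 - 2.0, lambda x: 3 * x**2, x0=1.0)
```

The same architecture, with a vector-valued H and adaptive step sizes, underlies homotopy solvers for games.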
The pricing of digital art
(2023)
The intersection of recent advancements in generative artificial intelligence and blockchain technology has propelled digital art into the spotlight. Digital art pricing recognizes that owners derive utility beyond the artwork’s inherent value. We incorporate the consumption utility associated with digital art and model the stochastic discount factor and risk premiums. Furthermore, we conduct a calibration analysis to analyze the effects of shifts in the real and digital economy. Higher returns are required in a digital market upswing due to increased exposure to systematic risk and digital art prices are especially responsive to fluctuations in business cycles within digital markets.
Using a field study at a German brokerage, we investigate advised individual investors’ behavior and outcomes after self-selecting into a flat-fee scheme (percentage of portfolio value) for mutual funds. In a difference-in-differences setting, we compare 699 switchers to propensity-score-matched advisory clients who remained in the commission-based scheme. Switchers increase their portfolio values, improve portfolio diversification, and increase their portfolio performance. They also demand more financial advice and follow more advisor recommendations. We argue that switchers attribute a higher quality to the unchanged advisory services.
The recent COVID-19 pandemic represents an unprecedented worldwide event to study the influence of related news on the financial markets, especially during the early stage of the pandemic when information on the new threat came rapidly and was complex for investors to process. In this paper, we investigate whether the flow of news on COVID-19 had an impact on forming market expectations. We analyze 203,886 online articles dealing with COVID-19 and published on three news platforms (MarketWatch.com, NYTimes.com, and Reuters.com) in the period from January to June 2020. Using machine learning techniques, we extract the news sentiment through a financial market-adapted BERT model that enables recognizing the context of each word in a given item. Our results show that there is a statistically significant and positive relationship between sentiment scores and S&P 500 market returns. Furthermore, we provide evidence that sentiment components and news categories on NYTimes.com were differently related to market returns.
Can consumption-based mechanisms generate positive and time-varying real term premia as we see in the data? I show that only models with time-varying risk aversion or models with high consumption risk can independently produce these patterns. The latter explanation has not been analysed before with respect to real term premia, and it relies on a small group of investors exposed to high consumption risk. Additionally, it can give rise to a “consumption-based arbitrageur” story of term premia. In relation to preferences, I consider models with both time-separable and recursive utility functions. Specifically for recursive utility, I introduce a novel perturbation solution method in terms of the intertemporal elasticity of substitution. This approach has not been used before in such models, it is easy to implement, and it allows a wide range of values for the parameter of intertemporal elasticity of substitution.
The complexities of geopolitical events, financial and fiscal crises, and the ebb and flow of personal life circumstances can weigh heavily on individuals’ minds as they make critical economic decisions. To investigate the impact of cognitive load on such decisions, the authors conducted an incentivized online experiment involving a representative sample of 2,000 French households. The results revealed that exposure to a taxing and persistent cognitive load significantly reduced consumption, particularly for individuals under the threat of furlough, while simultaneously increasing their account balances, particularly for those not facing such employment uncertainty. These effects were not driven by supply constraints or a worsening of credit constraints. Instead, cognitive load primarily affected the optimality of the chosen policy rules and impaired the ability of the standard economic model to accurately predict consumption patterns, although this effect was less pronounced among college-educated subjects.
We investigate how unconventional monetary policy, via central banks’ purchases of corporate bonds, unfolds in credit-saturated markets. While this policy results in a loosening of credit market conditions as intended by policymakers, we report two unintended side effects. First, the policy impacts the allocation of credit among industries. Affected banks reallocate loans from investment-grade firms active on bond markets almost entirely to real estate asset managers. Other industries, notably real estate developers and construction firms, do not obtain more loans. We document an increase in real estate prices due to this policy, which fuels real estate overvaluation. Second, more loan write-offs arise from lending to these firms, and banks are not compensated for this risk by higher interest rates. We document a drop in bank profitability and, at the same time, a higher reliance on real estate collateral. Our findings suggest that central banks’ quantitative easing has substantial adverse effects in credit-saturated economies.
We conduct a field experiment with clients of a German universal bank to explore the impact of peer information on sustainable retail investments. Our results show that information about peers’ inclination towards sustainable investing raises the amount allocated to stock funds labeled sustainable, when communicated during a buying decision. This effect is primarily driven by participants initially underestimating peers’ propensity to invest sustainably. Further, treated individuals indicate an increased interest in additional information on sustainable investments, primarily on risk and return expectations. However, when analyzing account-level portfolio holding data over time, we detect no spillover effects of peer information on later sustainable investment decisions.
Many consumers care about climate change and other externalities associated with their purchases. We analyze the behavior and market effects of such “socially responsible consumers” in three parts. First, we develop a flexible theoretical framework to study competitive equilibria with rational consequentialist consumers. In violation of price taking, equilibrium feedback non-trivially dampens a consumer’s mitigation efforts, undermining responsible behavior. This leads to a new type of market failure, where even consumers who fully “internalize the externality” overconsume externality-generating goods. At the same time, socially responsible consumers change the relative effectiveness of taxes, caps, and other policies in lowering the externality. Second, since consumer beliefs about and preferences over dampening play a crucial role in our framework, we investigate them empirically via a tailored survey. Consistent with our model, consumers are predominantly consequentialist, and on average believe in dampening. Inconsistent with our model, however, many consumers fail to anticipate dampening. Third, therefore, we analyze how such “naive” consumers modify our theoretical conclusions. Naive consumers behave more responsibly than rational consumers in a single-good economy, but may behave less responsibly in a multi-good economy with cross-market spillovers. A mix of naive and rational consumers may yield the worst outcomes.
This paper investigates stock market reaction to greenwashing by analyzing a new channel whereby companies change their names to green-related ones (i.e., names that evoke green and sustainable sentiments) to persuade the public that their activities are green. The findings reveal a striking positive stock price reaction to the announcement of corporate name changes to green-related names only for companies not involved in green activities at the time of the announcement. However, over an extended period of time, companies unrelated to green activities experience substantial negative abnormal returns if they fail to align their operational focus with the new name after the change.
How does group identity affect belief formation? To address this question, we conduct a series of online experiments with a representative sample of individuals in the US. Using the setting of the 2020 US presidential election, we find evidence of intergroup preference across three distinct components of the belief formation cycle: a biased prior belief, avoidance of outgroup information sources, and a belief-updating process that places greater (less) weight on prior (new) information. We further find that an intervention reducing the salience of information sources decreases outgroup information avoidance by 50%. In a social learning context in wave 2, we find participants place 33% more weight on ingroup than outgroup guesses. Through two waves of interventions, we identify source utility as the mechanism driving group effects in belief formation. Our analyses indicate that our observed effects are driven by groupy participants who exhibit stable and consistent intergroup preferences in both allocation decisions and belief formation across all three waves. These results suggest that policymakers could reduce the salience of group and partisan identity associated with a policy to decrease outgroup information avoidance and increase policy uptake.
This paper applies structure-preserving doubling methods to solve the matrix quadratic underlying the recursive solution of linear DSGE models. We present and compare two structure-preserving doubling algorithms (SDAs) to other competing methods – the QZ method, a Newton algorithm, and an iterative Bernoulli approach – as well as the related cyclic and logarithmic reduction algorithms. Our comparison uses nearly 100 different models from the Macroeconomic Model Data Base (MMB) and different parameterizations of the monetary policy rule in the medium-scale New Keynesian model of Smets and Wouters (2007). We find that both SDAs perform very favorably relative to QZ, with generally more accurate solutions computed in less time. While we collect theoretical convergence results that promise quadratic convergence rates to a unique stable solution, the algorithms may fail to converge when there is a breakdown due to singularity of the coefficient matrices in the recursion. One of the proposed algorithms can overcome this problem by an appropriate (re)initialization. This SDA also performs particularly well in refining solutions of different methods or from nearby parameterizations.
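The simplest of the comparison methods named above, the iterative Bernoulli approach, can be sketched in a few lines (a toy version for a well-conditioned example, not the SDA itself; function and variable names are mine):

```python
import numpy as np

def bernoulli_solve(A, B, C, iters=500, tol=1e-12):
    """Iterate P <- -(A P + B)^{-1} C for the matrix quadratic
    A P^2 + B P + C = 0. Under standard conditions this converges
    (linearly) to the minimal, i.e. stable, solvent."""
    n = B.shape[0]
    P = np.zeros((n, n))
    for _ in range(iters):
        P_new = -np.linalg.solve(A @ P + B, C)
        if np.linalg.norm(P_new - P) < tol:
            return P_new
        P = P_new
    return P

# Example: with A = I, B = -3I, C = 2I the solvents are I and 2I;
# the iteration picks out the minimal (stable) solvent P = I.
A = np.eye(2)
B = -3 * np.eye(2)
C = 2 * np.eye(2)
P = bernoulli_solve(A, B, C)
```

Doubling algorithms reach the same fixed point with quadratic rather than linear convergence, which is one source of the speed advantage reported in the paper.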
Whatever it takes to understand a central banker: embedding their words using neural networks
(2023)
Dictionary approaches are at the forefront of current techniques for quantifying central bank communication. In this paper, the authors propose a novel language model that is able to capture subtleties of messages, such as one of the most famous sentences in central bank communication, when ECB President Mario Draghi stated that "within [its] mandate, the ECB is ready to do whatever it takes to preserve the euro".
The authors utilize a text corpus that is unparalleled in size and diversity in the central bank communication literature and introduce a novel approach to text quantification from computational linguistics. This allows them to provide high-quality central bank-specific textual representations and demonstrate their applicability by developing an index that tracks deviations in the Fed's communication towards inflation targeting. Their findings indicate that these deviations in communication significantly impact monetary policy actions, substantially reducing the reaction towards inflation deviations in the US.
Christine Laudenbach and Vincent Lindner: To promote financial education among children, young people, and adults in the long term, comprehensive information services must reach the entire population in Germany with the help of cooperation partners. Talking about finances can no longer be a taboo subject.
Standard applications of the consumption-based asset pricing model assume that goods and services within the nondurable consumption bundle are substitutes. We estimate substitution elasticities between different consumption bundles and show that households cannot substitute energy consumption by consumption of other nondurables. As a consequence, energy consumption affects the pricing function as a separate factor. Variation in energy consumption betas explains a large part of the premia related to value, investment, and operating profitability. For example, value stocks are typically more energy-intensive than growth stocks and thus riskier, since they suffer more from the oil supply shocks that also affect households.
We propose a model with mean-variance foreign investors who exhibit a convex disutility associated with brown bond holdings. The model predicts that bond green premia should be smaller in economies with a more closed financial account and highly volatile exchange rates. This happens because foreign intermediaries invest relatively less in such economies, and this lowers the marginal disutility of investing in polluting activities. We find strong empirical evidence in favor of this hypothesis using a global bond market dataset. Exchange rate volatility and financial account openness are thus able to explain the higher financing costs of green projects in emerging markets relative to advanced economies, especially when green bonds are denominated in local currency: a disadvantage that we can call the "green sin" of emerging economies.
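The portfolio problem can be sketched in a simplified form with a linear rather than convex disutility on brown holdings; names and numbers below are illustrative, not from the paper:

```python
import numpy as np

def optimal_weights(mu, Sigma, gamma, phi, brown):
    """Mean-variance portfolio with a linear disutility phi on brown
    holdings (a simplified stand-in for the paper's convex disutility):
        max_w  w'mu - (gamma/2) w'Sigma w - phi * w'brown
    The first-order condition gives w* = (1/gamma) Sigma^{-1} (mu - phi*brown),
    so the brown penalty acts like a haircut on expected returns.
    """
    return np.linalg.solve(gamma * Sigma, mu - phi * brown)

# Two assets with equal expected returns; asset 0 is brown.
mu = np.array([0.05, 0.05])
Sigma = np.eye(2)
brown = np.array([1.0, 0.0])
w = optimal_weights(mu, Sigma, gamma=1.0, phi=0.02, brown=brown)
```

The brown asset is held in smaller quantity, so in equilibrium it must offer a higher yield; shrinking phi (e.g. because fewer foreign intermediaries are present) shrinks that green premium, which is the comparative static the paper tests.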
This study looks at potential windfall profits for the four banking acquisitions in 2023. Based on accounting figures, an FT article states that a total of USD 44bn was left on the table. We regard accounting figures as a misleading basis for this analysis. By estimating market-based cumulative abnormal returns (CAR), we find positive abnormal returns in all four cases, which, when quantified, amount to around half of the FT’s accounting figures. Furthermore, we argue that transparent auctions with enough bidders should be preferred to negotiated bank sales.
This document was provided/prepared by the Economic Governance and EMU Scrutiny Unit at the request of the ECON Committee.
This paper develops and implements a backward and forward error analysis of and condition numbers for the numerical stability of the solutions of linear dynamic stochastic general equilibrium (DSGE) models. Comparing seven different solution methods from the literature, I demonstrate an economically significant loss of accuracy specifically in standard, generalized Schur (or QZ) decomposition based solution methods resulting from large backward errors in solving the associated matrix quadratic problem. This is illustrated in the monetary macro model of Smets and Wouters (2007) and two production-based asset pricing models, a simple model of external habits with a readily available symbolic solution and the model of Jermann (1998) that lacks such a symbolic solution - QZ-based numerical solutions miss the equity premium by up to several annualized percentage points for parameterizations that either match the chosen calibration targets or are close to parameterizations in the literature. While the numerical solution methods from the literature failed to give any indication of these potential errors, easily implementable backward-error metrics and condition numbers are shown to successfully warn of such potential inaccuracies. The analysis is then performed for a database of roughly 100 DSGE models from the literature and a large set of draws from the model of Smets and Wouters (2007). While economically relevant errors do not appear pervasive from these latter applications, accuracies that differ by several orders of magnitude persist.
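A backward-error metric of the kind described is indeed easy to implement. The sketch below shows one common normwise variant for the matrix quadratic; the paper's exact metrics and condition numbers may differ, and the example matrices are mine:

```python
import numpy as np

def quadratic_backward_error(A, B, C, X):
    """Normwise relative backward error of X as a solvent of
    A X^2 + B X + C = 0: the residual norm scaled by
    ||A|| ||X||^2 + ||B|| ||X|| + ||C||. Values near machine precision
    indicate a backward-stable solution; large values warn of inaccuracy."""
    R = A @ X @ X + B @ X + C
    nX = np.linalg.norm(X)
    denom = (np.linalg.norm(A) * nX**2
             + np.linalg.norm(B) * nX
             + np.linalg.norm(C))
    return np.linalg.norm(R) / denom

# An exact solvent has backward error zero (up to rounding);
# a 1% perturbation of it makes the metric clearly positive.
A = np.eye(2)
B = -3 * np.eye(2)
C = 2 * np.eye(2)
err_exact = quadratic_backward_error(A, B, C, np.eye(2))
err_perturbed = quadratic_backward_error(A, B, C, 1.01 * np.eye(2))
```

Checks of this sort can be run after any solver, QZ-based or otherwise, at negligible cost relative to solving the model.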
In recent years, European regulators have debated restricting the time an online tracker can track a user to protect consumer privacy better. Despite the significance of these debates, there has been a noticeable absence of any comprehensive cost-benefit analysis. This article fills this gap on the cost side by suggesting an approach to estimate the economic consequences of lifetime restrictions on cookies for publishers. The empirical study on cookies of 54,127 users who received ∼128 million ad impressions over ∼2.5 years yields an average cookie lifetime of 279 days, with an average value of €2.52 per cookie. Only ∼13 % of all cookies increase their daily value over time, but their average value is about four times larger than the average value of all cookies. Restricting cookies’ lifetime to one year (two years) could potentially decrease their lifetime value by ∼25 % (∼19 %), which represents a potential decrease in the value of all cookies of ∼9 % (∼5%). Most cookies, however, would not be affected by lifetime restrictions of 12 or 24 months as 72 % (85 %) of the users delete their cookies within 12 (24) months. In light of the €10.60 billion cookie-based display ad revenue in Europe, such restrictions would endanger €904 million (€576 million) annually, equivalent to €2.08 (€1.33) per EU internet user. The article discusses the marketing-strategy challenges and opportunities these results present for advertisers and publishers.
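The headline figures are internally consistent, as a quick back-of-envelope check shows (all inputs are the figures reported above; the EU internet-user count is implied by them, not sourced separately):

```python
# Figures as reported in the text.
total_cookie_revenue = 10.60e9  # EUR, cookie-based display ad revenue in Europe
at_risk_12m = 904e6             # EUR endangered by a 12-month lifetime limit
at_risk_24m = 576e6             # EUR endangered by a 24-month lifetime limit
per_user_12m = 2.08             # EUR per EU internet user, 12-month limit

# Share of European cookie-based revenue at risk under the 12-month limit
# (about 8.5%, consistent with the quoted ~9% decrease in cookie value).
share_12m = at_risk_12m / total_cookie_revenue

# The per-user figure implies roughly 435 million EU internet users...
implied_users = at_risk_12m / per_user_12m

# ...which reproduces the reported EUR 1.33 per user for the 24-month limit.
per_user_24m = at_risk_24m / implied_users
```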
Even as online advertising continues to grow, a central question remains: Who to target? Yet, advertisers know little about how to select from the hundreds of audience segments for targeting (and combinations thereof) for a profitable online advertising campaign. Utilizing insights from a field experiment on Facebook (Study 1), we develop a model that helps advertisers solve the cold-start problem of selecting audience segments for targeting. Our model enables advertisers to calculate the break-even performance of an audience segment to make a targeted ad campaign at least as profitable as an untargeted one. Advertisers can use this novel model to decide whether to test specific audience segments in their campaigns (e.g., in randomized controlled trials). We apply our model to data from the Spotify ad platform to study the profitability of different audience segments (Study 2). Approximately half of those audience segments require the click-through rate to double compared to an untargeted campaign, which is unrealistically high for most ad campaigns. Our model also shows that narrow segments require a lift that is likely not attainable, specifically when the data quality of these segments is poor. We confirm this theoretical finding in an empirical study (Study 3): A decrease in data quality due to Apple’s introduction of the App Tracking Transparency (ATT) framework more negatively affects the click-through rate of narrow (versus broad) audience segments.
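The break-even logic can be sketched with a stylized per-impression profit comparison (my own simplified formulation with hypothetical numbers, not the authors' model):

```python
def break_even_ctr(ctr_untargeted, cpm_untargeted, cpm_targeted, value_per_click):
    """Minimum click-through rate a targeted campaign must reach for its
    profit per impression to match an untargeted campaign:
        ctr_t * v - c_t >= ctr_u * v - c_u
        =>  ctr_t >= ctr_u + (c_t - c_u) / v
    CPMs are per 1000 impressions, so cost per impression is cpm / 1000."""
    c_u = cpm_untargeted / 1000
    c_t = cpm_targeted / 1000
    return ctr_untargeted + (c_t - c_u) / value_per_click

# Hypothetical example: 0.1% baseline CTR, targeting doubles the CPM
# from 2 to 4 EUR, and each click is worth 2 EUR to the advertiser.
ctr_star = break_even_ctr(0.001, 2.0, 4.0, 2.0)
required_lift = ctr_star / 0.001
```

In this example the targeted campaign must double the click-through rate just to break even, illustrating why the CTR lifts required by many audience segments in Study 2 are unrealistically high.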
In this article, we examine anti-refugee hate crime in the wake of the large influx of refugees to Germany in 2014 and 2015. By exploiting institutional features of the assignment of refugees to German regions, we estimate the impact of unexpected and sudden large-scale immigration on hate crime against refugees. Results indicate that it is not simply the size of local refugee inflows which drives the increase in hate crime, but rather the combination of refugee arrivals and latent anti-refugee sentiment. We show that ethnically homogeneous areas, areas which experienced hate crimes in the 1990s, and areas with high support for the Nazi party in the Weimar Republic, are more prone to respond to the arrival of refugees with incidents of hate crime against this group. Our results highlight the importance of regional anti-immigration sentiment in the analysis of the incumbent population’s reaction to immigration.
A novel spatial autoregressive model for panel data is introduced, which incorporates multilayer networks and accounts for time-varying relationships. Moreover, the proposed approach allows the structural variance to evolve smoothly over time and enables the analysis of shock propagation in terms of time-varying spillover effects.
The framework is applied to analyse the dynamics of international relationships among the G7 economies and their impact on stock market returns and volatilities. The findings underscore the substantial impact of cooperative interactions and highlight discernible disparities in network exposure across G7 nations, along with nuanced patterns in direct and indirect spillover effects.
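The spillover machinery of such spatial autoregressive models can be sketched in its simplest single-layer, time-invariant form (an illustrative reduction of my own; the proposed model is multilayer and time-varying):

```python
import numpy as np

def sar_effects(W, rho, beta):
    """Average direct and indirect (spillover) effects in a spatial
    autoregressive model y = rho*W*y + x*beta + e. The reduced form
    y = (I - rho*W)^{-1} x beta yields the effect matrix
    S = (I - rho*W)^{-1} * beta: diagonal entries are direct effects
    (including network feedback), off-diagonal entries are spillovers."""
    n = W.shape[0]
    S = np.linalg.inv(np.eye(n) - rho * W) * beta
    direct = np.mean(np.diag(S))
    indirect = (S.sum() - np.trace(S)) / n
    return direct, indirect

# Four economies on a ring, row-normalized adjacency.
W = np.array([[0.0, 0.5, 0.0, 0.5],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.5, 0.0, 0.5, 0.0]])
direct, indirect = sar_effects(W, rho=0.5, beta=1.0)
```

With a row-stochastic W the total effect is beta / (1 - rho) per unit, here 2.0, split between a direct effect above beta (feedback through neighbors) and a positive spillover to the rest of the network.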
In his speech at the conference "The SNB and its Watchers", Otmar Issing, member of the ECB Governing Council from its start in 1998 until 2006, looks back at more than twenty years of the conference series "The ECB and Its Watchers". In June 1999, Issing established this format together with Axel Weber, then Director of the Center for Financial Studies, to discuss the monetary policy strategy of the newly founded central bank with a broad circle of participants, that is, academics, bank economists, and members of the media, on "neutral ground". At the annual conference, the ECB and its representatives would play an active role and engage in a lively exchange of views with the other participants. Over the years, Volker Wieland took over as organizer of the conference series, which was also adopted by other central banks. In his contribution at the second conference "The SNB and its Watchers", Issing summarizes the experience gained from over twenty years of the ECB Watchers Conference.
Investors' return expectations are pivotal in stock markets, but the reasoning behind these expectations remains a black box for economists. This paper sheds light on economic agents' mental models -- their subjective understanding -- of the stock market, drawing on surveys with the US general population, US retail investors, US financial professionals, and academic experts. Respondents make return forecasts in scenarios describing stale news about the future earnings streams of companies, and we collect rich data on respondents' reasoning. We document three main results. First, inference from stale news is rare among academic experts but common among households and financial professionals, who believe that stale good news lead to persistently higher expected returns in the future. Second, while experts refer to the notion of market efficiency to explain their forecasts, households and financial professionals reveal a neglect of equilibrium forces. They naively equate higher future earnings with higher future returns, neglecting the offsetting effect of endogenous price adjustments. Third, a series of experimental interventions demonstrate that these naive forecasts do not result from inattention to trading or price responses but reflect a gap in respondents' mental models -- a fundamental unfamiliarity with the concept of equilibrium.
Shallow meritocracy
(2023)
Meritocracies aspire to reward hard work and promise not to judge individuals by the circumstances into which they were born. However, circumstances often shape the choice to work hard. I show that people's merit judgments are "shallow" and insensitive to this effect. They hold others responsible for their choices, even if these choices have been shaped by unequal circumstances. In an experiment, US participants judge how much money workers deserve for the effort they exert. Unequal circumstances disadvantage some workers and discourage them from working hard. Nonetheless, participants reward the effort of disadvantaged and advantaged workers identically, regardless of the circumstances under which choices are made. For some participants, this reflects their fundamental view regarding fair rewards. For others, the neglect results from the uncertain counterfactual. They understand that circumstances shape choices but do not correct for this because the counterfactual—what would have happened under equal circumstances—remains uncertain.
We estimate the causal effect of shared e-scooter services on traffic accidents by exploiting the variation in the availability of e-scooter services induced by the staggered rollout across 93 cities in six countries. Police-reported accidents involving personal injuries in the average month increased by around 8.2% after shared e-scooters were introduced. Effects are large during summer and insignificant during winter. Further heterogeneity analysis reveals the largest estimated effects for cities with limited cycling infrastructure, while no effects are detectable in cities with high bike-lane density. This difference suggests that public policy can play a crucial role in mitigating accidents related to e-scooters and, more generally, to changes in urban mobility.
This paper proposes tests for out-of-sample comparisons of interval forecasts based on parametric conditional quantile models. The tests rank the distance between actual and nominal conditional coverage with respect to the set of conditioning variables from all models, for a given loss function. We propose a pairwise test to compare two models for a single predictive interval. The set-up is then extended to a comparison across multiple models and/or intervals. The limiting distribution varies depending on whether models are strictly non-nested or overlapping. In the latter case, degeneracy may occur. We establish the asymptotic validity of wild bootstrap based critical values across all cases. An empirical application to Growth-at-Risk (GaR) uncovers situations in which a richer set of financial indicators is found to outperform a commonly used benchmark model when predicting downside risk to economic activity.
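The coverage criterion underlying the tests can be illustrated in its simplest unconditional form (a sketch only; the paper's tests measure the distance to conditional coverage given covariates and rely on wild-bootstrap critical values):

```python
import numpy as np

def coverage_distance(y, lower, upper, nominal):
    """Absolute distance between the empirical and nominal coverage of a
    predictive interval [lower, upper]. Smaller is better: a well-calibrated
    interval contains the realization with the nominal frequency."""
    hits = (y >= lower) & (y <= upper)
    return abs(hits.mean() - nominal)

rng = np.random.default_rng(0)
y = rng.normal(size=10_000)  # realizations from a standard normal

# A correctly specified 90% interval for standard normal data...
d_good = coverage_distance(y, -1.645, 1.645, 0.90)
# ...versus a too-narrow interval, which undercovers badly.
d_bad = coverage_distance(y, -1.0, 1.0, 0.90)
```

Ranking competing models by such coverage distances, conditionally on covariates, is the comparison the proposed tests formalize.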
This paper studies the macro-financial implications of using carbon prices to achieve ambitious greenhouse gas (GHG) emission reduction targets. My empirical evidence shows a 0.6% output loss and a 0.3% rise in inflation in response to a 1% carbon policy shock. I also observe financial instability and reallocation effects between the clean and highly polluting energy sectors. To better predict the medium- and long-term impact, I use a medium-large macro-financial DSGE model with environmental features to show the recessionary effect of ambitious carbon pricing aimed at climate targets: a 40% reduction in GHG emissions causes a 0.7% output loss, while reaching a zero-emission economy within 30 years causes a 2.6% output loss. I document an amplifying effect of the banking sector along the transition path. The paper also uncovers the beneficial role of pre-announcing carbon policies, which mitigates inflation volatility by 0.2% at its peak; the results thus favor well-communicated carbon policies and investment to expand the green sector. My findings also stress the role of optimal green monetary and financial policies in mitigating the effects of transition risk and assisting the transition to a zero-emission world. Using a heterogeneous approach with macroprudential tools, I find that optimal macroprudential tools can mitigate the output loss by 0.1% and the investment loss by 1%. Importantly, my work highlights the use of capital flow management in the green transition when a global cooperative solution is challenging.
While the COVID-19 pandemic had a large and asymmetric impact on firms, many countries quickly enacted massive business rescue programs that were specifically targeted at smaller firms. Little is known about the effects of such policies on business entry and exit, investment, factor reallocation, and macroeconomic outcomes. This paper builds a general equilibrium model with heterogeneous and financially constrained firms in order to evaluate the short- and long-term consequences of small firm rescue programs in a pandemic recession. We calibrate the stationary equilibrium and the pandemic shock to the U.S. economy, taking the actual Paycheck Protection Program (PPP) as a specific policy. We find that the policy has only a modest impact on aggregate output and employment because (i) jobs are saved predominantly in the smallest firms, which account for a minor share of employment, and (ii) the grant reduces the reallocation of resources towards larger and less impacted firms. Much of the reallocation occurs in the aftermath of the pandemic episode. By preventing inefficient liquidations, the policy dampens the long-term declines of aggregate consumption and of the real wage, thus delivering small welfare gains.
Goal setting is vital in learning sciences, but the scientific evaluation of optimal learning goals is underexplored. This study proposes a novel methodological approach to determine optimal learning goals. The data in this study comes from a gamified learning app implemented in an undergraduate accounting course at a large German university. With a combination of decision trees and regression analyses, the goals connected to the badges implemented in the app are evaluated. The results show that the initial badge set already motivated learning strategies that led to better grades on the exam. However, the results indicate that the levels of the goals could be improved, and additional badges could be implemented. In addition to new goal levels, new goal types are also discussed. The findings show that learning goals initially determined by the instructors need to be evaluated to offer an optimal motivational effect. The new methodological approach used in this study can be easily transferred to other learning data sets to provide further insights.
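The combination of decision trees and regression described in this abstract can be sketched in miniature. The variables and data below are hypothetical, not the study's actual app data; the point is how a one-level decision tree (a "stump") recovers a candidate goal level, here a threshold on learning activity that best separates exam outcomes.

```python
import numpy as np

def best_split(x, y):
    """One-level decision tree: find the threshold on x that minimizes
    the total within-group variance of y across the two resulting groups."""
    order = np.argsort(x)
    x_sorted, y_sorted = x[order], y[order]
    best_t, best_score = None, np.inf
    for i in range(1, len(x_sorted)):
        left, right = y_sorted[:i], y_sorted[i:]
        score = left.var() * len(left) + right.var() * len(right)
        if score < best_score:
            best_score = score
            best_t = (x_sorted[i - 1] + x_sorted[i]) / 2
    return best_t

# Hypothetical data: weekly quiz attempts in a learning app vs. exam grade
# (German scale, lower is better). Students above ~10 attempts do better.
rng = np.random.default_rng(0)
attempts = rng.integers(0, 20, size=200).astype(float)
grade = np.where(attempts >= 10, 2.0, 3.0) + rng.normal(0, 0.2, 200)

# The recovered threshold is a data-driven candidate for a badge goal level.
threshold = best_split(attempts, grade)
```

A regression of grades on an indicator for exceeding the recovered threshold would then quantify the associated performance difference, mirroring the two-step decision-tree-plus-regression logic the abstract describes.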
Do required minimum distribution 401(k) rules matter, and for whom? Insights from a lifecycle model
(2023)
Tax-qualified vehicles have helped U.S. private-sector workers accumulate $33 trillion in retirement plans. An important but often-overlooked institutional feature shaping decumulations from these plans is the “Required Minimum Distribution” (RMD) regulation, which requires retirees to withdraw a minimum fraction from their retirement accounts or pay excise taxes on withdrawal shortfalls. Our calibrated lifecycle model measures the impact of RMD rules on heterogeneous households’ financial behavior during their work lives and in retirement. The model shows that reforms delaying or eliminating the RMD rules have little effect on consumption profiles, but they would influence withdrawals and tax payments for households with bequest motives.
Measuring and reducing energy consumption constitutes a crucial concern in public policies aimed at mitigating global warming. The real estate sector faces the challenge of enhancing building efficiency, where insights from experts play a pivotal role in the evaluation process. This research employs a machine learning approach to analyze expert opinions, seeking to extract the key determinants influencing potential residential building efficiency and establishing an efficient prediction framework. The study leverages open Energy Performance Certificate databases from two countries with distinct latitudes, namely the UK and Italy, to investigate whether enhancing energy efficiency necessitates different intervention approaches. The findings reveal the existence of non-linear relationships between efficiency and building characteristics, which cannot be captured by conventional linear modeling frameworks. By offering insights into the determinants of residential building efficiency, this study provides guidance to policymakers and stakeholders in formulating effective and sustainable strategies for energy efficiency improvement.
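The non-linear relationships this abstract highlights can be illustrated with a minimal sketch. The data-generating process below is an invented assumption (efficiency falling with building age, then partially recovering for retrofitted old stock); it simply shows how a flexible model detects structure a linear specification misses, which is the abstract's methodological point.

```python
import numpy as np

# Hypothetical non-linear relationship between building age and an
# energy-efficiency score, plus noise.
rng = np.random.default_rng(1)
age = rng.uniform(0, 100, 500)
efficiency = 80 - 0.9 * age + 0.006 * age**2 + rng.normal(0, 2, 500)

def r_squared(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

# Linear fit vs. a more flexible (quadratic) fit.
linear = np.polyval(np.polyfit(age, efficiency, 1), age)
flexible = np.polyval(np.polyfit(age, efficiency, 2), age)

r2_linear = r_squared(efficiency, linear)
r2_flexible = r_squared(efficiency, flexible)
```

The gap between the two fits is the diagnostic: when a flexible learner clearly outperforms the linear benchmark, the efficiency-characteristics relationship is non-linear and a conventional linear framework will misstate the determinants.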
The forward guidance trap
(2023)
This paper examines the policy experience of the Fed, ECB and BOJ during and after the Covid-19 pandemic and draws lessons for monetary policy strategy and its communication. All three central banks provided appropriate accommodation during the pandemic, but two failed to unwind this accommodation in a timely manner. The Fed and ECB guided real interest rates to inappropriately negative levels as the economy recovered from the pandemic, fueling high inflation. The policy error can be traced to decisions regarding forward guidance on policy rates that delayed lift-off while the two central banks continued to expand their balance sheets. The Fed and the ECB fell into the forward guidance trap. This could have been avoided if policy had been guided by a forward-looking rule that properly adjusted the nominal interest rate with the evolution of the inflation outlook.
A safe core mandate
(2023)
Central banks have vastly expanded their footprint on capital markets. At a time of extraordinary pressure from many sides, a simple benchmark for the scale and scope of their core mandate of price and financial stability may be useful.
We make a case for a narrow mandate to maintain and safeguard the border between safe and quasi-safe assets. This ex-ante definition minimizes ambiguity, discourages risk creation, and limits panic runs, primarily by separating market demand for reliable liquidity from risk-intolerant, price-insensitive demand for a safe store of value. The central bank may be occasionally forced to intervene beyond the safe core but should not be bound by any such ex-ante mandate, unless directed to specific goals set by legislation with explicit fiscal support.
We review distinct features of liquidity and safety demand, seeking a definition of the safety border, and discuss LOLR support for borderline safe assets such as MMF or uninsured deposits.
A safe core formulation is close to the historical focus on regulated entities, collateralized lending and attention to the public debt market, but its specific framing offers some context on controversial issues such as the extent of LOLR responsibilities. It also justifies a persistently large scale for central bank liabilities (Greenwood, Hanson and Stein 2016), as safety demand is related to financial wealth rather than GDP. Finally, it is consistent with an active central bank role in supporting liquidity in government debt market trading and clearing (Duffie 2020, 2021).
A key solution for public good provision is the voluntary formation of institutions that commit players to cooperate. Such institutions generate inequality if some players decide not to participate but cannot be excluded from the benefits of cooperation. Prior research with small groups emphasizes the role of fairness concerns, with positive effects on cooperation. We show that these effects do not generalize to larger groups: as group size increases, groups become less willing to form institutions that generate inequality. In contrast to smaller groups, however, this does not increase the number of participating players, thereby limiting the positive impact of institution formation on cooperation.
This Policy Letter presents two event studies based on pre-war data that foreshadow the remarkable way in which the Russian economy was able to withstand the pressure of an unprecedented package of international sanctions. First, it shows that the sudden stop of one of the two domestic producers of zinc in 2018 did not lead to a slowdown in the steel industry, which heavily relied on this input. Second, it demonstrates that a huge increase in the cost of a fuel called mazut in 2020 had virtually no impact on firms that used it, even in regions where it was hard to substitute with alternative fuels. This Policy Letter argues that such stability in production can be explained by the fact that the Russian economy is heavily oriented toward commodities. It is much easier to replace a commodity supplier than a supplier of manufactured goods, and many commodity producers operate at high profit margins that allow them to continue operating even after large increases in their costs. Thus, sanctions had a much smaller impact on Russia than they would have had on an economy with a larger manufacturing sector, where inputs are less substitutable and profit margins are smaller.
We study the interplay of capital and liquidity regulation in a general equilibrium setting by focusing on future funding risks. The model consists of a banking sector with long-term illiquid investment opportunities that need to be financed by short-term debt and by issuing equity. Reliance on refinancing long-term investment midway through its lifetime is risky, since the next generation of potential short-term debt holders may not be willing to provide funding when the return prospects on the long-term investment turn out to be bad. For moderate return risk, equilibria with and without bank default coexist, and bank default is a self-fulfilling prophecy. Capital and liquidity regulation can prevent bank default and may implement the first-best. Yet the former is more powerful in ruling out undesirable equilibria and thus dominates liquidity regulation. Adding liquidity regulation to optimal capital regulation is redundant.
In current discussions on large language models (LLMs) such as GPT, understanding their ability to emulate facets of human intelligence stands central. Using behavioral economic paradigms and structural models, we investigate GPT’s cooperativeness in human interactions and assess its rational goal-oriented behavior. We discover that GPT cooperates more than humans and has overly optimistic expectations about human cooperation. Intriguingly, additional analyses reveal that GPT’s behavior is not random; it displays a level of goal-oriented rationality surpassing human counterparts. Our findings suggest that GPT hyper-rationally aims to maximize social welfare, coupled with a drive for self-preservation. Methodologically, our research highlights how structural models, typically employed to decipher human behavior, can illuminate the rationality and goal-orientation of LLMs. This opens a compelling path for future research into the intricate rationality of sophisticated, yet enigmatic artificial agents.
We study the redistributive effects of inflation, combining administrative bank data with an information provision experiment during an episode of historic inflation. On average, households are well-informed about prevailing inflation and are concerned about its impact on their wealth; yet, while many households know about inflation eroding nominal assets, most are unaware of nominal-debt erosion. Once they receive information on the debt-erosion channel, households revise their beliefs about nominal debt and their own real net wealth upwards. These changes in beliefs causally affect actual consumption and hypothetical debt decisions. Our findings suggest that real wealth mediates the sensitivity of consumption to inflation once households are aware of the wealth effects of inflation.
Dynamics of life course family transitions in Germany: exploring patterns, process and relationships
(2023)
This paper explores the dynamics of family life events in Germany using discrete-time event history analysis based on SOEP data. We find that higher educational attainment, higher income, and marriage emerge as salient protective factors mitigating the risk of mortality; better education also reduces the likelihood of first marriage, whereas lower educational attainment, a protracted period, and the presence of children act as protective factors against divorce. Our key finding is that the disparity in mean life expectancy between individuals from low- and high-income brackets amounts to 9 years among males and 6 years among females, illustrating the mortality inequality attributable to income disparities. Our estimates show that, compared to East Germans, West Germans have a lower risk of death, a lower likelihood of first marriage, and a higher risk of divorce and remarriage.
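The discrete-time event history approach named in this abstract can be sketched in its simplest, nonparametric form. The person-period data below are simulated under an assumed constant hazard, not the SOEP data; the sketch shows the core object of the method, the discrete-time hazard at each duration, estimated as the share of people still at risk who experience the event.

```python
import numpy as np

# Simulate hypothetical person-period data: one row per person-year, with
# a 0/1 indicator for whether the event (e.g. first marriage) occurs that
# year. After the event, the person leaves the risk set.
rng = np.random.default_rng(2)
n_people, max_t = 1000, 10
true_hazard = 0.08  # assumed constant per-period event probability

records = []  # (person, period, event)
for person in range(n_people):
    for t in range(1, max_t + 1):
        event = rng.random() < true_hazard
        records.append((person, t, int(event)))
        if event:
            break

periods = np.array([r[1] for r in records])
events = np.array([r[2] for r in records])

# Life-table estimate: hazard at duration t = events at t / people at risk at t.
hazard = np.array([events[periods == t].mean() for t in range(1, max_t + 1)])
```

In applied work such as the paper's, a logistic regression on this same person-period layout replaces the raw life-table rates, which is what allows covariates such as education, income, and region to shift the estimated hazard.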
We present determinacy bounds on monetary policy in the sticky information model. We find that these bounds are more conservative here, where the long-run Phillips curve is vertical, than in the standard Calvo sticky-price New Keynesian model. Specifically, the Taylor principle is now directly necessary: no amount of output targeting can substitute for the monetary authority’s concern for inflation. These determinacy bounds are obtained by appealing to frequency domain techniques that themselves provide novel interpretations of the Phillips curve.