Biodiversity loss poses a significant threat to the global economy and affects ecosystem services on which most large companies rely heavily. The severe financial implications of such a reduced species diversity have attracted the attention of companies and stakeholders, with numerous calls to increase corporate transparency. Using textual analysis, this study thus investigates the current state of voluntary biodiversity reporting of 359 European blue-chip companies and assesses the extent to which it aligns with the upcoming disclosure framework of the Task Force on Nature-related Financial Disclosures (TNFD). The descriptive results suggest a substantial gap between current reporting practices and the proposed TNFD framework, with disclosures largely lacking quantification, details and clear targets. In addition, the disclosures appear to be relatively unstandardized. Companies in sectors or regions exposed to higher nature-related risks as well as larger companies are more likely to report on aspects of biodiversity. This study contributes to the emerging literature on nature-related risks and provides detailed insights on the extent of the reporting gap in light of the upcoming standards.
Unconventional green
(2023)
We analyze the effects of the PEPP (Pandemic Emergency Purchase Programme), the temporary quantitative easing implemented by the ECB immediately after the outbreak of the Covid-19 pandemic. We show that the differences in aim, size and flexibility with respect to the traditional Corporate Sector Purchase Programme (CSPP) were able to significantly involve, in addition to the directly targeted bonds, also the green bond segment. Via a standard difference-in-differences model we estimate that the yield on green bonds declined by more than 20 basis points after the PEPP. To also account for differences attributable to eligibility for the programme, we employ a triple-difference estimator. Bonds that were both green and eligible benefited from an additional premium of 39 basis points.
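The difference-in-differences logic described above can be sketched on simulated data. Everything below (sample size, baseline yields, and the planted −0.20 percentage-point effect) is an illustrative assumption, not the paper's data or estimate:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical bond sample: green vs. conventional, before vs. after PEPP.
# The -0.20 pp effect is planted by construction for illustration only.
n = 500
green = rng.integers(0, 2, n)           # 1 = green bond
post = rng.integers(0, 2, n)            # 1 = after PEPP announcement
effect = -0.20                          # assumed treatment effect, in pp
yield_ = (1.5 - 0.1 * green - 0.3 * post
          + effect * green * post + rng.normal(0, 0.05, n))

def cell(g, p):
    """Mean yield in the (green, post) cell."""
    return yield_[(green == g) & (post == p)].mean()

# DiD: change for green bonds minus change for conventional bonds
did = (cell(1, 1) - cell(1, 0)) - (cell(0, 1) - cell(0, 0))
```

The triple-difference estimator in the abstract extends this by adding a third dummy for programme eligibility and taking one further difference across eligible and ineligible bonds.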
ChatGPT, the chatbot prototype developed by the American company OpenAI, is currently on everyone's lips. One question being asked: does this software pose a challenge for education, and will term papers and theses be written with it in the future? Prof. Uwe Walz, Professor of Economics, in particular Industrial Organization, at Goethe University, has already analyzed the chatbot with students during the current winter semester.
The European low-carbon transition began in the last few decades and is accelerating to achieve net-zero emissions by 2050. This paper examines how the climate-related transition indicators of large European corporate firms relate to their CDS-implied credit risk across various time horizons. Findings show that firms with higher GHG emissions have higher CDS spreads at all tenors, including the 30-year horizon, particularly after the 2015 Paris Agreement, and in prominent industries such as Electricity, Gas, and Mining. The results suggest that the European CDS market is currently pricing, to some extent, albeit small, a firm's exposure to transition risk across different time horizons. However, it fails to account for a company's efforts to manage transition risks and its exposure to the EU Emissions Trading Scheme. CDS market participants seem to find it challenging to risk-differentiate ETS-participating firms from other firms.
Through vocational training, apprentices are expected to acquire, among other things, the competence to solve occupational problems. Final examinations serve to assess competence, yet written commercial examination tasks still inadequately depict problem situations whose solution requires problem-solving competence. Teachers at commercial vocational schools are also involved in creating examination tasks. This thesis investigates how, in the first and second phases of teacher education, they are prepared for creating problem-based tasks for summative-diagnostic purposes. To this end, document analyses are conducted for both phases of teacher education. The results are validated by means of a questionnaire study with degree-programme directors and interviews with subject leaders at the teacher-training seminars. To capture the perceptions of prospective teachers, interviews are conducted with master's students of business education as well as trainee teachers (LiV) at commercial vocational schools.
The preliminary studies make it possible to identify needs for improvement in the training of teachers at commercial vocational schools. On this basis, a training concept is selected with justification. It is evaluated by means of a quasi-experimental study with master's students and trainee teachers. For the qualitative evaluation, interviews are conducted with participants of the intervention group. The results show that the participants perceive the training intervention predominantly positively and that it leads to a learning gain, at least with regard to creating problem-based tasks. Through the needs-based intervention and its evaluation results, a concept is proposed that offers a solution for meeting the identified needs for improvement. The results of this thesis have the potential to contribute to improving teacher education in the long term and thus, among other things, to making assessment tasks more valid.
I have assessed changes in the monetary policy stance in the euro area since its inception by applying a Bayesian time-varying parameter framework in conjunction with the Hamiltonian Monte Carlo algorithm. I find that the estimated policy response has varied considerably over time. Most of the results suggest that the response weakened after the onset of the financial crisis and while quantitative measures were still in place, although there are also indications that the weakening of the response to the expected inflation gap may have been less pronounced. I also find that the policy response has become more forceful over the course of the recent sharp rise in inflation. Furthermore, it is essential to model the stochastic volatility relating to deviations from the policy rule as it materially influences the results.
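A stylized, much-simplified version of a time-varying policy response can be illustrated with a Kalman filter for a Taylor-type rule with random-walk coefficients. The paper instead uses a Bayesian framework with Hamiltonian Monte Carlo; all numerical values below are assumptions for the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated Taylor-type rule i_t = a_t + b_t * pi_t with a drifting
# inflation response b_t; all parameters are illustrative assumptions.
T = 200
b_true = 1.5 + 0.5 * np.sin(np.linspace(0, 3, T))   # drifting response
pi = rng.normal(2.0, 1.0, T)                        # inflation gap
i = 1.0 + b_true * pi + rng.normal(0, 0.1, T)       # policy rate

# State beta_t = (a_t, b_t) follows a random walk: beta_t = beta_{t-1} + eta_t
beta = np.zeros(2)
P = np.eye(2) * 10.0        # diffuse prior variance
Q = np.eye(2) * 1e-3        # state innovation variance (assumed)
R = 0.1 ** 2                # measurement variance (assumed)
path = np.empty((T, 2))
for t in range(T):
    x = np.array([1.0, pi[t]])
    P = P + Q                               # predict step
    K = P @ x / (x @ P @ x + R)             # Kalman gain
    beta = beta + K * (i[t] - x @ beta)     # measurement update
    P = P - np.outer(K, x) @ P
    path[t] = beta

# path[:, 1] is the filtered, time-varying inflation response.
```

After a burn-in, the filtered inflation response tracks the drifting true coefficient, which is the qualitative pattern a time-varying parameter estimator is designed to recover.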
Testing frequency and severity risk under various information regimes and implications in insurance
(2023)
We build on Peter et al. (2017) who examined the benefit of testing frequency risk under various information regimes. We first consider testing only severity risk, and whether the principle of indemnity, i.e. the usual contract term that excludes claims payments above the resulting insured loss, affects the insurance contracts offered and purchased. Under information regimes which are less restrictive (in terms of obtaining and using customer information), it is possible for the insurer to offer different contracts for tested and untested individuals. In the absence of the principle of indemnity, individuals will test their severity risk and a separating equilibrium ensues. With the principle of indemnity, given an actuarially fair pooled contract, individuals will not test for severity under less restrictive information regimes; a pooling equilibrium thus ensues. Under more restrictive information regimes, the insurer offers separating contracts. Individuals will test for severity and purchase appropriate contracts. We also consider testing for both frequency and severity risk. The results here are more varied. The highest gain in efficiency from testing results from one of the more restrictive information regimes. Generally under all information regimes, there is a greater gain in efficiency without the principle of indemnity than with the principle of indemnity.
In the euro area, monetary policy is conducted by a single central bank for 20 member countries. However, countries are heterogeneous in their economic development, including their inflation rates. This paper combines a New Keynesian model and a neural network to assess whether the European Central Bank (ECB) conducted monetary policy between 2002 and 2022 according to the weighted average of the inflation rates within the European Monetary Union (EMU) or reacted more strongly to the inflation rate developments of certain EMU countries.
The New Keynesian model first generates data which is used to train and evaluate several machine learning algorithms. The authors find that a neural network performs best out-of-sample. They use this algorithm to classify historical EMU data and to determine the exact weight placed on the inflation rate of each EMU member in every quarter of the past two decades. Their findings suggest a disproportionate emphasis of the ECB on the inflation rates of EMU members that exhibited high inflation rate volatility for the vast majority of the time frame considered (80%), with a median inflation weight of 67% on these countries. They show that these results stem from a tendency of the ECB to react more strongly to countries whose inflation rates exhibit greater deviations from their long-term trend.
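The train-on-model-generated-data, classify-historical-data idea can be sketched in miniature. The features, the two regimes, and the small network below are illustrative stand-ins, not the authors' specification:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate features under two hypothetical policy regimes and train a
# one-hidden-layer network (plain numpy gradient descent) to classify them.
n = 400
X0 = rng.normal([1.5, 0.5], 0.5, (n, 2))   # regime 0: union-wide weighting
X1 = rng.normal([0.5, 1.5], 0.5, (n, 2))   # regime 1: high-volatility focus
X = np.vstack([X0, X1])
y = np.r_[np.zeros(n), np.ones(n)]

W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, 8);      b2 = 0.0

def forward(X):
    H = np.tanh(X @ W1 + b1)
    return H, 1 / (1 + np.exp(-(H @ W2 + b2)))   # sigmoid output

for _ in range(2000):                      # gradient descent on cross-entropy
    H, p = forward(X)
    g = (p - y) / len(y)                   # gradient w.r.t. output logit
    gH = np.outer(g, W2) * (1 - H ** 2)    # backprop through tanh layer
    W2 -= 0.5 * H.T @ g; b2 -= 0.5 * g.sum()
    W1 -= 0.5 * X.T @ gH; b1 -= 0.5 * gH.sum(0)

accuracy = ((forward(X)[1] > 0.5) == y).mean()
```

In the paper's pipeline, the analogue of the final step is applying the trained classifier to historical EMU data quarter by quarter.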
Industry concentration and markups in the US have been rising over the last three to four decades. However, the causes remain largely unknown. This paper uses machine learning on regulatory documents to construct a novel dataset on compliance costs to examine the effect of regulations on market power. The dataset is comprehensive and consists of all significant regulations at the 6-digit NAICS level from 1970-2018. We find that regulatory costs have increased by $1 trillion during this period. We document that an increase in regulatory costs results in lower (higher) sales, employment, markups, and profitability for small (large) firms. The regulation-driven increase in concentration is associated with a lower elasticity of entry with respect to Tobin's Q and lower productivity and investment after the late 1990s. We estimate that increased regulations can explain 31-37% of the rise in market power. Finally, we uncover the political economy of rulemaking. While large firms are opposed to regulations in general, they push for the passage of regulations that have an adverse impact on small firms.
We find that high macroeconomic uncertainty is associated with greater accumulation of physical capital, despite a reduction in investment and valuations. To reconcile this puzzling evidence, we show that uncertainty predicts lower depreciation and utilization of existing capital, which dominates the investment slowdown. Motivated by these dynamics, we develop a quantitative production-based model in which firms implement precautionary savings through reducing utilization rather than raising investment. Through this novel intensive-margin mechanism, uncertainty shocks command a quarter of the equity premium in general equilibrium, while flexibility in utilization adjustments helps explain uncertainty risk exposures in the cross-section of industry returns.
We study the redistributive effects of inflation combining administrative bank data with an information provision experiment during an episode of historic inflation. On average, households are well-informed about prevailing inflation and are concerned about its impact on their wealth; yet, while many households know about inflation eroding nominal assets, most are unaware of nominal-debt erosion. Once they receive information on the debt-erosion channel, households update upwards their beliefs about nominal debt and their own real net wealth. These changes in beliefs causally affect actual consumption and hypothetical debt decisions. Our findings suggest that real wealth mediates the sensitivity of consumption to inflation once households are aware of the wealth effects of inflation.
We analyze the repercussions of different kinds of uncertainty on cash demand, including uncertainty of the digital infrastructures, confidence crises of the financial system, natural disasters, political uncertainties, and inflationary crises. Based on a comprehensive literature survey and theoretical considerations, complemented by case studies, we derive a classification scheme for how cash holdings typically evolve in each of these types of uncertainty, separating between demand for domestic and international cash as well as between transaction and store-of-value balances. In doing so, we focus on the stabilizing macroeconomic properties of cash and recommend guidelines for cash supply by central banks and the banking system. Finally, we exemplify our analysis with five case studies from the developing world, namely Venezuela, Zimbabwe, Afghanistan, Iraq, and Libya.
It is about advertising, fraud, or the optimization of business models: consumer data are a precious commodity, of interest to lenders and insurers just as much as to retailers and criminals. Kai Rannenberg, Professor of Mobile Business & Multilateral Security at Goethe University, conducts research on cybersecurity. Dirk Frank spoke with the business informatics scholar about data protection, hacker attacks, and the car as a "smartphone on wheels".
Mamma mia! Revealing hidden heterogeneity by PCA-biplot : MPC puzzle for Italy's elderly poor
(2023)
I investigate consumption patterns in Italy and use a PCA-biplot to uncover a consumption puzzle among the elderly poor. Data from the third wave (2017) of the Eurosystem's Household Finance and Consumption Survey (HFCS) indicate that poor old-aged Italian households exhibit lower levels of the marginal propensity to consume (MPC) than suggested by the dominant consumption models. A customized regression analysis shows group differences relative to richer peers that are only half as large as prescribed by a traditional linear regression model. This analysis has benefited from a visualization technique for high-dimensional matrices related to the unsupervised machine learning literature. I demonstrate that PCA-biplots are a useful tool to reveal hidden relations and to help researchers formulate simple research questions. The method is presented in detail and suggestions for incorporating it in the econometric modeling pipeline are given.
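A PCA-biplot places observations (scores) and variables (loadings) in the same two-dimensional display. A minimal construction from the singular value decomposition, on synthetic data rather than the HFCS sample, looks like this:

```python
import numpy as np

rng = np.random.default_rng(3)

# Rows stand in for households, columns for consumption/wealth indicators.
X = rng.normal(size=(200, 5))
X[:, 1] = 0.8 * X[:, 0] + 0.2 * X[:, 1]        # induce some correlation

Xc = X - X.mean(0)                              # center each column
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

scores = U[:, :2] * s[:2]                       # biplot points (households)
loadings = Vt[:2].T * s[:2] / np.sqrt(len(X))   # biplot arrows (variables)
explained = s[:2] ** 2 / (s ** 2).sum()         # variance share per component
```

Plotting `scores` as points and `loadings` as arrows on the same axes yields the biplot; clusters of points near a variable's arrow are the kind of hidden group structure the paper exploits.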
Output gap revisions can be large even after many years. Real-time reliability tests might therefore be sensitive to the choice of the final output gap vintage that the real-time estimates are compared to. This is the case for the Federal Reserve’s output gap. When accounting for revisions in response to the global financial crisis in the final output gap, the improvement in real-time reliability since the mid-1990s is much smaller than found by Edge and Rudd (Review of Economics and Statistics, 2016, 98(4), 785-791). The negative bias of real-time estimates from the 1980s has disappeared, but the size of revisions continues to be as large as the output gap itself.
The authors systematically analyse how the real-time reliability assessment is affected by varying the final output gap vintage. They find that the largest changes are caused by output gap revisions after recessions. Economists revise their models in response to such events, leading to economically important revisions not only for the most recent years but reaching back up to two decades. This might improve the understanding of past business cycle dynamics, but it decreases the reliability of real-time output gaps ex post.
This cumulative dissertation contains four self-contained chapters on stochastic games and learning in intertemporal choice.
Chapter 1 presents an experiment on value learning in a setting where actions have both immediate and delayed consequences. Subjects make a series of choices between abstract options, with values that have to be learned by sampling. Each option is associated with two payoff components: One is revealed immediately after the choice, the other with one round delay. Objectively, both payoff components are equally important, but most subjects systematically underreact to the delayed consequences. The resulting behavior appears impatient or myopic. However, there is no inherent reason to discount: All rewards are paid simultaneously, after the experiment. Elicited beliefs on the value of options are in accordance with choice behavior. These results demonstrate that revealed impatience may arise from frictions in learning, and that discounting does not necessarily reflect deep time preferences. In a treatment variation, subjects first learn passively from the evidence generated by others, before then making a series of own choices. Here, the underweighting of delayed consequences is attenuated, in particular for the earliest own decisions. Active decision making thus seems to play an important role in the emergence of the observed bias.
Chapter 2 introduces and proves existence of Markov quantal response equilibrium (QRE), an application of QRE to finite discounted stochastic games. We then study a specific case, logit Markov QRE, which arises when players react to total discounted payoffs using the logit choice rule with precision parameter λ. We show that the set of logit Markov QRE always contains a smooth path that leads from the unique QRE at λ = 0 to a stationary equilibrium of the game as λ goes to infinity. Following this path makes it possible to solve arbitrary finite discounted stochastic games numerically; an implementation of this algorithm is publicly available as part of the package sgamesolver. We further show that all logit Markov QRE are ε-equilibria, with a bound for ε that is independent of the payoff function of the game and decreases hyperbolically in λ. Finally, we establish a link to reinforcement learning by characterizing logit Markov QRE as the stationary points of a game dynamic that arises when all players follow the well-established reinforcement learning algorithm expected SARSA.
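The logit choice rule at the core of logit Markov QRE can be illustrated in a one-shot 2x2 game rather than a full stochastic game. The payoff matrix and the damped fixed-point scheme below are assumptions for this sketch, not part of sgamesolver:

```python
import numpy as np

# Symmetric 2x2 coordination game (illustrative payoffs).
payoff = np.array([[2.0, 0.0],
                   [0.0, 1.0]])

def logit_response(q, lam):
    """Logit response to opponent mixed strategy q with precision lam."""
    u = payoff @ q                           # expected payoff per action
    e = np.exp(lam * (u - u.max()))          # numerically stable softmax
    return e / e.sum()

def logit_qre(lam, iters=1000):
    """Symmetric logit QRE via damped fixed-point iteration."""
    p = np.full(2, 0.5)
    for _ in range(iters):
        p = 0.5 * p + 0.5 * logit_response(p, lam)
    return p
```

At λ = 0 the response is uniform randomization; as λ grows, the fixed point approaches a Nash equilibrium, mirroring the path from the centroid to a stationary equilibrium described in the chapter.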
Chapter 3 introduces the logarithmic stochastic tracing procedure, a homotopy method to compute stationary equilibria for finite and discounted stochastic games. We build on the linear stochastic tracing procedure (Herings and Peeters 2004), but introduce logarithmic penalty terms as a regularization device, which brings two major improvements. First, the scope of the method is extended: it now has a convergence guarantee for all games of this class, rather than just generic ones. Second, by ensuring a smooth and interior solution path, computational performance is increased significantly. A ready-to-use implementation is publicly available. As demonstrated here, its speed compares quite favorably to other available algorithms, and it can solve games of considerable size in reasonable time. Because the method involves the gradual transformation of a prior into equilibrium strategies, it is possible to search the prior space and uncover potentially multiple equilibria and their respective basins of attraction. This also connects the method to the established theory of equilibrium selection.
Chapter 4 introduces sgamesolver, a python package that uses the homotopy method to compute stationary equilibria of finite discounted stochastic games. A short user guide is complemented with discussion of the homotopy method, the two implemented homotopy functions logit Markov QRE and logarithmic tracing, and the predictor-corrector procedure and its implementation in sgamesolver. Basic and advanced use cases are demonstrated using several example games. Finally, we discuss the topic of symmetries in stochastic games.
A safe core mandate
(2023)
Central banks have vastly expanded their footprint on capital markets. At a time of extraordinary pressure by many sides, a simple benchmark for the scale and scope of their core mandate of price and financial stability may be useful.
We make a case for a narrow mandate to maintain and safeguard the border between safe and quasi-safe assets. This ex-ante definition minimizes ambiguity, discourages risk creation and limits panic runs, primarily by separating market demand for reliable liquidity from risk-intolerant, price-insensitive demand for a safe store of value. The central bank may occasionally be forced to intervene beyond the safe core but should not be bound by any such ex-ante mandate, unless directed to specific goals set by legislation with explicit fiscal support.
We review distinct features of liquidity and safety demand, seeking a definition of the safety border, and discuss LOLR support for borderline safe assets such as MMF or uninsured deposits.
A safe core formulation is close to the historical focus on regulated entities, collateralized lending and attention to the public debt market, but its specific framing offers some context on controversial issues such as the extent of LOLR responsibilities. It also justifies a persistently large scale for central bank liabilities (Greenwood, Hanson and Stein 2016), as safety demand is related to financial wealth rather than GDP. Finally, it is consistent with an active central bank role in supporting liquidity, trading and clearing in government debt markets (Duffie 2020, 2021).
The forward guidance trap
(2023)
This paper examines the policy experience of the Fed, ECB and BOJ during and after the Covid-19 pandemic and draws lessons for monetary policy strategy and its communication. All three central banks provided appropriate accommodation during the pandemic, but two failed to unwind this accommodation in a timely manner. The Fed and ECB guided real interest rates to inappropriately negative levels as the economy recovered from the pandemic, fueling high inflation. The policy error can be traced to decisions regarding forward guidance on policy rates that delayed lift-off while the two central banks continued to expand their balance sheets. The Fed and the ECB fell into the forward guidance trap. This could have been avoided if policy had been guided by a forward-looking rule that properly adjusted the nominal interest rate with the evolution of the inflation outlook.
This paper studies the impact of banks' dividend restrictions on the behavior of their institutional investors. Using an identification strategy that relies on within-investor variation and a difference-in-differences setup, I find that funds permanently decrease their ownership shares in treated banks during the 2020 dividend restrictions in the Eurozone and even exit treated banks' stocks. Data from before the introduction of the ban reveal a positive relationship between fund ownership and banks' dividend yield, again highlighting the importance of dividends for European banks' fund investors. This reaction also has pricing implications, as there is a negative relationship between the cumulative abnormal returns on the dividend restriction announcement day and the percentage of fund owners per bank.
We propose a model with mean-variance foreign investors who exhibit a convex disutility associated with brown bond holdings. The model predicts that bond green premia should be smaller in economies with a more closed financial account and highly volatile exchange rates. This happens because foreign intermediaries invest relatively less in such economies, which lowers the marginal disutility of investing in polluting activities. We find strong empirical evidence in favor of this hypothesis using a global bond market dataset. Exchange rate volatility and financial account openness are thus able to explain the higher financing costs of green projects in emerging markets relative to advanced economies, especially when green bonds are denominated in local currency: a disadvantage that we call the "green sin" of emerging economies.
In recent years, European regulators have debated restricting the time an online tracker can track a user to protect consumer privacy better. Despite the significance of these debates, there has been a noticeable absence of any comprehensive cost-benefit analysis. This article fills this gap on the cost side by suggesting an approach to estimate the economic consequences of lifetime restrictions on cookies for publishers. The empirical study on cookies of 54,127 users who received ∼128 million ad impressions over ∼2.5 years yields an average cookie lifetime of 279 days, with an average value of €2.52 per cookie. Only ∼13 % of all cookies increase their daily value over time, but their average value is about four times larger than the average value of all cookies. Restricting cookies’ lifetime to one year (two years) could potentially decrease their lifetime value by ∼25 % (∼19 %), which represents a potential decrease in the value of all cookies of ∼9 % (∼5%). Most cookies, however, would not be affected by lifetime restrictions of 12 or 24 months as 72 % (85 %) of the users delete their cookies within 12 (24) months. In light of the €10.60 billion cookie-based display ad revenue in Europe, such restrictions would endanger €904 million (€576 million) annually, equivalent to €2.08 (€1.33) per EU internet user. The article discusses these results' marketing strategy challenges and opportunities for advertisers and publishers.
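The headline figures in the abstract can be cross-checked with simple arithmetic. The EU internet-user count below is implied by the stated per-user values, not reported directly in the abstract:

```python
# Back-of-the-envelope consistency check of the abstract's figures.
revenue_eu = 10.60e9          # cookie-based display ad revenue in Europe (EUR)
at_risk_12m = 904e6           # endangered revenue, 12-month lifetime cap (EUR)
at_risk_24m = 576e6           # endangered revenue, 24-month lifetime cap (EUR)

share_12m = at_risk_12m / revenue_eu    # share of revenue at risk, 12 months
share_24m = at_risk_24m / revenue_eu    # share of revenue at risk, 24 months
users_implied = at_risk_12m / 2.08      # implied EU internet users (EUR 2.08 each)
```

The shares come out near the ~9% and ~5% figures cited for the decrease in total cookie value, and the implied user base is roughly 435 million.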
We present determinacy bounds on monetary policy in the sticky information model. We find that these bounds are more conservative here when the long run Phillips curve is vertical than in the standard Calvo sticky price New Keynesian model. Specifically, the Taylor principle is now necessary directly - no amount of output targeting can substitute for the monetary authority’s concern for inflation. These determinacy bounds are obtained by appealing to frequency domain techniques that themselves provide novel interpretations of the Phillips curve.
This paper develops and implements a backward and forward error analysis of, and condition numbers for, the numerical stability of the solutions of linear dynamic stochastic general equilibrium (DSGE) models. Comparing seven different solution methods from the literature, I demonstrate an economically significant loss of accuracy specifically in standard, generalized Schur (or QZ) decomposition based solution methods, resulting from large backward errors in solving the associated matrix quadratic problem. This is illustrated in the monetary macro model of Smets and Wouters (2007) and two production-based asset pricing models: a simple model of external habits with a readily available symbolic solution, and the model of Jermann (1998) that lacks such a symbolic solution. QZ-based numerical solutions miss the equity premium by up to several annualized percentage points for parameterizations that either match the chosen calibration targets or are near the parameterizations in the literature. While the numerical solution methods from the literature fail to give any indication of these potential errors, easily implementable backward-error metrics and condition numbers are shown to successfully warn of such potential inaccuracies. The analysis is then performed for a database of roughly 100 DSGE models from the literature and a large set of draws from the model of Smets and Wouters (2007). While economically relevant errors do not appear pervasive in these latter applications, accuracies that differ by several orders of magnitude persist.
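A normalized residual of the matrix quadratic is one simple backward-error proxy of the kind the paper advocates. The matrices below are random stand-ins for a model's structural matrices, and this is a sketch rather than the paper's exact metric:

```python
import numpy as np

rng = np.random.default_rng(4)

# Candidate solution P of the matrix quadratic A P^2 + B P + C = 0.
n = 4
A = rng.normal(size=(n, n))
B = rng.normal(size=(n, n)) + 5 * np.eye(n)    # keep the problem well-posed
P_true = 0.1 * rng.normal(size=(n, n))
C = -(A @ P_true @ P_true + B @ P_true)        # constructed so P_true solves it

def backward_error(P):
    """Residual of the quadratic, normalized by the data norms."""
    num = np.linalg.norm(A @ P @ P + B @ P + C)
    den = (np.linalg.norm(A) * np.linalg.norm(P) ** 2
           + np.linalg.norm(B) * np.linalg.norm(P) + np.linalg.norm(C))
    return num / den

err_exact = backward_error(P_true)              # near machine precision
err_perturbed = backward_error(P_true + 1e-4)   # visibly larger
```

An accurate solution produces a backward error at roundoff level, while a perturbed one does not, which is exactly the kind of warning signal the paper's metrics provide for QZ-based DSGE solutions.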
This paper presents and compares Bernoulli iterative approaches for solving linear DSGE models. The methods are compared using nearly 100 different models from the Macroeconomic Model Data Base (MMB) and different parameterizations of the monetary policy rule in the medium-scale New Keynesian model of Smets and Wouters (2007). I find that Bernoulli methods compare favorably to the QZ approach in solving DSGE models, providing similar accuracy as measured by the forward error of the solution at a comparable computational burden. The methods can guarantee convergence to a particular, e.g., unique stable, solution and can be combined with other iterative methods, such as the Newton method, lending themselves especially to refining solutions.
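A Bernoulli-type iteration for the matrix quadratic A X² + B X + C = 0 arising in linear DSGE solutions can be sketched as the fixed-point recursion X ← −(B + A X)⁻¹ C. The matrices below are illustrative stand-ins, and convergence here relies on the assumed dominance of B:

```python
import numpy as np

rng = np.random.default_rng(5)

# Construct a matrix quadratic with a known small-norm solution X_true.
n = 4
A = rng.normal(size=(n, n))
B = rng.normal(size=(n, n)) + 5 * np.eye(n)    # dominant linear term (assumed)
X_true = 0.1 * rng.normal(size=(n, n))
C = -(A @ X_true @ X_true + B @ X_true)

# Bernoulli-type fixed-point iteration from X = 0.
X = np.zeros((n, n))
for _ in range(100):
    X = -np.linalg.solve(B + A @ X, C)

residual = np.linalg.norm(A @ X @ X + B @ X + C)
```

Starting from zero, the recursion converges to the solution with small spectral radius, which in the DSGE context corresponds to picking out the stable solution.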
Using a field study at a German brokerage, we investigate advised individual investors’ behavior and outcomes after self-selecting into a flat-fee scheme (percentage of portfolio value) for mutual funds. In a difference-in-differences setting, we compare 699 switchers to propensity-score-matched advisory clients who remained in the commission-based scheme. Switchers increase their portfolio values, improve portfolio diversification, and increase their portfolio performance. They also demand more financial advice and follow more advisor recommendations. We argue that switchers attribute a higher quality to the unchanged advisory services.
Can consumption-based mechanisms generate positive and time-varying real term premia as we see in the data? I show that only models with time-varying risk aversion or models with high consumption risk can independently produce these patterns. The latter explanation has not been analysed before with respect to real term premia, and it relies on a small group of investors exposed to high consumption risk. Additionally, it can give rise to a “consumption-based arbitrageur” story of term premia. In relation to preferences, I consider models with both time-separable and recursive utility functions. Specifically for recursive utility, I introduce a novel perturbation solution method in terms of the intertemporal elasticity of substitution. This approach has not been used before in such models, it is easy to implement, and it allows a wide range of values for the parameter of intertemporal elasticity of substitution.
Who should hold bail-inable debt and how can regulators police holding restrictions effectively?
(2023)
This paper analyses the demand-side prerequisites for the efficient application of the bail-in tool in bank resolution, scrutinises whether the European bank crisis management and deposit insurance (CMDI) framework is apt to establish them, and proposes amendments to remedy identified shortcomings.
The first applications of the new European CMDI framework, particularly in Italy, have shown that a bail-in of debt holders is especially problematic if they are households or other types of retail investors. Such debt holders may be unable to bear losses, and the social implications of bailing them in may create incentives for decision makers to refrain from involving them in bank resolution. In turn, however, if investors can expect resolution authorities (RAs) to behave inconsistently over time and bail out bank capital and debt holders despite earlier vows to involve them in bank rescues, the pricing and monitoring incentives that the crisis management framework seeks to invigorate would vanish. As a result, market discipline would be suboptimal and moral hazard would persist. Therefore, the policy objectives of the CMDI framework will only be achieved if critical bail-in capital is not held by retail investors without sufficient loss-bearing capacity. Currently, neither the CMDI framework nor capital market regulation suffices to ensure that this precondition is met. Therefore, some amendments are necessary. In particular, debt instruments that are most likely to absorb losses in resolution should have a high minimum denomination and banks should not be allowed to self-place such securities.
Dynamics of life course family transitions in Germany: exploring patterns, process and relationships
(2023)
This paper explores the dynamics of family life events in Germany using discrete-time event history analysis based on SOEP data. We find that higher educational attainment, a better income level, and marriage emerge as salient protective factors mitigating the risk of mortality; better education also reduces the likelihood of first marriage, whereas lower educational attainment, a protracted period, and the presence of children act as protective factors against divorce. Our key finding shows that the disparity in mean life expectancy between individuals from low- and high-income brackets is 9 years among males and 6 years among females, illustrating the mortality inequality attributable to income disparities. Our estimates show that, compared to East Germans, West Germans have a lower risk of death and a lower likelihood of first marriage, but a higher risk of divorce and remarriage.
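Discrete-time event history analysis operates on a person-period file: one row per individual per year at risk, with a binary indicator for whether the event occurred that year. A minimal expansion of two hypothetical records (not SOEP data) looks like this:

```python
# Two hypothetical individuals: one experiences the event (e.g. first
# marriage) in year 3, the other is censored after year 2.
people = [
    {"id": 1, "years_at_risk": 3, "event": True},
    {"id": 2, "years_at_risk": 2, "event": False},
]

rows = []
for p in people:
    for year in range(1, p["years_at_risk"] + 1):
        # Event indicator is 1 only in the final year, and only if it occurred.
        occurred = p["event"] and year == p["years_at_risk"]
        rows.append((p["id"], year, int(occurred)))
```

The resulting person-period rows, augmented with covariates such as education and income, feed a logistic regression of the event indicator on those covariates, which is what discrete-time event history analysis amounts to in practice.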
Armstrong et al. (2022) review the empirical methods used in the accounting literature to draw causal inferences. They document a growing number of studies using quasi-experimental methods and provide a critical perspective on this trend as well as the use of these methods in the accounting literature. In this discussion, I complement their review by broadening the perspective. I argue for a design-based approach to accounting research that shifts attention from methods to the entire research design. I also discuss why studies that aim to draw causal inferences are important, how these studies fit into the scientific process, and why assessing the strength of the research design is important when evaluating studies and aggregating research findings.
This paper studies the macro-financial implications of using carbon prices to achieve ambitious greenhouse gas (GHG) emission reduction targets. My empirical evidence shows a 0.6% output loss and a 0.3% rise in inflation in response to a 1% carbon policy shock. I also observe financial instability and reallocation effects between the clean and highly polluting energy sectors. To better predict the medium- and long-term impact, I use a medium-large macro-financial DSGE model with environmental aspects and show the recessionary effect of ambitious carbon pricing to achieve climate targets: a 40% reduction in GHG emissions causes a 0.7% output loss, while reaching a zero-emission economy within 30 years causes a 2.6% output loss. I document an amplifying effect of the banking sector along the transition path. The paper also uncovers the beneficial role of pre-announcing carbon policies, which mitigates inflation volatility by 0.2% at its peak; my results thus favor well-communicated carbon policies and investment to expand the green sector. My findings also stress the use of optimal green monetary and financial policies in mitigating the effects of transition risk and assisting the transition to a zero-emission world. Using a heterogeneous approach with macroprudential tools, I find that optimal macroprudential tools can mitigate the output loss by 0.1% and the investment loss by 1%. Importantly, my work highlights the use of capital flow management in the green transition when a global cooperative solution is challenging.
Christine Laudenbach and Vincent Lindner: To promote financial education among children, young people, and adults in the long term, comprehensive information services must reach the entire population in Germany with the help of cooperation partners. Talking about finances must no longer be a taboo subject.
This literature survey explores the potential avenues for the design of a green auto asset-backed security by focusing on the European auto securitization market. In this context, we examine the entire value chain of the securitization process to understand the incentives and interests involved at various stages of the transaction. We review recent regulatory developments, feasibility concerns, and potential designs of a sustainable securitization framework. Our study suggests that a Green Auto ABS should be based on both a green use of proceeds and a green collateral-based methodology.
This paper investigates stock market reaction to greenwashing by analyzing a new channel whereby companies change their names to green-related ones (i.e., names that evoke green and sustainable sentiments) to persuade the public that their activities are green. The findings reveal a striking positive stock price reaction to the announcement of corporate name changes to green-related names only for companies not involved in green activities at the time of the announcement. However, over an extended period of time, companies unrelated to green activities experience substantial negative abnormal returns if they fail to align their operational focus with the new name after the change.
Goal setting is vital in learning sciences, but the scientific evaluation of optimal learning goals is underexplored. This study proposes a novel methodological approach to determine optimal learning goals. The data in this study comes from a gamified learning app implemented in an undergraduate accounting course at a large German university. With a combination of decision trees and regression analyses, the goals connected to the badges implemented in the app are evaluated. The results show that the initial badge set already motivated learning strategies that led to better grades on the exam. However, the results indicate that the levels of the goals could be improved, and additional badges could be implemented. In addition to new goal levels, new goal types are also discussed. The findings show that learning goals initially determined by the instructors need to be evaluated to offer an optimal motivational effect. The new methodological approach used in this study can be easily transferred to other learning data sets to provide further insights.
This article introduces the social-psychological phenomenon of groupthink. Its characteristics and counterstrategies are illustrated, using witness testimony before the Wirecard investigative committee, with the supervisory board as an example. Normative implications de lege ferenda follow. They concern independent members (including on the employee bench), direct information rights within the company (including whistleblowers), and investor dialogue (including with short sellers).
The explanation of intelligence has fascinated humans for millennia, as it seems to manifest human singularity vis-à-vis nature and animals. At the same time, not only philosophical schools of thought but also mathematics, neuroscience, and computer science emphasize the dependence of human intelligence on mechanistic processes. Whether this implies a kinship between the two forms of information processing or, conversely, fundamental differences has been the subject of scientific controversy for nearly a hundred years. What is certain, however, is that machines can surpass human performance in speed and precision, at least in some domains. Pursuing this idea, the question arises whether certain decisions are better made, or at least supported, by machines. Besides physicians, lawyers, and stock traders, this also concerns the management decisions of corporate leaders.
Against this background, the following provides an overview of forms of artificial intelligence (AI). The article then focuses on the role of AI in the context of management board decisions. This includes the general duties of care that apply when deciding on the use of AI in the company. Where the support of board decisions themselves is concerned, additional questions arise regarding the cooperation of human and machine, the delegation of the core of management decisions, and liability for AI.
Central clearing counterparties (CCPs) were established to mitigate default losses resulting from counterparty risk in derivatives markets. In a parsimonious model, we show that clearing benefits are distributed unevenly across market participants. Loss sharing rules determine who wins or loses from clearing. Current rules disproportionately benefit market participants with flat portfolios. Instead, those with directional portfolios are relatively worse off, consistent with their reluctance to voluntarily use central clearing. Alternative loss sharing rules can address cross-sectional disparities in clearing benefits. However, we show that CCPs may favor current rules to maximize fee income, with externalities on clearing participation.
Life insurance convexity
(2023)
Life insurers sell savings contracts with surrender options, which allow policyholders to prematurely receive guaranteed surrender values. These surrender options move toward the money when interest rates rise. Hence, higher interest rates raise surrender rates, as we document empirically by exploiting plausibly exogenous variation in monetary policy. Using a calibrated model, we then estimate that surrender options would force insurers to sell up to 2% of their investments during an enduring interest rate rise of 25 bps per year. We show that these fire sales are fueled by surrender value guarantees and insurers’ long-term investments.
This paper documents that the bond investments of insurance companies transmit shocks from insurance markets to the real economy. Liquidity windfalls from household insurance purchases increase insurers' demand for corporate bonds. Exploiting the fact that insurers persistently invest in a small subset of firms for identification, I show that these increases in bond demand raise bond prices and lower firms' funding costs. In response, firms issue more bonds, especially when their bond underwriters are well connected with investors. Firms use the proceeds to raise investment rather than equity payouts. The results emphasize the significant impact of investor demand on firms' financing and investment activities.
The discount control mechanisms that closed-end funds often choose to adopt before IPO are supposedly implemented to narrow the difference between share price and net asset value. We find evidence that non-discretionary discount control mechanisms such as mandatory continuation votes serve as costly signals of information to reveal higher fund quality to investors. Rents of the skill signaled through the announcement of such policies accrue to managers rather than investors, as differences in skill are revealed through growing assets under management rather than risk-adjusted performance.
We examine whether the uncertainty related to environmental, social, and governance (ESG) regulation developments is reflected in asset prices. We proxy the sensitivity of firms to ESG regulation uncertainty by the disparity across the components of their ESG ratings. Firms with high ESG disparity have a higher option-implied cost of protection against downside tail risk. The impact of the misalignment across the different dimensions of the ESG score is distinct from that of ESG score level itself. Aggregate downside risk bears a negative price for firms with low ESG disparity.