Does political conflict with another country influence domestic consumers' daily consumption choices? We exploit the volatile US-China relations in 2018 and 2019 to analyze whether US consumers reduce their visits to Chinese restaurants when bilateral relations deteriorate. We measure the degree of political conflict through negativity in media reports and rely on smartphone location data to measure daily visits to over 190,000 US restaurants. A deterioration in US-China relations induces a significant decline in visits not only to Chinese but also to other foreign ethnic restaurants, while visits to typical American restaurants increase. We identify consumers' age, race, and cultural openness as moderators of the strength of this ethnocentric effect.
External linkages allow nascent ventures to access crucial resources during the process of new product development. Forming external linkages can substantially contribute to a venture’s performance. However, little is known about the paths of external linkage formation, as well as the circumstances that drive the choice to pursue one rather than another path. This gap deserves further investigation because we do not know whether insights developed for incumbent firms also apply to nascent ventures. To address this gap, we explore a novel dataset of 370 venture creation processes. Using sequence analyses based on optimal matching techniques and cluster analyses, we reveal that nascent ventures pursue one of four distinct paths of linkage formation activities during new product development. Contrary to the findings of the strategy literature, we find that if nascent ventures engage in external linkages at all, they do not combine exploration- and exploitation-oriented linkages but form either exploration- or exploitation-oriented linkages. Additional regression analyses highlight the circumstances that lead nascent ventures to pursue one rather than another pathway. Taken together, our analyses indicate that resource scarcity constitutes an important factor shaping the linkage formation activities of nascent ventures. Accordingly, we show that nascent ventures tend not to optimize by adding complementary knowledge to the firm’s knowledge base but rather to extend the existing knowledge base, a strategy which we call bricolage.
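A minimal sketch of the two methodological building blocks named in the abstract, sequence analysis via optimal matching and subsequent clustering, using invented activity codes, costs, and sequences; the paper's own coding scheme and cost specification are not reproduced here.

```python
# Illustrative only: optimal-matching (edit) distances between hypothetical sequences
# of linkage-formation activities, followed by hierarchical clustering of the ventures.
import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, fcluster

def om_distance(a, b, indel=1.0, sub=2.0):
    """Optimal-matching distance with constant insertion/deletion and substitution costs."""
    n, m = len(a), len(b)
    d = np.zeros((n + 1, m + 1))
    d[:, 0] = np.arange(n + 1) * indel
    d[0, :] = np.arange(m + 1) * indel
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0.0 if a[i - 1] == b[j - 1] else sub
            d[i, j] = min(d[i - 1, j] + indel,      # deletion
                          d[i, j - 1] + indel,      # insertion
                          d[i - 1, j - 1] + cost)   # substitution / match
    return d[n, m]

# Hypothetical monthly states: N = no linkage, R = exploration-, T = exploitation-oriented.
sequences = ["NNRRRT", "NRRRRR", "NNTTTT", "NNNNNT", "RRRTTT"]
dist = np.array([[om_distance(s, t) for t in sequences] for s in sequences])
clusters = fcluster(linkage(squareform(dist, checks=False), method="average"),
                    t=2, criterion="maxclust")
print(clusters)  # cluster membership of each venture's activity sequence
```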
Nations are imposing unprecedented measures at a large scale to contain the spread of the COVID-19 pandemic. While recent studies show that non-pharmaceutical intervention measures such as lockdowns may have mitigated the spread of COVID-19, those measures also lead to substantial economic and social costs, and might limit exposure to ultraviolet-B radiation (UVB). Emerging observational evidence indicates a protective role of UVB and vitamin D in reducing the severity and mortality of COVID-19. This observational study empirically outlines the protective roles of lockdown and of UVB exposure as measured by the ultraviolet index (UVI). Specifically, we examine whether the severity of lockdown is associated with a reduction in the protective role of UVB exposure. We use a log-linear fixed-effects model on a panel dataset of secondary data for 155 countries from 22 January 2020 until 7 October 2020 (n = 29,327). We use the cumulative number of COVID-19 deaths as the dependent variable and isolate the mitigating influence of lockdown severity on the association between UVI and the growth rate of COVID-19 deaths from time-constant and time-varying country-specific potentially confounding factors. After controlling for these factors, we find that a unit increase in UVI and in lockdown severity is independently associated with a 0.85 percentage point (p.p.) and a 4.7 p.p. decline, respectively, in the growth rate of COVID-19 deaths, indicating their respective protective roles. The change of UVI over time is typically large (for example, UVI in New York City increases on average by up to 6 units between January and June), indicating that the protective role of UVI might be substantial. However, the widely utilized and least severe lockdown (a governmental recommendation not to leave the house) is associated with a mitigation of the protective role of UVI by 81% (0.76 p.p.), which indicates a downside risk associated with its widespread use. We find that lockdown severity and UVI are independently associated with a slowdown in the daily growth rate of cumulative COVID-19 deaths. However, we find evidence that an increase in lockdown severity is associated with a significant mitigation of the protective role of UVI in reducing COVID-19 deaths. Our results suggest that lockdowns in conjunction with adequate exposure to UVB radiation might have reduced the number of COVID-19 deaths even more strongly than lockdowns alone. For example, we estimate that there would have been 11% fewer deaths on average with sufficient UVB exposure during the period in which people were recommended not to leave their house. Therefore, our study outlines the importance of considering UVB exposure, especially while implementing lockdowns, and could inspire further clinical studies that may support policy decision-making in countries imposing such measures.
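A minimal sketch, not the authors' exact specification, of a fixed-effects panel regression with an interaction between UV index and lockdown severity; the column names (deaths_growth, uvi, lockdown_severity, country, date) and the input file are assumptions for illustration.

```python
# Sketch of a log-linear fixed-effects model for a country-day panel.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("covid_panel.csv")  # hypothetical panel: one row per country and day

model = smf.ols(
    "deaths_growth ~ uvi * lockdown_severity + C(country) + C(date)",  # country and time fixed effects
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["country"]})          # cluster SEs by country

# Coefficients of interest: main effects of UVI and lockdown severity, plus their
# interaction, which captures how lockdown severity mitigates the UVI association.
print(model.params[["uvi", "lockdown_severity", "uvi:lockdown_severity"]])
```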
The hierarchical feature regression (HFR) is a novel graph-based regularized regression estimator, which mobilizes insights from the domains of machine learning and graph theory to estimate robust parameters for a linear regression. The estimator constructs a supervised feature graph that decomposes parameters along its edges, adjusting first for common variation and successively incorporating idiosyncratic patterns into the fitting process. The graph structure has the effect of shrinking parameters towards group targets, where the extent of shrinkage is governed by a hyperparameter, and group compositions as well as shrinkage targets are determined endogenously. The method offers rich resources for the visual exploration of the latent effect structure in the data, and demonstrates good predictive accuracy and versatility when compared to a panel of commonly used regularization techniques across a range of empirical and simulated regression tasks.
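A toy illustration, under stated assumptions, of the core idea of shrinking coefficients towards group targets. This is not the HFR estimator itself: in the HFR, group compositions and shrinkage targets are determined endogenously via the supervised feature graph, whereas here the groups are hard-coded and the data are simulated purely to show the shrinkage mechanics.

```python
# Ridge-style shrinkage of regression coefficients towards their group means.
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 6
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, 1.2, 0.9, -0.5, -0.4, -0.6])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

groups = [np.array([0, 1, 2]), np.array([3, 4, 5])]   # assumed (fixed) feature groups
M = np.zeros((p, p))                                   # projection onto group-wise means
for g in groups:
    M[np.ix_(g, g)] = 1.0 / len(g)

lam = 10.0  # shrinkage hyperparameter governing the extent of shrinkage
# The penalty lam * b'(I - M)b equals lam * sum_j (b_j - group mean of b_j)^2,
# so the closed-form solution shrinks each coefficient towards its group target.
beta_hat = np.linalg.solve(X.T @ X + lam * (np.eye(p) - M), X.T @ y)
print(np.round(beta_hat, 3))
```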
This cumulative dissertation contains four self-contained chapters on stochastic games and learning in intertemporal choice.
Chapter 1 presents an experiment on value learning in a setting where actions have both immediate and delayed consequences. Subjects make a series of choices between abstract options whose values have to be learned by sampling. Each option is associated with two payoff components: one is revealed immediately after the choice, the other with a delay of one round. Objectively, both payoff components are equally important, but most subjects systematically underreact to the delayed consequences. The resulting behavior appears impatient or myopic. However, there is no inherent reason to discount: all rewards are paid simultaneously, after the experiment. Elicited beliefs about the value of options are in accordance with choice behavior. These results demonstrate that revealed impatience may arise from frictions in learning, and that discounting does not necessarily reflect deep time preferences. In a treatment variation, subjects first learn passively from the evidence generated by others before making a series of their own choices. Here, the underweighting of delayed consequences is attenuated, in particular for the earliest own decisions. Active decision making thus seems to play an important role in the emergence of the observed bias.
Chapter 2 introduces and proves existence of Markov quantal response equilibrium (QRE), an application of QRE to finite discounted stochastic games. We then study a specific case, logit Markov QRE, which arises when players react to total discounted payoffs using the logit choice rule with precision parameter λ. We show that the set of logit Markov QRE always contains a smooth path that leads from the unique QRE at λ = 0 to a stationary equilibrium of the game as λ goes to infinity. Following this path allows arbitrary finite discounted stochastic games to be solved numerically; an implementation of this algorithm is publicly available as part of the package sgamesolver. We further show that all logit Markov QRE are ε-equilibria, with a bound for ε that is independent of the payoff function of the game and decreases hyperbolically in λ. Finally, we establish a link to reinforcement learning by characterizing logit Markov QRE as the stationary points of a game dynamic that arises when all players follow the well-established reinforcement learning algorithm expected SARSA.
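A small sketch of the logit choice rule with precision parameter λ referenced above: choice probabilities are a softmax over action values. The numeric values are made up; in a logit Markov QRE they would be the total discounted payoffs implied by all players' strategies.

```python
# Logit (softmax) response with precision lambda.
import numpy as np

def logit_choice(values, lam):
    """Probability of each action under the logit choice rule with precision lam."""
    z = lam * np.asarray(values, dtype=float)
    z -= z.max()                       # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

values = [2.0, 1.5, 0.0]               # hypothetical discounted action values
for lam in [0.0, 1.0, 10.0]:
    print(lam, np.round(logit_choice(values, lam), 3))
# lam = 0 yields uniform play; as lam grows, probability mass concentrates on the
# payoff-maximizing action, approaching a best response.
```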
Chapter 3 introduces the logarithmic stochastic tracing procedure, a homotopy method to compute stationary equilibria of finite discounted stochastic games. We build on the linear stochastic tracing procedure (Herings and Peeters 2004) but introduce logarithmic penalty terms as a regularization device, which brings two major improvements. First, the scope of the method is extended: it now has a convergence guarantee for all games of this class, rather than just generic ones. Second, by ensuring a smooth and interior solution path, computational performance is increased significantly. A ready-to-use implementation is publicly available. As demonstrated here, its speed compares quite favorably with other available algorithms, and it allows games of considerable size to be solved in reasonable time. Because the method involves the gradual transformation of a prior into equilibrium strategies, it is possible to search the prior space and uncover potentially multiple equilibria and their respective basins of attraction. This also connects the method to the established theory of equilibrium selection.
Chapter 4 introduces sgamesolver, a Python package that uses the homotopy method to compute stationary equilibria of finite discounted stochastic games. A short user guide is complemented by a discussion of the homotopy method, of the two implemented homotopy functions (logit Markov QRE and logarithmic tracing), and of the predictor-corrector procedure and its implementation in sgamesolver. Basic and advanced use cases are demonstrated using several example games. Finally, we discuss the topic of symmetries in stochastic games.
By computing a volatility index (CVX) from cryptocurrency option prices, we analyze this market’s expectation of future volatility. Our method addresses the challenging liquidity environment of this young asset class and allows us to extract stable market-implied volatilities. Two alternative methods are considered to compute volatilities from granular intra-day cryptocurrency options data, which span the COVID-19 pandemic period. The CVX data therefore capture ‘normal’ market dynamics as well as distress and recovery periods. The methods yield two cointegrated index series, where the corresponding error correction model can be used as an indicator of market-implied tail risk. Comparing our CVX to existing volatility benchmarks for traditional asset classes, such as the VIX (equity) or GVX (gold), confirms that cryptocurrency volatility dynamics are often disconnected from traditional markets, yet share common shocks.
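An illustrative sketch of the cointegration and error-correction logic described above, using simulated series in place of the two CVX index variants; the variable names and data-generating process are assumptions, and statsmodels' standard Engle-Granger tools stand in for the paper's own procedure.

```python
# Cointegration test and a simple error correction model on simulated index series.
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(1)
common = np.cumsum(rng.normal(size=500))             # shared stochastic trend
cvx_a = common + rng.normal(scale=0.3, size=500)     # index from method A (simulated)
cvx_b = 0.9 * common + rng.normal(scale=0.3, size=500)  # index from method B (simulated)

t_stat, p_value, _ = coint(cvx_a, cvx_b)             # Engle-Granger cointegration test
print("cointegration p-value:", round(p_value, 3))

# Error correction model: changes in cvx_a respond to the lagged disequilibrium term.
resid = sm.OLS(cvx_a, sm.add_constant(cvx_b)).fit().resid
ecm = sm.OLS(np.diff(cvx_a),
             sm.add_constant(np.column_stack([np.diff(cvx_b), resid[:-1]]))).fit()
print(ecm.params)  # last coefficient: speed of adjustment back to the long-run relation
```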
This paper explores entrepreneurs’ initially intended exit strategies and compares them to their final exit paths using an inductive approach that builds on the grounded theory methodology. Our data show that initially intended and final exit strategies differ among entrepreneurs. Two groups of entrepreneurs emerged from our data. The first group comprises entrepreneurs who financed their firms through equity investors. The second group is made up of entrepreneurs who financed their businesses solely with their own equity. Our data show that the first group originally intended a financial harvest exit strategy and ultimately followed this harvest exit strategy. The second group initially intended a stewardship exit strategy but did not succeed in realizing it. We used the theory of planned behavior and the behavioral agency model to analyze our data. By examining our results from these two theoretical perspectives, our study explains how entrepreneurs’ exit intentions lead to their actual exit strategies.
This research examines the impact of online display advertising and paid search advertising relative to offline advertising on firm performance and firm value. Using proprietary data on annualized advertising expenditures for 1651 firms spanning seven years, we document that both display advertising and paid search advertising exhibit positive effects on firm performance (measured by sales) and firm value (measured by Tobin's q). Paid search advertising has a more positive effect on sales than offline advertising, consistent with paid search being closest to the actual purchase decision and having enhanced targeting abilities. Display advertising exhibits a relatively more positive effect on Tobin's q than offline advertising, consistent with its long-term effects. The findings suggest heterogeneous economic benefits across different types of advertising, with direct implications for managers in analyzing advertising effectiveness and external stakeholders in assessing firm performance.
Most event studies rely on cumulative abnormal returns, measured as percentage changes in stock prices, as their dependent variable. The stock price reflects the value of the operating business plus non-operating assets minus debt. Yet many events, in particular in marketing, only influence the value of the operating business, but not non-operating assets and debt. For these cases, the authors argue that the cumulative abnormal return on the operating business, defined as the ratio between the cumulative abnormal return on the stock price and the firm-specific leverage effect, is a more appropriate dependent variable. Ignoring the differences in firm-specific leverage effects inflates the impact of observations pertaining to firms with large debt and deflates those pertaining to firms with large non-operating assets. Observations of firms with high debt receive several times the weight attributed to firms with low debt. A simulation study and the reanalysis of three previously published marketing event studies show that ignoring firm-specific leverage effects influences an event study's results in unpredictable ways.
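A back-of-envelope illustration of the leverage-effect adjustment described in the abstract. The firm values are invented, and the leverage effect is taken, consistent with the abstract's definitions, as the value of the operating business relative to the market value of equity (where equity equals operating business plus non-operating assets minus debt).

```python
# Worked numeric example: an event that changes only the operating business.
equity = 100.0            # market value of equity (stock price x shares outstanding)
non_operating = 20.0      # non-operating assets
debt = 60.0               # debt
operating = equity - non_operating + debt    # = 140: value of the operating business

event_effect = 7.0                            # event adds 7 to the operating business only
car_stock = event_effect / equity             # 7.0% cumulative abnormal return on the stock
leverage_effect = operating / equity          # 1.4 (assumed definition, see lead-in)
car_operating = car_stock / leverage_effect   # 5.0% abnormal return on the operating business
print(car_stock, leverage_effect, car_operating)
# For a high-debt firm the stock-based CAR (7.0%) overstates the operating effect (5.0%).
```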
Knowledge of consumers' willingness to pay (WTP) is a prerequisite to profitable price-setting. To gauge consumers' WTP, practitioners often rely on a direct single question approach in which consumers are asked to explicitly state their WTP for a product. Despite its popularity among practitioners, this approach has been found to suffer from hypothetical bias. In this paper, we propose a rigorous method that improves the accuracy of the direct single question approach. Specifically, we systematically assess the hypothetical biases associated with the direct single question approach and explore ways to de-bias it. Our results show that by using the de-biasing procedures we propose, we can generate a de-biased direct single question approach that is accurate enough to be useful for managerial decision-making. We validate this approach with two studies in this paper.
In recent years, European regulators have debated restricting the time an online tracker can track a user in order to better protect consumer privacy. Despite the significance of these debates, there has been a noticeable absence of any comprehensive cost-benefit analysis. This article fills this gap on the cost side by suggesting an approach to estimate the economic consequences of lifetime restrictions on cookies for publishers. The empirical study on cookies of 54,127 users who received ∼128 million ad impressions over ∼2.5 years yields an average cookie lifetime of 279 days, with an average value of €2.52 per cookie. Only ∼13% of all cookies increase their daily value over time, but their average value is about four times larger than the average value of all cookies. Restricting cookies’ lifetime to one year (two years) could potentially decrease their lifetime value by ∼25% (∼19%), which represents a potential decrease in the value of all cookies of ∼9% (∼5%). Most cookies, however, would not be affected by lifetime restrictions of 12 or 24 months, as 72% (85%) of users delete their cookies within 12 (24) months. In light of the €10.60 billion cookie-based display ad revenue in Europe, such restrictions would endanger €904 million (€576 million) annually, equivalent to €2.08 (€1.33) per EU internet user. The article discusses the marketing strategy challenges and opportunities that these results imply for advertisers and publishers.
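A quick back-of-envelope check of the scale of the figures quoted above; all inputs are taken directly from the abstract, and only simple ratios are computed.

```python
# Relating the endangered revenue to total European cookie-based display ad revenue
# and to the stated per-user amounts.
total_revenue = 10.60e9        # cookie-based display ad revenue in Europe (EUR)
endangered_1y = 904e6          # revenue endangered by a one-year lifetime cap (EUR)
endangered_2y = 576e6          # revenue endangered by a two-year lifetime cap (EUR)

print(endangered_1y / total_revenue)   # ~0.085, consistent with the ~9% value decline
print(endangered_2y / total_revenue)   # ~0.054, consistent with the ~5% value decline
print(endangered_1y / 2.08)            # implied number of EU internet users: ~435 million
```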
Even as online advertising continues to grow, a central question remains: whom to target? Yet advertisers know little about how to select from the hundreds of audience segments (and combinations thereof) available for targeting in a profitable online advertising campaign. Utilizing insights from a field experiment on Facebook (Study 1), we develop a model that helps advertisers solve the cold-start problem of selecting audience segments for targeting. Our model enables advertisers to calculate the break-even performance of an audience segment that makes a targeted ad campaign at least as profitable as an untargeted one. Advertisers can use this novel model to decide whether to test specific audience segments in their campaigns (e.g., in randomized controlled trials). We apply our model to data from the Spotify ad platform to study the profitability of different audience segments (Study 2). Approximately half of those audience segments require the click-through rate to double compared to an untargeted campaign, which is unrealistically high for most ad campaigns. Our model also shows that narrow segments require a lift that is likely not attainable, specifically when the data quality of these segments is poor. We confirm this theoretical finding in an empirical study (Study 3): a decrease in data quality due to Apple’s introduction of the App Tracking Transparency (ATT) framework more negatively affects the click-through rate of narrow (versus broad) audience segments.
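A simplified sketch of the break-even idea, not the paper's model: a targeted campaign that pays a CPM premium needs a minimum click-through-rate lift to be at least as profitable per impression as an untargeted campaign. All prices and rates below are invented for illustration.

```python
# Break-even click-through rate for a targeted versus an untargeted campaign.
def break_even_ctr(ctr_untargeted, cpm_untargeted, cpm_targeted, value_per_click):
    """Smallest targeted CTR at which profit per impression matches the untargeted campaign."""
    profit_untargeted = ctr_untargeted * value_per_click - cpm_untargeted / 1000.0
    return (profit_untargeted + cpm_targeted / 1000.0) / value_per_click

ctr_star = break_even_ctr(ctr_untargeted=0.002, cpm_untargeted=2.0,
                          cpm_targeted=5.0, value_per_click=1.5)
print(ctr_star, ctr_star / 0.002)   # required CTR (0.004) and implied lift (2x) in this example
# In this hypothetical case the targeting premium requires the CTR to double,
# mirroring the order of magnitude discussed for the Spotify segments above.
```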
Small businesses face major challenges in becoming more innovative. These challenges are particularly prevalent in emerging economies, where high uncertainties are a barrier to innovation. We know from previous studies that linkages to universities, on the one hand, and public procurement, on the other, support large and innovative firms in their efforts to become more innovative. However, we do not know whether these positive effects also hold true for small businesses. In this paper, we focus on how policy strategies reducing information, market, and financial uncertainties shape small businesses’ innovation in China. Based on a sample of 926 small businesses derived from the World Bank Enterprises Survey in China (2012), we find that university-industry linkages enhance innovation, though only when it comes to minor forms of innovation. In line with the resource-based view of the firm, this effect is stronger for small businesses with higher capabilities. Moreover, we show that bidding for or delivering contracts to public sector clients has a positive effect on innovation, and in particular on major forms of innovation. In the bidding selection process, private firms and firms with higher capabilities are selected. Our findings show that both policy strategies have enhanced innovation, though with different effects on the degree of novelty. We attribute this finding to the different degrees of uncertainty they address.
Through their vocational training, trainees are expected to acquire, among other things, the competence to solve occupational problems. Final examinations serve to assess this competence, yet written commercial examination tasks still insufficiently represent problem situations whose solution requires problem-solving competence. Teachers at commercial vocational schools are also involved in creating examination tasks. This thesis investigates how they are prepared, in the first and second phases of teacher education, for creating problem-based tasks for summative-diagnostic purposes. To this end, document analyses covering both phases of teacher education are conducted. The results are corroborated through a questionnaire study with degree programme directors as well as interviews with subject coordinators of the teacher training seminars (Studienseminare). To capture the perceptions of prospective teachers, interviews are conducted with master's students of business education (Wirtschaftspädagogik) and with trainee teachers (Lehrkräfte im Vorbereitungsdienst, LiV) at commercial vocational schools.
The preliminary studies make it possible to identify needs for improvement in the education of teachers at commercial vocational schools. On this basis, a training concept is selected with justification. It is evaluated by means of a quasi-experimental study with master's students and trainee teachers (LiV). For the qualitative evaluation, interviews are conducted with participants of the intervention group. The results show that participants perceive the training intervention predominantly positively and that it leads to a learning gain, at least with regard to creating problem-based tasks. Based on the needs-oriented intervention and its evaluation results, a concept is proposed that offers a solution for meeting the identified needs for improvement. The results of this work have the potential to contribute to improving teacher education in the long term and thus, among other things, to making assessment tasks more valid.
Vulnerability comes, according to Orio Giarini, with two types of risk: human-made risks, also called entrepreneurial risks, and natural or pure risks such as accidents and earthquakes. Both types of risk are growing in dimension and are increasingly interrelated. To control this vulnerability, sophisticated insurance products are called for. Here, mutual insurance is relevant, in particular when risks are large, probabilities are uncertain or unknown, and events are interrelated or correlated. In this paper, the following three examples are discussed and the advantages of mutual insurance are shown: unknown probabilities connected with unforeseeable events, correlated risks, and macroeconomic or demographic risks.