This research examines the impact of online display advertising and paid search advertising relative to offline advertising on firm performance and firm value. Using proprietary data on annualized advertising expenditures for 1651 firms spanning seven years, we document that both display advertising and paid search advertising exhibit positive effects on firm performance (measured by sales) and firm value (measured by Tobin's q). Paid search advertising has a more positive effect on sales than offline advertising, consistent with paid search being closest to the actual purchase decision and having enhanced targeting abilities. Display advertising exhibits a relatively more positive effect on Tobin's q than offline advertising, consistent with its long-term effects. The findings suggest heterogeneous economic benefits across different types of advertising, with direct implications for managers in analyzing advertising effectiveness and external stakeholders in assessing firm performance.
Most event studies rely on cumulative abnormal returns, measured as percentage changes in stock prices, as their dependent variable. Stock price reflects the value of the operating business plus non-operating assets minus debt. Yet, many events, in particular in marketing, only influence the value of the operating business, but not non-operating assets and debt. For these cases, the authors argue that the cumulative abnormal return on the operating business, defined as the ratio between the cumulative abnormal return on stock price and the firm-specific leverage effect, is a more appropriate dependent variable. Ignoring the differences in firm-specific leverage effects inflates the impact of observations pertaining to firms with large debt and deflates those pertaining to firms with large non-operating assets. Observations of firms with high debt receive several times the weight attributed to firms with low debt. A simulation study and the reanalysis of three previously published marketing event studies show that ignoring the firm-specific leverage effects influences an event study's results in unpredictable ways.
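The proposed adjustment can be sketched in a few lines. The snippet below is our own illustration with hypothetical balance-sheet figures, not the authors' code: it recovers the operating-business value from the identity in the abstract (equity = operating business + non-operating assets − debt) and divides the stock-price CAR by the resulting leverage effect.

```python
# Illustrative sketch with hypothetical balance-sheet figures (not the
# authors' code). Equity = operating business + non-operating assets - debt,
# so the operating-business value is equity + debt - non-operating assets.

def leverage_effect(equity_value, debt, non_operating_assets):
    """Firm-specific leverage effect: operating-business value per unit of equity."""
    operating_value = equity_value + debt - non_operating_assets
    return operating_value / equity_value

def car_operating(car_stock, equity_value, debt, non_operating_assets):
    """CAR on the operating business = CAR on stock price / leverage effect."""
    return car_stock / leverage_effect(equity_value, debt, non_operating_assets)

# A highly levered firm: a 2% stock-price CAR maps to a 1% operating-business CAR.
print(round(car_operating(0.02, equity_value=100.0, debt=150.0,
                          non_operating_assets=50.0), 4))  # 0.01
```

The example makes the weighting distortion concrete: for this firm, using the raw stock-price CAR would double the apparent operating-business effect.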
This article uses information from two data sources, Compustat and Nexis Uni, and textual analysis to measure and validate the brand focus and customer focus of 109 U.S. listed retailers. The results from an analysis of their 853 earnings calls in 2010 and 2018 show that, on average, both foci increased over time. Although both foci vary substantially, brand focus varies more widely across retailers than customer focus. The two foci are independent of each other. Specialty retailers have the highest brand focus, and internet & direct marketing retailers have the highest customer focus. A positive correlation exists between a retailer’s customer focus and its profitability, but not between a retailer’s brand focus and its profitability. The authors use the results to generate a research agenda that can direct future research toward systematically exploring firms’ brand and customer focus.
Knowledge of consumers' willingness to pay (WTP) is a prerequisite to profitable price-setting. To gauge consumers' WTP, practitioners often rely on a direct single question approach in which consumers are asked to explicitly state their WTP for a product. Despite its popularity among practitioners, this approach has been found to suffer from hypothetical bias. In this paper, we propose a rigorous method that improves the accuracy of the direct single question approach. Specifically, we systematically assess the hypothetical biases associated with the direct single question approach and explore ways to de-bias it. Our results show that by using the de-biasing procedures we propose, we can generate a de-biased direct single question approach that is accurate enough to be useful for managerial decision-making. We validate this approach with two studies in this paper.
In recent years, European regulators have debated restricting the time an online tracker can track a user in order to better protect consumer privacy. Despite the significance of these debates, there has been a noticeable absence of any comprehensive cost-benefit analysis. This article fills this gap on the cost side by suggesting an approach to estimate the economic consequences of lifetime restrictions on cookies for publishers. The empirical study on cookies of 54,127 users who received ∼128 million ad impressions over ∼2.5 years yields an average cookie lifetime of 279 days, with an average value of €2.52 per cookie. Only ∼13% of all cookies increase their daily value over time, but their average value is about four times larger than the average value of all cookies. Restricting cookies’ lifetime to one year (two years) could potentially decrease their lifetime value by ∼25% (∼19%), which represents a potential decrease in the value of all cookies of ∼9% (∼5%). Most cookies, however, would not be affected by lifetime restrictions of 12 or 24 months, as 72% (85%) of users delete their cookies within 12 (24) months. In light of the €10.60 billion cookie-based display ad revenue in Europe, such restrictions would endanger €904 million (€576 million) annually, equivalent to €2.08 (€1.33) per EU internet user. The article discusses the marketing strategy challenges and opportunities these results present for advertisers and publishers.
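The mechanics of a lifetime cap can be shown with a back-of-the-envelope computation. The numbers below are hypothetical, not the study's estimates; the study derives actual daily-value profiles from the impression data.

```python
# Back-of-the-envelope sketch with hypothetical numbers (not the study's
# estimated value profiles): a lifetime cap truncates a cookie's cumulative value.

def capped_value(daily_values, cap_days):
    """Cumulative cookie value when tracking must stop after cap_days."""
    return sum(daily_values[:cap_days])

# A hypothetical cookie that would naturally live 600 days, worth EUR 0.01/day.
daily = [0.01] * 600
full = sum(daily)                        # value over the natural lifetime
one_year = capped_value(daily, 365)      # value under a 12-month cap
print(round(1 - one_year / full, 3))     # share of value lost: 0.392
```

In the study, most cookies are deleted before any cap binds, which is why the aggregate loss (∼9%) is far smaller than the loss for a long-lived cookie like this one.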
Even as online advertising continues to grow, a central question remains: Whom to target? Yet advertisers know little about how to select from the hundreds of audience segments (and combinations thereof) available for targeting in a profitable online advertising campaign. Utilizing insights from a field experiment on Facebook (Study 1), we develop a model that helps advertisers solve the cold-start problem of selecting audience segments for targeting. Our model enables advertisers to calculate the break-even performance of an audience segment to make a targeted ad campaign at least as profitable as an untargeted one. Advertisers can use this novel model to decide whether to test specific audience segments in their campaigns (e.g., in randomized controlled trials). We apply our model to data from the Spotify ad platform to study the profitability of different audience segments (Study 2). Approximately half of those audience segments require the click-through rate to double compared to an untargeted campaign, which is unrealistically high for most ad campaigns. Our model also shows that narrow segments require a lift that is likely not attainable, specifically when the data quality of these segments is poor. We confirm this theoretical finding in an empirical study (Study 3): A decrease in data quality due to Apple’s introduction of the App Tracking Transparency (ATT) framework more negatively affects the click-through rate of narrow (versus broad) audience segments.
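The intuition behind a break-even calculation can be sketched in one line. This is our own stylized simplification, not the paper's full model: holding the value per click fixed, a targeted segment matches an untargeted campaign's profit per impression only if its click-through rate rises in proportion to the price premium per impression.

```python
# Stylized simplification (our own, not the paper's model): with a constant
# value per click, break-even requires the CTR lift to offset the CPM premium.

def break_even_ctr(ctr_untargeted, cpm_untargeted, cpm_targeted):
    """Minimum CTR a targeted segment needs to break even with an
    untargeted campaign, assuming a constant value per click."""
    return ctr_untargeted * (cpm_targeted / cpm_untargeted)

# A segment costing twice the untargeted CPM must double the CTR.
print(round(break_even_ctr(0.005, cpm_untargeted=2.0, cpm_targeted=4.0), 4))  # 0.01
```

This toy version already reproduces the abstract's headline pattern: segments priced at twice the untargeted rate need a doubled click-through rate to be worth testing.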
Small businesses face major challenges to becoming more innovative. These challenges are particularly prevalent in emerging economies, where high uncertainties are a barrier to innovation. We know from previous studies that linkages to universities, on the one hand, and public procurement, on the other, support large and innovative firms in their efforts to become more innovative. However, we do not know whether these positive effects also hold true for small businesses. In this paper, we focus on how policy strategies reducing information, market and financial uncertainties shape small businesses’ innovation in China. Based on a sample of 926 small businesses derived from the World Bank Enterprise Surveys in China (2012), we find that university-industry linkages enhance innovation, though only when it comes to minor forms of innovation. In line with the resource-based view of the firm, this effect is stronger for small businesses with higher capabilities. Moreover, we show that bidding for or delivering contracts to public sector clients has a positive effect on innovation, in particular on major forms of innovation. In the bidding selection process, private firms and firms with higher capabilities are selected. Our findings show that both policy strategies have enhanced innovation, though with different effects on the degree of novelty. We attribute this finding to the different degrees of uncertainty they address.
In this article, we examine anti-refugee hate crime in the wake of the large influx of refugees to Germany in 2014 and 2015. By exploiting institutional features of the assignment of refugees to German regions, we estimate the impact of unexpected and sudden large-scale immigration on hate crime against refugees. Results indicate that it is not simply the size of local refugee inflows which drives the increase in hate crime, but rather the combination of refugee arrivals and latent anti-refugee sentiment. We show that ethnically homogeneous areas, areas which experienced hate crimes in the 1990s, and areas with high support for the Nazi party in the Weimar Republic, are more prone to respond to the arrival of refugees with incidents of hate crime against this group. Our results highlight the importance of regional anti-immigration sentiment in the analysis of the incumbent population’s reaction to immigration.
Vulnerability, according to Orio Giarini, comes with two types of risk: human-made risks, also called entrepreneurial risks, and natural or pure risks such as accidents and earthquakes. Both types of risk are growing in dimension and are increasingly interrelated. Controlling this vulnerability calls for sophisticated insurance products. Mutual insurance is particularly relevant when risks are large, probabilities uncertain or unknown, and events interrelated or correlated. This paper discusses three examples and shows the advantages of mutual insurance in each: unknown probabilities connected with unforeseeable events, correlated risks, and macroeconomic or demographic risks.
We estimate the causal effect of shared e-scooter services on traffic accidents by exploiting the variation in the availability of e-scooter services induced by the staggered rollout across 93 cities in six countries. Police-reported accidents involving personal injuries in the average month increased by around 8.2% after shared e-scooters were introduced. Effects are large during summer and insignificant during winter. Further heterogeneity analysis reveals the largest estimated effects for cities with limited cycling infrastructure, while no effects are detectable in cities with high bike-lane density. This difference suggests that public policy can play a crucial role in mitigating accidents related to e-scooters and, more generally, to changes in urban mobility.
This paper proposes tests for out-of-sample comparisons of interval forecasts based on parametric conditional quantile models. The tests rank the distance between actual and nominal conditional coverage with respect to the set of conditioning variables from all models, for a given loss function. We propose a pairwise test to compare two models for a single predictive interval. The set-up is then extended to a comparison across multiple models and/or intervals. The limiting distribution varies depending on whether models are strictly non-nested or overlapping. In the latter case, degeneracy may occur. We establish the asymptotic validity of wild bootstrap based critical values across all cases. An empirical application to Growth-at-Risk (GaR) uncovers situations in which a richer set of financial indicators is found to outperform a commonly used benchmark model when predicting downside risk to economic activity.
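The basic quantity being compared can be illustrated with a toy computation of the gap between empirical and nominal interval coverage. This is our simplification only: the paper's tests measure coverage conditionally on the conditioning variables and rely on wild-bootstrap critical values.

```python
# Toy illustration (our simplification; the paper works with conditional
# coverage and wild-bootstrap inference): the gap between the empirical
# coverage of a predictive interval and its nominal level.

def coverage_gap(y, lower, upper, nominal):
    """|empirical coverage - nominal level| of a predictive interval."""
    hits = sum(l <= v <= u for v, l, u in zip(y, lower, upper))
    return abs(hits / len(y) - nominal)

y    = [1.2, 0.4, -0.3, 2.1, 0.0]   # realized values
low  = [0.0, 0.0, -1.0, 0.0, -0.5]  # interval lower bounds
high = [2.0, 1.0,  0.5, 2.0,  0.5]  # interval upper bounds
print(round(coverage_gap(y, low, high, nominal=0.90), 4))  # 4/5 covered -> 0.1
```

A model whose intervals achieve coverage closer to the nominal level would receive a smaller gap, which is the ordering the proposed tests formalize.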
This study explores the implications of rising markups for optimal Mirrleesian income and profit taxation. Using a stylized model with two individuals, the main forces shaping welfare-optimal policies are analytically characterized. Although a higher profit tax has redistributive benefits, it adversely affects market competition, leading to a greater equilibrium cost-of-living. Rising markups directly contribute to a decline in optimal marginal taxes on labor income. The optimal policy response to higher markups includes increasingly relying on the profit tax to fund redistribution. Declining optimal marginal income taxes assists the redistributive function of the profit tax by contributing to the expansion of the profit tax base. This response alone considerably increases the equilibrium cost-of-living. Nevertheless, a majority of the individuals become better off with the optimal policy. If it is not possible to tax profits optimally, due, for example, to profit shifting, increasing redistribution via income taxes is not optimal; every individual is worse off relative to the scenario with optimal profit taxation.
The debate on monetary and fiscal policy is heavily influenced by estimates of the equilibrium real interest rate. In particular, this concerns estimates derived from a simple aggregate demand and Phillips curve model with time-varying components as proposed by Laubach and Williams (2003). For example, Summers (2014a) refers to these estimates as important evidence for a secular stagnation and the need for fiscal stimulus. Yellen (2015, 2017) has made use of such estimates in order to explain and justify why the Federal Reserve has held interest rates so low for so long. First, we re-estimate the United States equilibrium rate with the methodology of Laubach and Williams (2003). Second, we build on their approach and an alternative specification to provide new estimates for the United States, Germany, the euro area and Japan. Third, we subject these estimates to a battery of sensitivity tests. Due to the great uncertainty and sensitivity that accompany these equilibrium rate estimates, the observed decline in the estimates is not a reliable indicator of a need for expansionary monetary and fiscal policy. Yet, if these estimates are employed to determine the appropriate monetary policy stance, they are better used together with a consistent estimate of the level of potential output.
While the COVID-19 pandemic had a large and asymmetric impact on firms, many countries quickly enacted massive business rescue programs specifically targeted at smaller firms. Little is known about the effects of such policies on business entry and exit, investment, factor reallocation, and macroeconomic outcomes. This paper builds a general equilibrium model with heterogeneous and financially constrained firms in order to evaluate the short- and long-term consequences of small firm rescue programs in a pandemic recession. We calibrate the stationary equilibrium and the pandemic shock to the U.S. economy, taking into account the actual Paycheck Protection Program (PPP) as a specific policy. We find that the policy has only a modest impact on aggregate output and employment because (i) jobs are saved predominantly in the smallest firms, which account for a minor share of employment, and (ii) the grant reduces the reallocation of resources towards larger and less impacted firms. Much of the reallocation occurs in the aftermath of the pandemic episode. By preventing inefficient liquidations, the policy dampens the long-term declines of aggregate consumption and of the real wage, thus delivering small welfare gains.
Nowadays, digitalization has an immense impact on the landscape of jobs. This technological revolution creates new industries and professions, promises greater efficiency and improves the quality of working life. However, emerging technologies such as robotics and artificial intelligence (AI) are reducing human intervention, thus advancing automation and eliminating thousands of jobs and entire occupational profiles. To prepare employees for the changing demands of work, adequate and timely training of the workforce and real-time support of workers in new positions is necessary. Therefore, it is investigated whether user-oriented technologies, such as augmented reality (AR) and virtual reality (VR), can be applied “on-the-job” for such training and support—also known as intelligence augmentation (IA). To address this problem, this work synthesizes results of a systematic literature review as well as a practically oriented search on augmented reality and virtual reality use cases within the IA context. A total of 150 papers and use cases are analyzed to identify suitable areas of application in which it is possible to enhance employees' capabilities. The results of both the theoretical and the practical work show that VR is primarily used to train employees without prior knowledge, whereas AR is used to expand the scope of competence of individuals in their field of expertise while on the job. Based on these results, a framework is derived which provides practitioners with guidelines as to how AR or VR can support workers at their job so that they can keep up with anticipated skill demands. Furthermore, it shows for which application areas AR or VR can provide workers with sufficient training to learn new job tasks. In this way, this research provides practical recommendations in order to accompany the imminent distortions caused by AI and similar technologies and to alleviate associated negative effects on the German labor market.
Goal setting is vital in learning sciences, but the scientific evaluation of optimal learning goals is underexplored. This study proposes a novel methodological approach to determine optimal learning goals. The data in this study comes from a gamified learning app implemented in an undergraduate accounting course at a large German university. With a combination of decision trees and regression analyses, the goals connected to the badges implemented in the app are evaluated. The results show that the initial badge set already motivated learning strategies that led to better grades on the exam. However, the results indicate that the levels of the goals could be improved, and additional badges could be implemented. In addition to new goal levels, new goal types are also discussed. The findings show that learning goals initially determined by the instructors need to be evaluated to offer an optimal motivational effect. The new methodological approach used in this study can be easily transferred to other learning data sets to provide further insights.
Life insurers use accounting and actuarial techniques to smooth reporting of firm assets and liabilities, seeking to transfer surpluses in good years to cover benefit payouts in bad years. Yet these techniques have been criticized as they make it difficult to assess insurers’ true financial status. We develop stylized and realistically-calibrated models of a participating life annuity, an insurance product that pays retirees guaranteed lifelong benefits along with variable non-guaranteed surplus. Our goal is to illustrate how accounting and actuarial techniques for this type of financial contract shape policyholder wellbeing, along with insurer profitability and stability. Smoothing adds value to both the annuitant and the insurer, so curtailing smoothing could undermine the market for long-term retirement payout products.
We investigate how financial literacy shapes older Americans’ demand for financial advice. Using an experimental module fielded in the Health and Retirement Study, we show that financial literacy strongly improves the quality but not the quantity of financial advice sought. In particular, more financially literate people seek financial help from professionals. This effect is more pronounced among older people and those with more wealth and more complex financial positions. Our results imply that financial literacy and financial advisory services are complements rather than substitutes.
This paper examines heterogeneity in time discounting among a representative sample of elderly Americans, as well as its role in explaining key economic behaviors at older ages. We show how older Americans evaluate simple (hypothetical) inter-temporal choices in which payments today are compared with payments in the future. Using the indicators derived from this measure, we then demonstrate that differences in discounting patterns are associated with characteristics of particular importance in elderly populations. For example, cognitive deficits are associated with greater impatience, whereas bequest motives are associated with less impatience. We then relate our discounting measure to key economic outcomes and find that impatience is associated with lower wealth, fewer investments in health, and less planning for end of life care.
The US Treasury recently permitted deferred longevity income annuities to be included in pension plan menus as a default payout solution, yet little research has investigated whether more people should convert some of the $18 trillion they hold in employer-based defined contribution plans into lifelong income streams. We investigate this innovation using a calibrated lifecycle consumption and portfolio choice model embodying realistic institutional considerations. Our welfare analysis shows that defaulting a modest portion of retirees’ 401(k) assets (over a threshold) into such annuities is an attractive way to enhance retirement security, raising welfare by up to 20% of retiree plan accruals.
Do required minimum distribution 401(k) rules matter, and for whom? Insights from a lifecycle model
(2023)
Tax-qualified vehicles have helped U.S. private-sector workers accumulate $33 trillion in retirement plans. An important yet often overlooked institutional feature shaping decumulations from these plans is the “Required Minimum Distribution” (RMD) regulation, which requires retirees to withdraw a minimum fraction from their retirement accounts or pay excise taxes on withdrawal shortfalls. Our calibrated lifecycle model measures the impact of RMD rules on heterogeneous households’ financial behavior during their work lives and in retirement. The model shows that reforms delaying or eliminating the RMD rules have little effect on consumption profiles, but they would influence withdrawals and tax payments for households with bequest motives.
Artificial Intelligence (AI) and Machine Learning (ML) are currently hot topics in industry and business practice, while management-oriented research disciplines seem reluctant to adopt these sophisticated data analytics methods as research instruments. Even the Information Systems (IS) discipline, with its close connections to Computer Science, seems conservative when conducting empirical research endeavors. To assess the magnitude of the problem and to understand its causes, we conducted a bibliographic review of publications in high-level IS journals. We reviewed 1,838 articles that matched corresponding keyword queries in journals from the AIS senior scholar basket, Electronic Markets, and Decision Support Systems (ranked B). In addition, we conducted a survey among IS researchers (N = 110). Based on the findings from our sample, we evaluate different potential causes that could explain why ML methods are rather underrepresented in top-tier journals and discuss how the IS discipline could successfully incorporate ML methods in research undertakings.
Tail-correlation matrices are an important tool for aggregating risk measurements across risk categories, asset classes and/or business segments. This paper demonstrates that traditional tail-correlation matrices—which are conventionally assumed to have ones on the diagonal—can lead to substantial biases of the aggregate risk measurement’s sensitivities with respect to risk exposures. Due to these biases, decision-makers receive an odd view of the effects of portfolio changes and may be unable to identify the optimal portfolio from a risk-return perspective. To overcome these issues, we introduce the “sensitivity-implied tail-correlation matrix”. The proposed tail-correlation matrix allows for a simple deterministic risk aggregation approach which reasonably approximates the true aggregate risk measurement according to the complete multivariate risk distribution. Numerical examples demonstrate that our approach is a better basis for portfolio optimization than the Value-at-Risk implied tail-correlation matrix, especially if the calibration portfolio (or current portfolio) deviates from the optimal portfolio.
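The aggregation step that a tail-correlation matrix feeds into is the standard square-root formula. The sketch below, with made-up figures, shows only that formula; the paper's contribution, the sensitivity-implied calibration of the matrix itself, is more involved.

```python
# Minimal sketch of the square-root aggregation formula used with
# (tail-)correlation matrices. Figures are made up; the paper's
# sensitivity-implied calibration of the matrix is more involved.
import math

def aggregate_risk(standalone, corr):
    """sqrt(s' C s): aggregate stand-alone risk figures with a correlation matrix."""
    total = 0.0
    for i, si in enumerate(standalone):
        for j, sj in enumerate(standalone):
            total += si * corr[i][j] * sj
    return math.sqrt(total)

s = [3.0, 4.0]                   # stand-alone risks of two segments
c = [[1.0, 0.5], [0.5, 1.0]]     # conventional form: ones on the diagonal
print(round(aggregate_risk(s, c), 3))  # sqrt(9 + 16 + 2*0.5*3*4) = sqrt(37) ≈ 6.083
```

The paper's point is that fixing the diagonal to ones, as in `c` above, can bias the sensitivities of this aggregate with respect to the exposures `s`; the sensitivity-implied matrix relaxes that convention.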
We empirically examine how systemic risk in the banking sector leads to correlated risk in office markets of global financial centers. In so doing, we compute an aggregated measure of systemic risk in financial centers as the cumulated expected capital shortfall of local financial institutions. Our identification strategy is based on a double counterfactual approach by comparing normal with financial distress periods as well as office with retail markets. We find that office market interconnectedness arises from systemic risk during financial turmoil periods. Office market performance in a financial center is affected by returns of systemically linked financial center office markets only during a systemic banking crisis. In contrast, there is no evidence of correlated risk during normal times and among the within-city counterfactual retail sector. The decline in office market returns during a banking crisis is larger in financial centers compared to non-financial centers.
Having a gatekeeper position in a collaborative network offers firms great potential to gain competitive advantages. However, it is not well understood what kinds of collaboration are associated with such a position. Conceptually grounded in social network theory, this study draws on the resource-based view and the relational factors view to investigate which types of collaboration characterize firms in a gatekeeper position, which ultimately could improve firm performance in subsequent periods. The empirical analysis utilizes a unique longitudinal data set to examine dynamic network formation. We used a data crawling approach to reconstruct collaboration networks among the 500 largest companies in Germany over nine years and matched these networks with performance data. The results indicate that firms in gatekeeper positions are more likely to engage in medium-intensity collaborations and less likely to engage in weak-intensity collaborations. Strong-intensity collaborations are not related to the likelihood of being a gatekeeper. Our study further reveals that a firm's knowledge base is an important moderator that can increase the performance benefits of holding a gatekeeper position.
Questionable research practices have generated considerable recent interest throughout and beyond the scientific community. We subsume such practices involving secret data snooping that influences subsequent statistical inference under the term MESSing (manipulating evidence subject to snooping) and discuss, illustrate and quantify the possibly dramatic effects of several forms of MESSing using an empirical and a simple theoretical example. The empirical example uses numbers from the most popular German lottery, which seem to suggest that 13 is an unlucky number.
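The "unlucky 13" claim lends itself to an exact binomial test: under the null, each number in a 6-out-of-49 lottery appears in a draw with probability 6/49. The counts below are made up for illustration; the paper works with the actual draw history.

```python
# Hedged illustration with made-up counts (the paper uses the actual draw
# history): under the null, each number in a 6-out-of-49 lottery appears
# in a draw with probability 6/49; 'unlucky' means appearing too rarely.
from math import comb

def binom_pvalue_lower(k, n, p):
    """Exact one-sided p-value P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

n_draws, hits_of_13 = 500, 45   # hypothetical: 13 drawn 45 times in 500 draws
p = 6 / 49                      # expected frequency per draw, ~0.1224
pval = binom_pvalue_lower(hits_of_13, n_draws, p)
print(pval < 0.05)              # True: 45 hits vs. ~61 expected looks 'unlucky'
```

The paper's warning applies precisely here: if the number 13 (or the sample window) was chosen *after* looking at the data, this nominally significant p-value is an artifact of snooping rather than evidence.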
This paper analyzes the scope of the private market for pandemic insurance. We develop a framework that explains theoretically how the equilibrium price of pandemic insurance depends on accumulation risk, the covariance between pandemic claims and other claims, and the covariance between pandemic claims and stock market performance. Using the natural catastrophe (NatCat) insurance market as a laboratory, we estimate the relationship between the insurance price markup and the tail characteristics of the loss distribution. Then, using high-frequency data tracking the economic impact of the COVID-19 pandemic in the United States, we calibrate the loss distribution of a hypothetical insurance contract designed to alleviate the impact of the pandemic on small businesses. The price markup of this pandemic insurance contract corresponds to the top 20% of markups observed in the NatCat insurance market. Finally, we analyze an intertemporal risk-sharing scheme that can reduce the expected shortfall of the loss distribution by 50%.
Data is considered the new oil of the economy, but privacy concerns limit its use, leading to a widespread sense that data analytics and privacy are contradictory. Yet such a view is too narrow, because firms can implement a wide range of methods that satisfy different degrees of privacy and still enable them to leverage varied data analytics methods. Therefore, the current study specifies different functions related to data analytics and privacy (i.e., data collection, storage, verification, analytics, and dissemination of insights), compares how these functions might be performed at different levels (consumer, intermediary, and firm), outlines how well different analytics methods address consumer privacy, and draws several conclusions, along with future research directions.
The present study investigates the moderating effect of usage intensity of the social networking site (SNS) Instagram (IG) on the influence of advertisement disclosure types on advertising performance. A national sample (N = 566) participated in a randomized online experiment including a real influencer and followers in order to investigate how different advertisement disclosure types affect advertising performance and how usage intensity moderates this effect. We find that disclosing an influencer’s postings with “#ad” increases the trustworthiness of the influencer and the general credibility of the posting for heavy users, but not for light users. Followership was found to strongly improve all investigated variables (attitude toward the product placement, trustworthiness of the spokesperson, and general credibility of the posting). This study is the first to distinguish between heavy and light usage intensity, and between followers and non-followers of an IG user, when examining the effects of advertisement disclosure types on advertising performance. To conclude, we present a number of recommendations regarding how advertisers, influencers, and SNS providers should develop strategies for monitoring, understanding, and responding to different social media users, e.g., to closely monitor an influencer’s audience to identify heavy users and optimally target them.
The current economic landscape is complex and globalized, and it imposes on individuals the responsibility for their own financial security. This situation has been intensified by the COVID-19 crisis, since short-time work and layoffs significantly limit the availability of financial resources for individuals. Due to the long duration of the lockdown, these challenges will have a long-term impact and affect the financial well-being of many citizens. Moreover, it can be assumed that the consequences of this crisis will once again particularly affect groups of people who have already frequently been identified as having low financial literacy. Financial literacy is therefore an important target for educational measures and interventions. However, it cannot be considered in isolation but must take into account the many potential factors that influence financial literacy alone or in combination. These include personality traits and socio-demographic factors as well as the (in)ability to delay gratification. Against this background, individualized support offers can be made. With this in mind, in the first step of this study, we analyze the complex interaction of personality traits, socio-demographic factors, the (in)ability to delay gratification, and financial literacy. In the second step, we differentiate the identified effects across different groups to identify moderating effects, which, in turn, allow conclusions to be drawn about the need for individualized interventions. The results show that gender and educational background moderate the effects occurring between self-reported financial literacy, financial learning opportunities, delay of gratification, and financial literacy.
A person's intelligence level positively influences his or her professional success. Gifted and highly intelligent individuals should therefore be successful in their careers. However, previous findings on the occupational situation of gifted adults are mainly known from popular scientific sources in the fields of coaching and self-help groups and confirm prevailing stereotypes that gifted people have difficulties at work. Reliable studies are scarce. This systematic literature review examines 40 studies with a total of 22 job-related variables. Results are shown in general for (a) the employment situation and more specifically for the occupational aspects (b) career, (c) personality and behavior, (d) satisfaction, (e) organization, and (f) influence of giftedness on the profession. Moreover, possible differences between female and male gifted individuals and between gifted and non-gifted individuals are analyzed. Based on these findings, implications for practice as well as further research are discussed.
The importance of agile methods has increased in recent years, not only to manage IT projects but also to establish flexible and adaptive organisational structures, which are essential to deal with disruptive changes and build successful digital business strategies. This paper takes an industry-specific perspective by analysing the dissemination, objectives and relative popularity of agile frameworks in the German banking sector. The data provides insights into expectations and experiences associated with agile methods and indicates possible implementation hurdles and success factors. Our research provides the first comprehensive analysis of agile methods in the German banking sector. The comparison with a selected number of fintechs has revealed some differences between banks and fintechs. We found that almost all banks and fintechs apply agile methods in IT projects. However, fintechs have relatively more experience with agile methods than banks and use them more intensively. Scrum is the most relevant framework used in practice. Scaled agile frameworks are so far negligible in the German banking sector. Acceleration of projects is apparently the most important objective of deploying agile methods. In addition, agile methods can contribute to cost savings and lead to improved quality and innovation performance, though for banks it is evidently more challenging to reach their respective targets than for fintechs. Overall, our findings suggest that German banks are still in a maturing process of becoming more agile and that there is room for an accelerated adoption of agile methods in general and scaled agile frameworks in particular.
This paper examines rent sharing in private investments in public equity (PIPEs) between newly public firms and private investors. The evidence suggests highly asymmetric rent sharing. Newly public firms earn a negative return of up to −15% in the first post-PIPE year, while investors benefit due to the ability to dictate transaction terms. The results are economically relevant because newly public firms are, at least in recent years, more likely to tap private rather than public markets for follow-on financing shortly after the initial public offering (IPO), and because the results for newly public firms contrast with those for the broad PIPE market in Lim et al. (2021). The study also contributes to the PIPE literature by offering an integrative view of competing theories of the cross-section of post-PIPE stock returns. We simultaneously test proxies for corporate governance, asymmetric information, bargaining power, and managerial entrenchment. While all explanations have univariate predictive power for the post-PIPE performance, only the proxies for corporate governance and asymmetric information are robust in ceteris-paribus tests.
We use census data to show that structural transformation reflects a fundamental reallocation of labour from goods to services, instead of a relabelling that occurs when goods-producing firms outsource their in-house service production. The novelty of our approach is that it categorizes labour by occupations, which are invariant to outsourcing. We find that the reallocation of labour from goods-producing to service-producing occupations is a robust feature in censuses from around the world and different time periods. To understand the underlying forces, we propose a tractable model in which uneven occupation-specific technological change generates structural transformation of occupation employment.
We propose a novel approach to the study of international trade based on a theory of country integration that embodies a broad systemic viewpoint on the relationship between trade and growth. Our model leads to an indicator of country openness that measures a country's level of integration through the full architecture of its connections in the trade network. We apply our methodology to a sample of 204 countries and find a sizable and significant positive relationship between our integration measure and a country's growth rate, while that of the traditional measures of outward orientation is only minor and statistically insignificant.
This paper defends The Transformation of Values into Prices on the Basis of Random Systems, published in EIER, by answering the Comments made in the same journal by Professors Mori, Morioka and Yamazaki. The clarifications mainly concern the justification of the randomness assumptions, the conditions needed to obtain the equality of total profit with total surplus value in the simplified one-industry system and the invariance of the results to changes in the units of measurement.
Sample-based longitudinal discrete choice experiments: preferences for electric vehicles over time
(2021)
Discrete choice experiments have emerged as the state-of-the-art method for measuring preferences, but they are mostly used in cross-sectional studies. In seeking to make them applicable for longitudinal studies, our study addresses two common challenges: working with different respondents and handling altering attributes. We propose a sample-based longitudinal discrete choice experiment in combination with a covariate-extended hierarchical Bayes logit estimator that allows one to test the statistical significance of changes. We showcase this method’s use in studies about preferences for electric vehicles over six years and empirically observe that preferences develop in an unpredictable, non-monotonous way. We also find that inspecting only the absolute differences in preferences between samples may result in misleading inferences. Moreover, surveying a new sample produced similar results as asking the same sample of respondents over time. Finally, we experimentally test how adding or removing an attribute affects preferences for the other attributes.
We have designed and implemented an experimental module in the 2014 Health and Retirement Study to measure older persons' willingness to defer claiming of Social Security benefits. Under the current system's status quo, where delayed claiming boosts eventual benefits, we show that 46% of the respondents would delay claiming and work longer. If respondents were offered an actuarially fair lump sum payment instead of higher lifelong benefits, about 56% indicate they would delay claiming. Without a work requirement, the average amount needed to induce delayed claiming is only $60,400, while when part-time work is stipulated, the amount is slightly higher, $66,700. This small difference implies a low utility value of leisure foregone, of under 20% of average household income.
The modern tontine : an innovative instrument for longevity risk management in an aging society
(2020)
We investigate whether a historical pension concept, the tontine, yields enough innovative potential to extend and improve the prevailing privately funded pension solutions in a modern way. The tontine basically generates an age-increasing cash flow, which can help to match the increasing financing needs at old ages. In contrast to traditional pension products, however, the tontine generates volatile cash flows, which means that the insurance character of the tontine cannot be guaranteed in every situation. By employing Multi Cumulative Prospect Theory (MCPT) we answer the question to what extent tontines can be a complement to or a substitute for traditional annuities. We find that it is only optimal to invest in tontines for a certain range of initial wealth. In addition, we investigate to what extent the tontine size, the volatility of individual liquidity needs and expected mortality rates contribute to the demand for tontines.
Crowdfunding platforms offer project initiators the opportunity to acquire funds from the Internet crowd and, therefore, have become a valuable alternative to traditional sources of funding. However, some processes on crowdfunding platforms cause undesirable external effects that influence the funding success of projects. In this context, we focus on the phenomenon of project overfunding. Massively overfunded projects have been discussed to overshadow other crowdfunding projects which in turn receive less funding. We propose a funding redistribution mechanism to internalize these overfunding externalities and to improve overall funding results. To evaluate this concept, we develop and deploy an agent-based model (ABM). This ABM is based on a multi-attribute decision-making approach and is suitable to simulate the dynamic funding processes on a crowdfunding platform. Our evaluation provides evidence that possible modifications of the crowdfunding mechanisms bear the chance to optimize funding results and to alleviate existing flaws.
Correction to: Computational Economics https://doi.org/10.1007/s10614-020-10061-x
The original publication has been updated. In the original publication of this article, under the Introduction heading section, the corrections to the second paragraph’s inline equation were not incorporated. The author’s additional corrections have also been incorporated. The publisher apologizes for the error made during production.
India has recorded 142,186 deaths over 36 administrative regions, placing India third in the world after the US and Brazil for COVID-19 deaths as of 12 December 2020. Studies indicate that the south-west monsoon season plays a role in the dynamics of contagious diseases, which tend to peak post-monsoon season. Recent studies show that vitamin D and its primary source, Ultraviolet-B (UVB) radiation, may play a protective role in mitigating COVID-19 deaths. However, the combined roles of the monsoon season and UVB radiation in COVID-19 in India remain still unclear. In this observational study, we empirically study the respective roles of monsoon season and UVB radiation, whilst further exploring whether the monsoon season negatively impacts the protective role of UVB radiation in COVID-19 deaths in India. We apply a log-linear Mundlak model to a panel dataset of 36 administrative regions in India from 14 March 2020–19 November 2020 (n = 6751). We use the cumulative COVID-19 deaths as the dependent variable. We isolate the association of monsoon season and UVB radiation as measured by Ultraviolet Index (UVI) from other confounding time-constant and time-varying region-specific factors. After controlling for various confounding factors, we observe that a unit increase in UVI and the monsoon season are separately associated with 1.2 percentage points and 7.5 percentage points decline in growth rates of COVID-19 deaths in the long run. These associations translate into substantial relative changes. For example, a permanent unit increase of UVI is associated with a decrease of growth rates of COVID-19 deaths by 33% (−1.2 percentage points). However, the monsoon season mitigates the protective role of UVI by 77% (0.92 percentage points). Our results indicate a protective role of UVB radiation in mitigating COVID-19 deaths in India. Furthermore, we find evidence that the monsoon season is associated with a significant reduction in the protective role of UVB radiation.
Our study outlines the roles of the monsoon season and UVB radiation in COVID-19 in India and supports health-related policy decision making in India.
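The conversion in this abstract between percentage-point effects and relative changes can be reproduced with simple arithmetic. A minimal sketch follows; the baseline growth rate is an assumption backed out of the reported figures, not a value stated in the text.

```python
# Back-of-the-envelope check of the reported relative changes.
# baseline_growth_pp is a hypothetical value implied by the figures:
# -1.2 pp corresponding to -33% suggests a baseline near 3.6 pp.
baseline_growth_pp = 3.6

uvi_effect_pp = -1.2  # long-run effect of a permanent unit increase in UVI
relative_change_uvi = uvi_effect_pp / baseline_growth_pp
print(f"UVI effect: {relative_change_uvi:.0%}")  # -33%

monsoon_mitigation_pp = 0.92  # reduction of the UVI effect during monsoon
share_mitigated = monsoon_mitigation_pp / abs(uvi_effect_pp)
print(f"Monsoon mitigation: {share_mitigated:.0%}")  # 77%
```

This is only arithmetic consistency checking of the quoted numbers, not a re-estimation of the Mundlak model.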
Shares of open-end real estate funds are typically traded directly between the investor and the fund management company. However, we provide empirical evidence for the growth of secondary market activities, i.e., the trading of shares on stock exchanges. We find high trading levels in situations where the fund management company suspends the issue or redemption of shares. Shares trade at a discount when the fund management company suspends the redemption, whereas shares trade at a premium when the fund management company suspends the issue. We also find evidence that secondary market trading activity is increasing since German regulation introduced a minimum holding period and a mandatory notice period for open-end real estate funds.
Consider two independent random walks. By chance, there will be spells of association between them where the two processes move in the same direction, or in opposite direction. We compute the probabilities of the length of the longest spell of such random association for a given sample size, and discuss measures like mean and mode of the exact distributions. We observe that long spells (relative to small sample sizes) of random association occur frequently, which explains why nonsense correlation between short independent random walks is the rule rather than the exception. The exact figures are compared with approximations. Our finite sample analysis as well as the approximations rely on two older results popularized by Révész (Stat Pap 31:95–101, 1990, Statistical Papers). Moreover, we consider spells of association between correlated random walks. Approximate probabilities are compared with finite sample Monte Carlo results.
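The phenomenon described in the abstract above can be illustrated with a minimal Monte Carlo sketch, assuming simple symmetric random walks; the function and parameter names are illustrative, not taken from the paper.

```python
import random

def longest_association_spell(n, seed=None):
    """Longest run of consecutive steps in which two independent
    symmetric random walks move in the same direction."""
    rng = random.Random(seed)
    longest = current = 0
    for _ in range(n):
        step_a = 1 if rng.random() < 0.5 else -1
        step_b = 1 if rng.random() < 0.5 else -1
        current = current + 1 if step_a == step_b else 0
        longest = max(longest, current)
    return longest

# Monte Carlo: even for short walks (n = 50), long spells of purely
# random association are common; the mean longest spell is roughly 5.
sims = [longest_association_spell(50, seed=s) for s in range(10_000)]
print(sum(sims) / len(sims))
```

Because each step matches in direction with probability 1/2, the longest spell behaves like the longest run of heads in a sequence of fair coin flips, which grows with the logarithm of the sample size.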
Vehicle registrations have been shown to strongly react to tax reforms aimed at reducing CO2 emissions from passenger cars, but are the effects equally strong for positive and negative tax changes? The literature on asymmetric reactions to price and tax changes has documented asymmetries for everyday goods but has not yet considered durables. We leverage multiple vehicle registration tax (VRT) reforms in Norway and estimate their impact on within car-model substitutions. We estimate stronger effects for cars receiving tax cuts and rebates than for those affected by tax increases. The corresponding estimated elasticity is − 1.99 for VRT decreases and 0.77 for increases. As consumers may also substitute across car models, our estimates represent a lower bound.
This paper uses historical monthly temperature level data for a panel of 114 countries to identify the effects of within year temperature level variability on productivity growth in five different macro regions, i.e., (1) Africa, (2) Asia, (3) Europe, (4) North America and (5) South America. We find two primary results. First, higher intra-annual temperature variability reduces (increases) productivity in Europe and North America (Asia). Second, higher intra-annual temperature variability has no significant effects on productivity in Africa and South America. Additional empirical tests also indicate the following: (1) rising intra-annual temperature variability reduces productivity (even though less significantly) in both tropical and non-tropical regions, (2) inter-annual temperature variability reduces (increases) productivity in North America (Europe) and (3) winter and summer inter-annual temperature variability generates a drop in productivity in both Europe and North America. Taken together, these findings indicate that temperature variability shocks tend to have stronger adverse economic effects among richer economies. In a production economy featuring long-run productivity and temperature volatility shocks, we quantify these negative impacts and find welfare losses of 2.9% (1%) in Europe (North America).
Solving High-Dimensional Dynamic Portfolio Choice Models with Hierarchical B-Splines on Sparse Grids
(2021)
Discrete time dynamic programming to solve dynamic portfolio choice models has three inherent issues. First, the curse of dimensionality prohibits more than a handful of continuous states. Second, in higher dimensions, even regular sparse grid discretizations need too many grid points for sufficiently accurate approximations of the value function. Third, the models usually require continuous control variables, and hence gradient-based optimization with smooth approximations of the value function is necessary to obtain accurate solutions to the optimization problem. For the first time, we enable accurate and fast numerical solutions with gradient-based optimization while still allowing for spatial adaptivity using hierarchical B-splines on sparse grids. When compared to the standard linear bases on sparse grids or finite difference approximations of the gradient, our approach saves an order of magnitude in total computational complexity for a representative dynamic portfolio choice model with varying state space dimensionality, stochastic sample space, and choice variables.
The mobile games business is an ever-increasing sub-sector of the entertainment industry. Due to its high profitability but also high risk and competitive atmosphere, game publishers need to develop strategies that allow them to release new products at a high rate, but without compromising the already short lifespan of the firms' existing games. Successful game publishers must enlarge their user base by continually releasing new and entertaining games, while simultaneously motivating the current user base of existing games to remain active for more extended periods. Since the core-component reuse strategy has proven successful in other software products, this study investigates the advantages and drawbacks of this strategy in mobile games. Drawing on the widely accepted Product Life Cycle concept, the study investigates whether the introduction of a new mobile game built with core-components of an existing mobile game curtails the incumbent's product life cycle. Based on real and granular data on the gaming activity of a popular mobile game, the authors find that by promoting multi-homing (i.e., by smartly interlinking the incumbent and new product with each other so that users start consuming both games in parallel), the core-component reuse strategy can prolong the lifespan of the incumbent game.
Digital wealth and its necessary regulation have gained prominence in recent years. The European Commission has published several documents and policy proposals relating, directly or indirectly, to the data economy. A data economy can be defined as an ecosystem of different types of market players collaborating to ensure that data is accessible and usable in order to extract value from data through, for example, creating a variety of applications with great potential to improve daily life. The value of data can increase from EUR 257 billion (1.85% of EU Gross Domestic Product (GDP)) to EUR 643 billion by 2020 (3.17% of EU GDP), according to the EU Commission. The legal implications of the increasing value of the data economy are clear; hence the need to address the challenges presented by its legal regulation.
The health and genetic data of deceased people are a particularly important asset in the field of biomedical research. However, in practice, using them is complicated, as the legal framework that should regulate their use has not been fully developed yet. The General Data Protection Regulation (GDPR) is not applicable to such data and the Member States have not been able to agree on an alternative regulation. Recently, normative models have been proposed in an attempt to face this issue. The most well-known of these is posthumous medical data donation (PMDD). This proposal supports an opt-in donation system of health data for research purposes. In this article, we argue that PMDD is not a useful model for addressing the issue at hand, as it does not consider that some of these data (the genetic data) may be the personal data of the living relatives of the deceased. Furthermore, we find the reasons supporting an opt-in model less convincing than those that vouch for alternative systems. Indeed, we propose a normative framework that is based on the opt-out system for non-personal data combined with the application of the GDPR to the relatives’ personal data.
The quality of life: protecting non-personal interests and non-personal data in the age of big data
(2021)
Under the current legal paradigm, the rights to privacy and data protection provide natural persons with subjective rights to protect their private interests, such as related to human dignity, individual autonomy and personal freedom. In principle, when data processing is based on non-personal or aggregated data or when such data processes have an impact on societal, rather than individual interests, citizens cannot rely on these rights. Although this legal paradigm has worked well for decades, it is increasingly put under pressure because Big Data processes are typically based on indiscriminate rather than targeted data collection, because the high volumes of data are processed on an aggregated rather than a personal level and because the policies and decisions based on the statistical correlations found through algorithmic analytics are mostly addressed at large groups or society as a whole rather than specific individuals. This means that large parts of the data-driven environment are currently left unregulated and that individuals are often unable to rely on their fundamental rights when addressing the more systemic effects of Big Data processes. This article will discuss how this tension might be relieved by turning to the notion ‘quality of life’, which has the potential of becoming the new standard for the European Court of Human Rights (ECtHR) when dealing with privacy-related cases.
Ownership of databases: personal data protection and intellectual property rights on databases
(2021)
When we think about initiatives on access to and reuse of data, we must consider both the European Intellectual Property Law and the General Data Protection Regulation (GDPR). The first one provides a special intellectual property (IP) right – the sui generis right – for those makers that made a substantial investment when creating the database, whether it contains personal or non-personal data. That substantial investment can be made by just one person, but, in many cases, it is the result of the activities of many people and/or some undertakings processing and aggregating data. In the modern digital economy, data are being dubbed the ‘new oil’ and the sui generis right might be considered a right to control any access to the database, thus having an undeniable relevance. Besides, there are still important inconsistencies between IP Law and the GDPR, which must be removed by the European legislator. The genuine and free consent of the data subject for the use of his/her data must remain the first step of the legal analysis.
Commercialization of consumers’ personal data in the digital economy poses serious, both conceptual and practical, challenges to the traditional approach of European Union (EU) Consumer Law. This article argues that mass-spread, automated, algorithmic decision-making casts doubt on the foundational paradigm of EU consumer law: consent and autonomy. Moreover, it poses threats of discrimination and undermining of consumer privacy. It is argued that the recent legislative reaction by the EU Commission, in the form of the ‘New Deal for Consumers’, was a step in the right direction, but fell short due to its continued reliance on consent, autonomy and failure to adequately protect consumers from indirect discrimination. It is posited that a focus on creating a contracting landscape where the consumer may be properly informed in material respects is required, which in turn necessitates blending the approaches of competition, consumer protection and data protection laws.
What are the effects of the GDPR on consumer apps? This article presents an analysis of app behavior before and after the regulatory change in data protection in Europe. Based on long-term data collection, we present differences in app permission use and expressed user concerns and discuss their implications. In May 2018, the General Data Protection Regulation (GDPR) substantially changed the data protection obligations of the information industry toward European Union users. One should expect to find changes in code, program behavior and data collection activities. To investigate this expectation, we analyzed data on Android apps' requests for and use of permissions to access sensitive groups of data on smartphones, and collected user reviews. Our data shows an overall reduction both of permissions used and of expressed user concern. However, in some areas apps have increased access or user complaints, and in addition, many apps carry several unused access privileges.
Public kindergarten, maternal labor supply, and earnings in the longer run: too little too late?
(2021)
By facilitating early re-entry to the labor market after childbirth, public kindergarten might positively affect maternal human capital and labor market outcomes: Are such effects long-lasting? Can we rely on between-individuals differences in quarter of birth to identify them? I isolate the effects of interest from spurious associations through difference-in-difference, exploiting across-states and over-time variation in public kindergarten eligibility regulations in the United States. The estimates suggest a very limited impact in the first year, and no longer-run impacts. Even in states where it does not affect kindergarten eligibility, quarter of birth is strongly and significantly correlated with maternal outcomes.
This article investigates the roles of psychological biases for deviations between subjective survival beliefs (SSBs) and objective survival probabilities. We model these deviations through age-dependent inverse S-shaped probability weighting functions. Our estimates suggest that implied measures for cognitive weakness increase and relative optimism decrease with age. Direct measures of cognitive weakness and optimism share these trends. Our regression analyses confirm that these factors play strong quantitative roles in the formation of SSBs. Our main finding is that cognitive weakness instead of optimism becomes with age an increasingly important contributor to the well-documented overestimation of survival chances in old age.
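For illustration of the kind of inverse S-shaped probability weighting function used in this literature, a widely cited one-parameter form is the Tversky-Kahneman (1992) specification; the paper's actual age-dependent specification is not given in the abstract, so this is only a generic sketch.

```python
def tk_weight(p, gamma=0.6):
    """Tversky-Kahneman inverse S-shaped probability weighting:
    for gamma < 1, small probabilities are overweighted and
    large probabilities are underweighted."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

# Small survival probabilities are inflated, large ones deflated --
# the qualitative pattern used to model subjective survival beliefs.
for p in (0.05, 0.50, 0.95):
    print(f"w({p}) = {tk_weight(p):.3f}")
```

In the paper's setting, letting the curvature parameter vary with age is what lets the model separate cognitive weakness (flattening of the curve) from optimism (its elevation); the single fixed `gamma` here is purely illustrative.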
The term structure of interest rates is crucial for the transmission of monetary policy to financial markets and the macroeconomy. Disentangling the impact of monetary policy on the components of interest rates, expected short rates, and term premia is essential to understanding this channel. To accomplish this, we provide a quantitative structural model with endogenous, time-varying term premia that are consistent with empirical findings. News about future policy, in contrast to unexpected policy shocks, has quantitatively significant effects on term premia along the entire term structure. This provides a plausible explanation for partly contradictory estimates in the empirical literature.
Contemporary information systems make widespread use of artificial intelligence (AI). While AI offers various benefits, it can also be subject to systematic errors, whereby people from certain groups (defined by gender, age, or other sensitive attributes) experience disparate outcomes. In many AI applications, disparate outcomes confront businesses and organizations with legal and reputational risks. To address these, technologies for so-called “AI fairness” have been developed, by which AI is adapted such that mathematical constraints for fairness are fulfilled. However, the financial costs of AI fairness are unclear. Therefore, the authors develop AI fairness for a real-world use case from e-commerce, where coupons are allocated according to clickstream sessions. In their setting, the authors find that AI fairness successfully manages to adhere to fairness requirements, while reducing the overall prediction performance only slightly. However, they find that AI fairness also results in an increase in financial cost. Thus, in this way the paper’s findings contribute to designing information systems on the basis of AI fairness.
Strict environmental regulation may deter foreign direct investment (FDI). The paper develops the hypothesis that regulation predominantly discourages FDI that is conducted as Greenfield investment rather than mergers and acquisitions (M&A). The hypothesis is tested with German firm-level FDI data. Empirically, stricter regulation reduces new Greenfield projects in polluting industries, but indeed has a much smaller impact on the number of M&As. This significant difference is compatible with the fact that existing operations often benefit from grandfathering rules, which provide softer regulation for pre-existing plants, and with the expectation that for M&As part of the regulation is capitalized in the purchase price. The heterogeneous effects help explain mixed results in previous studies that have neglected the mode of entry.
We analyze the extent to which individual audit partners influence the audited narrative disclosures in their clients’ financial reports. Using a sample of 3,281,423 private and public client firm-pairs, we find that the similarity among audited narrative disclosures is higher when two client firms share the same audit partner. Specifically, we find that the wording similarity of management reports (notes) increases by 30 (48) percent, the content similarity by 29 (49) percent, and the structure similarity by 48 (121) percent. Moreover, we find that audit partners in particular are relevant for their clients’ narrative disclosures because the increase in narrative disclosure similarity when sharing the same audit partner is nine (four) times greater than when sharing the same audit firm (audit office). We show that this influence of audit partners goes beyond adding boilerplate statements and, using novel field evidence, we shed light on the underlying mechanisms. Our findings are economically relevant because a stronger involvement of audit partners with their clients’ narratives is associated with a higher quality of narrative disclosures, which helps users better predict the future profitability of client firms.
This study simulates three income tax scenarios in a Mirrleesian setting for 24 EU countries using data from the 2014 Structure of Earnings Survey. In scenario 1, each country individually maximizes its own welfare (benchmark). In scenarios 2 and 3, total welfare in the EU is maximized over a common budget constraint. Unlike scenario 2, the social planner of scenario 3 differentiates taxes by country of residence. If a common tax and transfer system were implemented in the EU, countries with a relatively higher mean wage rate—particularly those in Western and some of the Northern European countries—would transfer resources to the others. Scenario 2 implies increased labor distortions for almost all countries and, hence, leads to a contraction in total output. Scenario 3 produces higher (lower) marginal taxes for high- (low-) mean countries compared to the benchmark. The change in total output depends on the income effects on labor supply. Overall, total welfare is higher for the scenarios involving a European tax and transfer system despite more than two thirds of all the agents becoming worse off relative to the benchmark. A politically more feasible integrated tax system improves the well-being of almost half of all agents in the EU but considerably reduces the aggregate welfare benefits.
Device-to-device (D2D) communication is an innovative solution for improving wireless network performance to efficiently handle the ever-increasing mobile data traffic. Communication takes place directly between two devices that are in each other’s transmission range. So far, research has focused on the technical challenges of implementing this technology and assumes a user’s general willingness to participate as a forwarder in this technology. However, this simplifying assumption is not realistic, as willingness to participate in D2D communication can vary depending on the user. In this work, we consider the scenario in which a user can act as a forwarder for a receiver who is not reached, or only insufficiently reached, by the base station and accordingly has no or only a poor Internet connection. We take a user-centric approach and investigate the willingness to provide an Internet connection as a forwarder. We are the first to investigate user preferences for D2D communication using a choice-based conjoint analysis. Our results, based on a representative sample of potential users (N=181), show that the social relationship between the potential forwarder and the receiver has the greatest impact on the potential forwarder’s decision to provide an Internet connection to the receiver, accepting sacrifices in terms of additional battery consumption and reduced own service performance. In a detailed segment analysis, we observe significant preference differences depending on smartphone usage behavior and user age. Taking the corresponding preferences into account when matching forwarders and receivers can further increase technology adoption.
We analyze the joint dynamics of prices, productivity, and employment across firms, building a dynamic equilibrium model of heterogeneous firms that compete for workers and customers in frictional labor and product markets. Using panel data on prices and output for German manufacturing firms, we calibrate the model to evaluate the quantitative contributions of productivity and demand to labor market outcomes. Product market frictions decisively dampen firms' employment adjustments to productivity shocks. We further analyze the impact of aggregate shocks to the first and second moments of productivity and demand and relate them to business-cycle features in our data.
When requesting a web-based service, users often fail to set the website’s privacy settings according to their own privacy preferences. Being overwhelmed by the choice of settings, lacking knowledge of the related technologies, or being unaware of their own privacy preferences are just some of the reasons why users tend to struggle. Privacy setting prediction tools are particularly well suited to address these problems: they aim to lower the burden of setting privacy preferences in line with the owner’s actual preferences. To be in line with the increased demand for explainability and interpretability arising from regulatory obligations – such as the General Data Protection Regulation (GDPR) in Europe – this paper introduces an explainable model for default privacy setting prediction. Compared to previous work, we present an improved feature selection, increased interpretability of each step in the model design, and enhanced evaluation metrics to better identify weaknesses in the model’s design before it goes into production. As a result, we aim to provide an explainable and transparent tool for default privacy setting prediction which users easily understand and are therefore more likely to use.
We analyze limit order book resiliency following liquidity shocks initiated by large market orders. Based on a unique data set, we investigate whether high‐frequency traders are involved in replenishing the order book. Therefore, we relate the net liquidity provision of high‐frequency traders, algorithmic traders, and human traders around these market impact events to order book resiliency. Although all groups of traders react, our results show that only high‐frequency traders reduce the spread within the first seconds after the market impact event. Order book depth replenishment, however, takes significantly longer and is mainly accomplished by human traders’ liquidity provision.
Privacy concerns as well as trust and risk beliefs are important factors that can influence users’ decision to use a service. One popular model that integrates these factors relates the Internet Users Information Privacy Concerns (IUIPC) construct to trust and risk beliefs. However, studies have not yet applied it to a privacy-enhancing technology (PET) such as an anonymization service. Therefore, we conducted a survey among 416 users of the anonymization service JonDonym [1] and collected 141 complete questionnaires. We rely on the IUIPC construct and the related trust-risk model and show that it needs to be adapted for the case of PETs. In addition, we extend the original causal model by including trust beliefs in the anonymization service provider and show that they have a significant effect on the actual use behavior of the PET.
In the upcoming years, the Internet of Things (IoT) will enrich daily life. The combination of artificial intelligence (AI) and highly interoperable systems will bring context-sensitive multi-domain services to reality. This paper describes a concept for an AI-based smart living platform with openHAB, a smart home middleware, and the Web of Things (WoT) as key components of our approach. The platform concept considers different stakeholders, i.e. the housing industry, service providers, and tenants. These activities are part of the ForeSight project, an AI-driven, context-sensitive smart living platform.
This article studies whether people want to control what information on their own past pro-social behavior is revealed to others. Participants are assigned a color that depends on their past pro-social behavior. They can spend money to manipulate the probability with which their color is revealed to another participant. The data show that participants are more likely to reveal colors with more favorable informational content. This pattern is not found in a control treatment in which colors are randomly assigned, thus revealing nothing about past pro-social behavior. Regression analysis confirms these findings, also when controlling for past pro-social behavior. These results complement the existing empirical evidence, confirming that people strategically and, therefore, consciously manipulate their social image.
Optimal investment decisions by institutional investors require accurate predictions with respect to the development of stock markets. Motivated by previous research that revealed the unsatisfactory performance of existing stock market prediction models, this study proposes a novel prediction approach. Our proposed system combines Artificial Intelligence (AI) with data from Virtual Investment Communities (VICs) and leverages VICs’ ability to support the process of predicting stock markets. An empirical study with two different models using real data shows the potential of the AI-based system with VICs information as an instrument for stock market predictions. VICs can be a valuable addition but our results indicate that this type of data is only helpful in certain market phases.
Participation in further education is a central success factor for economic growth and societal as well as individual development. This is especially true today because in most industrialized countries, labor markets and work processes are changing rapidly. Data on further education, however, show that not everybody participates and that different social groups participate to different degrees. Activities in continuous vocational education and training (CVET) are mainly differentiated as formal, non-formal and informal CVET, whereby further differences between offers of non-formal and informal CVET are seldom elaborated. Furthermore, reasons for participation or non-participation are often neglected. In this study, we therefore analyze and compare predictors for participation in both forms of CVET, namely, non-formal and informal. To learn more about the reasons for participation, we focus on the individual perspective of employees (individual factors, job-related factors, and learning biography) and additionally integrate institutional characteristics (workplace and company-based characteristics). The results mainly show that non-formal CVET is still strongly influenced by institutional settings. In the case of informal CVET, on the other hand, the learning biography plays a central role.
Learning to fly through informational turbulence: critical thinking and the case of the minimum wage
(2020)
The paper addresses online reasoning and information processing with respect to a much-debated issue: the pros and cons of the minimum wage. As with all controversial issues, one can easily remain in a self-reinforcing bubble once one has taken sides, and immunize oneself against criticism. Paradoxically, the more information we have at our disposal, the easier this gets (Roetzel, 2019). The only (and possibly universal) antidote seems to be “critical thinking” (Ennis, 1987, 2011). However, critical thinking is a very broad concept, purported to include diverse kinds of information processing, and it is also thought to be content-specific. Therefore, we aim to address both the understanding of content knowledge and reasoning processes. We pursue three goals with this paper: First, we conduct a conceptual analysis of the learning content and of reasoning patterns for and against the minimum wage. Second, we explicate an inferential framework that can be applied to processes of critical thinking. Third, teaching strategies are discussed to support reasoning processes and to promote critical thinking skills.
Pokémon Go is one of the most successful mobile games of all time. Millions played and still play this mobile augmented reality (AR) application, although severe privacy issues are pervasive in the app due to its use of several data sources such as location and camera. In general, individuals regularly use online services and mobile apps although they might know that the use is associated with high privacy risks. This seemingly contradictory behavior of users is analyzed from a variety of different perspectives in the information systems domain. One of these perspectives evaluates privacy-related decision-making processes based on concepts from behavioral economics. We follow this line of work by empirically testing one exemplary extraneous factor within the “enhanced APCO model” (antecedents–privacy concerns–outcome). Specific empirical tests on such biases are rare in the literature, which is why we propose and empirically analyze the extraneous influence of a positivity bias. In our case, we hypothesize that the bias is induced by childhood brand nostalgia towards the Pokémon franchise. We analyze our proposition in the context of an online survey with 418 active players of the game. Our results indicate that childhood brand nostalgia influences the privacy calculus by exerting a large effect on the benefits within the trade-off and, therefore, causing a higher use frequency. Our work shows two important implications. First, the behavioral economics perspective on privacy provides additional insights relative to previous research. However, the effects of several other biases and heuristics have to be tested in future work. Second, relying on nostalgia represents an important, but also double-edged, instrument for practitioners to market new services and applications.
Inflation is a construct. It is perceived differently by different actors, partly because their consumption baskets differ, partly because they form expectations differently. This article discusses the heterogeneity of inflation and of its perception, and what this implies for the target variable of central bank policy.
Prior studies indicate the protective role of Ultraviolet-B (UVB) radiation in human health, mediated by vitamin D synthesis. In this observational study, we empirically outline a negative association of UVB radiation, as measured by the ultraviolet index (UVI), with the number of COVID-19 deaths. We apply a fixed-effect log-linear regression model to a panel dataset of 152 countries over 108 days (n = 6524). We use the cumulative number of COVID-19 deaths and the case-fatality rate (CFR) as the main dependent variables and isolate the UVI effect from potential confounding factors. After controlling for time-constant and time-varying factors, we find that a permanent unit increase in UVI is associated with a decline of 1.2 percentage points in the daily growth rate of cumulative COVID-19 deaths [p < 0.01] and a decline of 1.0 percentage points in the daily growth rate of the CFR [p < 0.05]. These results represent a significant percentage reduction in the daily growth rates of cumulative COVID-19 deaths (− 12%) and of the CFR (− 38%). We find a significant negative association between UVI and COVID-19 deaths, indicating evidence of the protective role of UVB in mitigating COVID-19 deaths. If confirmed in clinical studies, the possibility of mitigating COVID-19 deaths via sensible sunlight exposure or vitamin D intervention would be very attractive.
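The fixed-effect log-linear specification described above can be sketched as a two-way within transformation (demeaning by country and by day) followed by OLS on the demeaned variables. The following is a minimal illustration on synthetic data; all variable names and numbers are hypothetical, and sequential demeaning is exact only for a balanced panel:

```python
import numpy as np

def within_demean(x, groups):
    """Subtract each group's mean from its observations (within transformation)."""
    out = x.astype(float).copy()
    for g in np.unique(groups):
        mask = groups == g
        out[mask] -= out[mask].mean()
    return out

def twoway_fe_slope(y, x, unit, time):
    """Slope of y on x after removing unit and time fixed effects.
    Sequential demeaning by unit, then time, is exact for a balanced panel."""
    y_t = within_demean(within_demean(y, unit), time)
    x_t = within_demean(within_demean(x, unit), time)
    return float(x_t @ y_t) / float(x_t @ x_t)

# synthetic balanced panel: growth rate = country effect + day effect + beta * UVI
rng = np.random.default_rng(0)
n_c, n_t, beta = 30, 40, -0.012            # hypothetical true UVI effect
unit = np.repeat(np.arange(n_c), n_t)      # country index per observation
time = np.tile(np.arange(n_t), n_c)        # day index per observation
uvi = rng.uniform(0, 10, n_c * n_t)
y = (rng.normal(size=n_c)[unit] + rng.normal(size=n_t)[time]
     + beta * uvi + rng.normal(scale=1e-3, size=n_c * n_t))
beta_hat = twoway_fe_slope(y, uvi, unit, time)   # recovers beta
```

The point of the within transformation is that the country and day effects drop out exactly, so the slope on the demeaned UVI isolates the effect of interest.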
We model the decisions of young individuals to stay in school or drop out and engage in criminal activities. We build on the literature on human capital and crime engagement and use the framework of Banerjee (1993) that assumes that the information needed to engage in crime arrives in the form of a rumour and that individuals update their beliefs about the profitability of crime relative to education. These assumptions allow us to study the effect of social interactions on crime. In our model, we investigate informational spillovers from the actions of talented students to less talented students. We show that policies that decrease the cost of education for talented students may increase the vulnerability of less talented students to crime. The effect is exacerbated when students do not fully understand the underlying learning dynamics.
This article discusses the counterpart of interactive machine learning, i.e., human learning while being in the loop in a human-machine collaboration. For such cases we propose the use of a Contradiction Matrix to assess the overlap and the contradictions of human and machine predictions. We show in a small-scale user study with experts in the area of pneumology (1) that machine-learning based systems can classify X-rays with respect to diseases with a meaningful accuracy, (2) that humans partly use contradictions to reconsider their initial diagnosis, and (3) that this leads to a higher overlap between human and machine diagnoses at the end of the collaboration situation. We argue that disclosure of information on diagnosis uncertainty can be beneficial to make the human expert reconsider her or his initial assessment, which may ultimately result in a deliberate agreement. In the light of the observations from our project, it becomes apparent that collaborative learning in such a human-in-the-loop scenario could lead to mutual benefits for both human learning and interactive machine learning. Bearing the differences in reasoning and learning processes of humans and intelligent systems in mind, we argue that interdisciplinary research teams have the best chances at tackling this undertaking and generating valuable insights.
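The Contradiction Matrix described above is essentially a cross-tabulation of human versus machine predictions, whose off-diagonal cells hold the contradictions that can prompt an expert to reconsider. A minimal sketch (the label set, data, and function names are illustrative, not the paper's):

```python
import numpy as np

def contradiction_matrix(human, machine, labels):
    """Rows: human diagnoses, columns: machine diagnoses.
    Off-diagonal cells count the contradictions worth revisiting."""
    idx = {lab: i for i, lab in enumerate(labels)}
    m = np.zeros((len(labels), len(labels)), dtype=int)
    for h, a in zip(human, machine):
        m[idx[h], idx[a]] += 1
    return m

labels  = ["healthy", "pneumonia"]          # hypothetical label set
human   = ["healthy", "pneumonia", "healthy", "pneumonia"]
machine = ["healthy", "healthy",   "healthy", "pneumonia"]
cm = contradiction_matrix(human, machine, labels)
agreements     = int(np.trace(cm))          # cases where both concur
contradictions = int(cm.sum() - np.trace(cm))
```

In a collaborative setting, the off-diagonal cells would be surfaced to the expert together with the machine's uncertainty, rather than silently overriding either party.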
Information asymmetry and its implications in online purchasing behaviour: a country case study
(2020)
The objective of this study is to analyse how certain variables in the online market affect the decision-making trajectory and the actions taken to reduce the information asymmetry faced in online purchasing. A survey and an observation study were conducted to understand the behavior and perceptions of online buyers toward the information provided on online platforms. Descriptive and correlation analyses were employed to evaluate the data collected and to test the correlations between the variables of the research model. The results show that most participants take for granted that sellers have more information than they do when entering into a transaction agreement, which makes them feel inferior to the superior power sellers possess in such interactions. As a consequence, they prefer traditional markets; however, multiple sources of information such as reviews and ratings emerge as an alternative way of reducing the perceived information asymmetry.
With the rapid growth of technology in recent years, we are surrounded by or even dependent on the use of technological devices such as smartphones, as they are now an indispensable part of our lives. Smartphone applications (apps) provide a wide range of utilities such as navigation, entertainment, fitness, etc. To provide such context-sensitive services to users, apps need to access users' data including sensitive ones, which in turn can potentially lead to privacy invasions. To protect users against potential privacy invasions in such a vulnerable ecosystem, legislation such as the European Union General Data Protection Regulation (EU GDPR) demands best privacy practices. Therefore, app developers are required to make their apps compatible with legal privacy principles enforced by law. However, it is not easy for app developers to comprehend purely legal principles and understand what needs to be implemented. Similarly, bridging the gap between legal principles and technical implementations, i.e., understanding how legal principles need to be implemented, is another barrier to developing privacy-friendly apps. To this end, this paper proposes a privacy and security design guide catalog for app developers to assist them in understanding and adopting the most relevant privacy and security principles in the context of smartphone apps. The presented catalog is aimed at mapping the identified legal principles to practical privacy and security solutions that developers can implement to ensure enhanced privacy aligned with existing legislation. Through a case study, we confirm that there is a significant gap between what developers do in reality and what they promise to do. This paper provides researchers and developers of privacy-related technicalities with an overview of the characteristics of the privacy requirements that need to be implemented in smartphone ecosystems, on which they can base their work.
One striking observation in Parkinson’s disease (PD) is the remarkable gender difference in incidence and prevalence of the disease. Data on gender differences with regard to disease onset, motor and non-motor symptoms, and dopaminergic medication are limited. Furthermore, whether estrogen status affects disease onset and progression of PD is controversially discussed. In this retrospective single-center study, we extracted clinical data of 226 ambulatory PD patients and compared age of disease onset, disease stage, motor impairment, non-motor symptoms, and dopaminergic medication between genders. We applied a matched-pairs design to adjust for age and disease duration. To determine the effect of estrogen-related reproductive factors, including number of children, age at menarche, and age at menopause, on the age of onset, we applied a standardized questionnaire and performed a regression analysis. The male-to-female ratio in the present PD cohort was 1.9:1 (147 men vs. 79 women). Male patients showed greater motor impairment than female patients. The levodopa equivalent daily dose was 18.9% higher in male patients than in female patients. Matched-pairs analysis confirmed the increased dose of dopaminergic medication in male patients. No differences were observed in age of onset, type of medication, and non-motor symptoms between both groups. Female reproductive factors, including number of children, age at menarche, and age at menopause, were positively associated with a delay of disease onset of up to 30 months. The disease-modifying role of estrogen-related outcome measures warrants further clinical and experimental studies targeting gender differences, specifically hormone-dependent pathways in PD.
Making agriculture sustainable is a global challenge. In the European Union (EU), the Common Agricultural Policy (CAP) is failing with respect to biodiversity, climate, soil, land degradation as well as socio‐economic challenges.
The European Commission's proposal for a CAP post‐2020 provides scope for enhanced sustainability. However, it also allows Member States to choose low‐ambition implementation pathways. It therefore remains essential to address citizens' demands for sustainable agriculture and rectify systemic weaknesses in the CAP, using the full breadth of available scientific evidence and knowledge.
Concerned about current attempts to dilute the environmental ambition of the future CAP, and the lack of concrete proposals for improving the CAP in the draft of the European Green Deal, we call on the European Parliament, Council and Commission to adopt 10 urgent action points for delivering sustainable food production, biodiversity conservation and climate mitigation.
Knowledge is available to help move towards evidence‐based, sustainable European agriculture that can benefit people, nature and their joint futures.
The statements made in this article have the broad support of the scientific community, as expressed by more than 3,600 signatories to the preprint version of this manuscript. The list can be found here (https://doi.org/10.5281/zenodo.3685632).
A free Plain Language Summary can be found within the Supporting Information of this article.
Perspectives on participation in continuous vocational education training - an interview study
(2020)
In European industrialized countries, a large number of companies in the healthcare, hotel, and catering sectors, as well as in the technology sector, are affected by demographic, political, and technological developments resulting in a greater need for skilled workers alongside a simultaneous shortage of them (CEDEFOP, 2015, 2016). Consequently, employers have to address workers who have so far not been taken into account, such as low-skilled workers, workers returning from a career break, people with a migrant background, older people, and jobseekers, and train them in order to guarantee the professionalization of this workforce (Festing and Harsch, 2018). Continuing vocational education and training (CVET) is seen as an indispensable tool because it has advantages for both employers and employees: it helps to increase the productivity of companies (Barrett and O’Connell, 2001), to prevent the widening of socioeconomic disparities (Dieckhoff, 2007), and to open up career opportunities for the workforce (Rubenson and Desjardins, 2009). However, participation rates in CVET seem to differ depending on institutional factors (such as sector and size of the company) and individual characteristics (such as qualification level, migration background, age and time of absence from work) (e.g., Rubenson and Desjardins, 2009; Wiseman and Parry, 2017). In contrast to previous research, our study aims to provide a holistic view of reasons for and against CVET, combining the different perspectives of employers and (potential) employees. The analysis of reasons and barriers was carried out based on semi-structured interviews. Fifty-seven employers, 73 employees, and 42 jobseekers (potential employees) from the sectors retail, healthcare and social services, hotels and catering, and technology were interviewed. Results point to considerable differences in the reasons and barriers mentioned by the disadvantaged groups.
These differences are particularly pronounced between employees on the one side and employers as well as jobseekers on the other, while jobseekers' reasons to attend CVET are more similar to those of employers. The results can be used to tailor CVET more closely to the needs of (potential) employees and thus strengthen both the qualification and career opportunities of (potential) employees and the competitiveness and productivity of companies.
Diversity and psychological health issues at the workplace are pressing issues in today’s organizations. However, research linking the two fields is scant. To bridge this gap, drawing on team faultline research, social categorization theory, and the job demands-resources model, we propose that perceiving one’s team as fragmented into subgroups increases strain. We further argue that this relationship is mediated by task conflict and relationship conflict and that it is moderated by psychological empowerment and task interdependence. Multilevel structural equation models on a two-wave sample consisting of 536 participants from 107 work teams across various industries and work contexts partially supported the hypotheses: task conflict did indeed mediate the positive relationship between perceived subgroups and emotional exhaustion while relationship conflict did not; effects on stress symptoms were absent. Moreover, contrary to our expectations, neither empowerment nor task interdependence moderated the mediation. Results indicate that team diversity can constitute a job demand that can affect psychological health. Focusing on the mediating role of task conflict, we offer a preliminary process model to guide future research at the crossroads of diversity and psychological health at work.
Security has become one of the primary factors that cloud customers consider when they select a cloud provider for migrating their data and applications into the Cloud. To this end, the Cloud Security Alliance (CSA) has provided the Consensus Assessment Questionnaire (CAIQ), which consists of a set of questions that providers should answer to document which security controls their cloud offerings support. In this paper, we adopted an empirical approach to investigate whether the CAIQ facilitates the comparison and ranking of the security offered by competitive cloud providers. We conducted an empirical study to investigate if comparing and ranking the security posture of a cloud provider based on CAIQ’s answers is feasible in practice. Since the study revealed that manually comparing and ranking cloud providers based on the CAIQ is too time-consuming, we designed an approach that semi-automates the selection of cloud providers based on CAIQ. The approach uses the providers’ answers to the CAIQ to assign a value to the different security capabilities of cloud providers. Tenants have to prioritize their security requirements. With that input, our approach uses an Analytical Hierarchy Process (AHP) to rank the providers’ security based on their capabilities and the tenants’ requirements. Our implementation shows that this approach is computationally feasible and once the providers’ answers to the CAIQ are assessed, they can be used for multiple CSP selections. To the best of our knowledge this is the first approach for cloud provider selection that provides a way to assess the security posture of a cloud provider in practice.
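As a rough sketch of the selection step, the tenant's pairwise priorities over security requirements can be turned into weights via the principal eigenvector (the standard AHP computation) and combined with per-capability provider scores. The capability names, scores, and two-criterion example below are illustrative, not the paper's data:

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from a pairwise-comparison matrix
    via the principal eigenvector (standard AHP)."""
    vals, vecs = np.linalg.eig(np.asarray(pairwise, dtype=float))
    w = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return w / w.sum()

def rank_providers(requirement_pairwise, capability_scores):
    """Rank providers by the weighted sum of their capability scores
    (in the paper's setting, scores would come from assessed CAIQ answers)."""
    w = ahp_weights(requirement_pairwise)
    totals = {p: float(np.dot(w, s)) for p, s in capability_scores.items()}
    return sorted(totals.items(), key=lambda kv: -kv[1])

# tenant judges "encryption" three times as important as "auditability"
pairwise = [[1, 3], [1/3, 1]]
scores = {"ProviderA": [0.9, 0.2], "ProviderB": [0.3, 0.8]}
ranking = rank_providers(pairwise, scores)   # ProviderA ranks first
```

One appealing property of this design is that the provider-side assessment is done once per provider, while the tenant-side weights can be recomputed cheaply for each new set of requirements.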
After the events in Gaggenau, one reads and hears everywhere about the "legal difficulties" that supposedly stand in the way of banning speaking appearances by foreign politicians. In Gaggenau, the reasoning was genuinely one of public safety law: far too many people, far too small a parking lot, chaos pre-programmed. Much the same now in Cologne: too much effort, too little notice, chaos pre-programmed. Let us assume for the moment that these were sound justifications in the individual case. The question then immediately arises: what can one do if one wants to prohibit the appearance of a foreign representative even though the parking lot is large enough and the police are sufficiently prepared against riots? ...
Given that the literature dealing with the economic performance of agri-food businesses in general, and with the German agricultural sector in particular, mainly draws on strictly agriculture-related theory to explain the economic success of agri-food businesses, the present paper aims to extend existing discourses to further areas of thought. Consequently, the characteristics a) increased size of agribusinesses, b) pull strategies, c) the development of new markets, and d) a focus on the processing industry, which all correspond to the current picture of the German agricultural sector and are considered significantly responsible for its recently outpacing the French agri-food sector, are first explained in their success against the background of mainly non-agricultural literature. In doing so, helpful and rather unnoticed perspectives can be contributed to existing discourses. Second, the paper presents scatter plots portraying the correlations between a) the added value of agriculture and the regular labor force, b) the added value of agriculture and the number of agricultural holdings, and c) the added value of agriculture and the number of enterprises concerning milk consumption. Corresponding scatter plots showing different developments in Germany and France can be related to the findings of the first part of the paper and likewise open new perspectives in existing discourses.
Risk culture during the last 2000 years - from an aleatory society to the illusion of risk control
(2017)
The culture of risk is 2000 years old, although the term “risk” developed much later. The culture of merchants making decisions under uncertainty and taking individual responsibility for the uncertain future started with the Roman “Aleatory Society”, continued with medieval sea merchants, who did business “ad risicum et fortunam”, and extended to the culture of entrepreneurs in times of industrialisation and dynamic economic change in the 18th and 19th centuries. For all long-term commercial relationships, the culture of honourable merchants with personal decision-making and individual responsibility worked well. The successful development of science, statistics, and engineering within the last 100 years led to the conjecture that man can “construct” an economic system with a pre-defined “clockwork” behaviour. Since probability distributions could be calculated ex post, the illusion of controlling risk ex ante became a pattern in business and banking. Based on the recent experience of the financial crisis, a “risk culture” should recognize that human “Strength of Knowledge” is limited and that the “unknown unknown” can materialise. As all decisions and all commercial agreements are made under uncertainty, the culture of honourable merchants is key to achieving trust in long-term economic relations, with individual responsibility, flexibility to adapt, and resilience against the unknown.
The competitiveness of the German economy faces enormous challenges. Traditionally strong sectors such as the automotive industry and mechanical engineering are in a phase of upheaval in the face of disruptive change driven by new technologies, the fight against climate change, and altered regulatory frameworks. Numerous industries are being transformed into “smart industries” through the use of artificial intelligence. At the same time, competence in cross-cutting technologies such as cloud computing and cyber security is gaining importance, since these are what make the effective use of artificial intelligence possible in the first place. An analysis of the competitive position of the German economy shows that in some future fields there is considerable catching up to do.
It is the questions that matter: "Where does man come from? Where does he want to go? And why on earth did he not stay where he was?" As a questioner, the master of circular searches for meaning found his best role and thereby created the post-heroic type of human being who, with relief, finds his discontent in the ambiguity of the world: in the fact that pacifists defend wars, that the extra-parliamentarians found a party, that the conservatives produce the more interesting newspapers, and that comedy has become the most effective weapon against stupidity and pain. In passing, Matthias Beltz not only armed his questions with worldly wisdom and associatively piled-up acumen, questions in which daring answers sprout their vital seed, but also sowed mistrust of politically correct, parroted, and glib answers. ...
We investigate the default probability, recovery rates and loss distribution of a portfolio of securitised loans granted to Italian small and medium enterprises (SMEs). To this end, we use loan-level data provided by the European DataWarehouse platform and employ a logistic regression to estimate the company default probability. We combine the loan-level default probabilities and recovery rates to estimate the loss distribution of the underlying assets. We find that securitised bank loans are less risky than the average bank loan to small and medium enterprises.
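The two-step approach described in this abstract (a logistic regression for the probability of default, then a simulated portfolio loss distribution combining default probabilities with exposures and recovery rates) can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's actual European DataWarehouse dataset or model specification; the feature set, exposure amounts and recovery rates are all assumptions for demonstration purposes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# --- Step 0: synthetic loan-level data (assumption; stands in for the
# European DataWarehouse loan-level dataset used in the paper) ---
n_loans = 5_000
X = rng.normal(size=(n_loans, 2))        # e.g. borrower leverage, profitability
true_beta = np.array([1.2, -0.8])
pd_true = 1.0 / (1.0 + np.exp(-(-2.5 + X @ true_beta)))
default = rng.random(n_loans) < pd_true  # observed default indicator

# --- Step 1: logistic regression for the probability of default (PD) ---
model = LogisticRegression().fit(X, default)
pd_hat = model.predict_proba(X)[:, 1]

# --- Step 2: combine loan-level PDs with assumed exposures and recovery
# rates to simulate the portfolio loss distribution (Monte Carlo,
# independent defaults for simplicity) ---
exposure = rng.uniform(50_000, 500_000, n_loans)  # exposure at default (assumed)
recovery = rng.uniform(0.3, 0.7, n_loans)         # recovery rate (assumed)
lgd = 1.0 - recovery                              # loss given default

n_sims = 2_000
losses = np.empty(n_sims)
for i in range(n_sims):
    defaulted = rng.random(n_loans) < pd_hat
    losses[i] = np.sum(exposure[defaulted] * lgd[defaulted])

expected_loss = losses.mean()            # mean portfolio loss
var_99 = np.quantile(losses, 0.99)       # 99% quantile of the loss distribution
```

A real application would add loan- and borrower-level covariates, out-of-sample validation of the PD model, and a dependence structure (e.g. a one-factor model) across defaults rather than the independence assumed here.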
Advanced machine learning has achieved extraordinary success in recent years. For “active” operational risk management, going beyond the ex post analysis of measured data, machine learning could provide help beyond the regime of traditional statistical analysis when it comes to the “known unknown” or even the “unknown unknown.” While machine learning has been tested successfully in the regime of the “known,” heuristics typically provide better results for active operational risk management (in the sense of forecasting). However, precursors in existing data can open a chance for machine learning to provide early warnings even in the regime of the “unknown unknown.”
A commentary on Commentary: Aesthetic Pleasure versus Aesthetic Interest: The Two Routes to Aesthetic Liking by Consoli, G. (2017). Front. Psychol. 8:1197. doi: 10.3389/fpsyg.2017.01197
In his commentary on the paper “Aesthetic Pleasure versus Aesthetic Interest: The Two Routes to Aesthetic Liking,” authored by Jan R. Landwehr and myself (Graf and Landwehr, 2017), Consoli (2017) deplores two aspects of our paper: first, an inadequate definition and operationalization of the key constructs aesthetic pleasure, aesthetic interest, and aesthetic liking (or aesthetic attractiveness, respectively); second, the conclusions drawn from our empirical studies. While I acknowledge that one may have a different theoretical perspective on aesthetic perception and evaluation, Consoli's (2017) commentary does not even address the empirical data of our studies but only our theoretical assumptions and definitions. In the following, I address Consoli's (2016, 2017) arguments in more detail and corroborate our theoretical reasoning with the empirical data of our studies (Graf and Landwehr, 2017).
Inhibitory interneurons govern virtually all computations in neocortical circuits and are in turn controlled by neuromodulation. For rodents, a detailed understanding exists of the distinct marker expression, physiology, and neuromodulator responses of different interneuron types, and recent studies have highlighted the role of specific interneurons in converting rapid neuromodulatory signals into altered sensory processing during locomotion, attention, and associative learning. Whether similar mechanisms exist in human neocortex, however, remains little understood. Here, we use whole-cell recordings combined with agonist application, transgenic mouse lines, in situ hybridization, and unbiased clustering to directly determine these features in human layer 1 interneurons (L1-INs). Our results indicate pronounced nicotinic recruitment of all L1-INs, whereas only a small subset co-expresses the ionotropic HTR3 receptor. In addition to human specializations, we observe two comparable, physiologically and genetically distinct L1-IN types in both species, together indicating conserved rapid neuromodulation of human neocortical circuits through layer 1.