Having a gatekeeper position in a collaborative network offers firms great potential to gain competitive advantages. However, it is not well understood what kinds of collaboration are associated with such a position. Conceptually grounded in social network theory, this study draws on the resource-based view and the relational factors view to investigate which types of collaboration characterize firms in a gatekeeper position, which could ultimately improve firm performance in subsequent periods. The empirical analysis utilizes a unique longitudinal data set to examine dynamic network formation. We used a data crawling approach to reconstruct collaboration networks among the 500 largest companies in Germany over nine years and matched these networks with performance data. The results indicate that firms in gatekeeper positions often engage in medium-intensity collaborations and are less likely to engage in weak-intensity collaborations. Strong-intensity collaborations are not related to the likelihood of being a gatekeeper. Our study further reveals that a firm's knowledge base is an important moderator and that this knowledge base can increase the benefits of having a gatekeeper position in terms of firm performance.
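The gatekeeper notion from social network theory can be made concrete in code. The following is a hedged sketch, not the study's actual measurement: it uses a purely invented toy network and treats a firm as occupying a gatekeeper-like brokerage position when its removal disconnects otherwise separate parts of the collaboration network.

```python
from collections import defaultdict, deque

def connected(adj, nodes):
    """BFS connectivity check restricted to the given node set."""
    nodes = set(nodes)
    if not nodes:
        return True
    start = next(iter(nodes))
    seen = {start}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v in nodes and v not in seen:
                seen.add(v)
                queue.append(v)
    return seen == nodes

def gatekeepers(edges):
    """Firms whose removal disconnects the collaboration network,
    i.e., that broker all paths between otherwise separate groups."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    nodes = set(adj)
    return sorted(n for n in nodes if not connected(adj, nodes - {n}))

# Invented toy network: firm C bridges the cluster {A, B} and the chain D-E.
edges = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E")]
print(gatekeepers(edges))  # ['C', 'D']
```

Real gatekeeper measures are usually continuous (e.g., betweenness centrality); the cut-vertex criterion above is the simplest binary analogue.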
The current economic landscape is complex and globalized, and it imposes on individuals the responsibility for their own financial security. This situation has been intensified by the COVID-19 crisis, since short-time work and layoffs significantly limit the availability of financial resources for individuals. Due to the long duration of the lockdown, these challenges will have a long-term impact and affect the financial well-being of many citizens. Moreover, it can be assumed that the consequences of this crisis will once again particularly affect groups of people who have already frequently been identified as having low financial literacy. Financial literacy is therefore an important target for educational measures and interventions. However, it cannot be considered in isolation but must take into account the many potential factors that influence financial literacy alone or in combination. These include personality traits and socio-demographic factors as well as the (in)ability to delay gratification. Against this background, individualized support can be offered. With this in mind, in the first step of this study, we analyze the complex interaction of personality traits, socio-demographic factors, the (in)ability to delay gratification, and financial literacy. In the second step, we differentiate the identified effects across different groups to identify moderating effects, which, in turn, allow conclusions to be drawn about the need for individualized interventions. The results show that gender and educational background moderate the effects occurring between self-reported financial literacy, financial learning opportunities, delay of gratification, and financial literacy.
The health and genetic data of deceased people are a particularly important asset in the field of biomedical research. However, in practice, using them is complicated, as the legal framework that should regulate their use has not been fully developed yet. The General Data Protection Regulation (GDPR) is not applicable to such data and the Member States have not been able to agree on an alternative regulation. Recently, normative models have been proposed in an attempt to face this issue. The most well-known of these is posthumous medical data donation (PMDD). This proposal supports an opt-in donation system of health data for research purposes. In this article, we argue that PMDD is not a useful model for addressing the issue at hand, as it does not consider that some of these data (the genetic data) may be the personal data of the living relatives of the deceased. Furthermore, we find the reasons supporting an opt-in model less convincing than those that vouch for alternative systems. Indeed, we propose a normative framework that is based on the opt-out system for non-personal data combined with the application of the GDPR to the relatives' personal data.
Vulnerability comes, according to Orio Giarini, with two risks: human-made risks, also called entrepreneurial risks, and natural or pure risks such as accidents and earthquakes. Both types of risk are growing in dimension and are increasingly interrelated. To control this vulnerability, sophisticated insurance products are called for. Here, mutual insurance is relevant, in particular when risks are large, probabilities uncertain or unknown, and events interrelated or correlated. This paper discusses three examples and shows the advantages of mutual insurance in each: unknown probabilities connected with unforeseeable events, correlated risks, and macroeconomic or demographic risks.
The quality of life: protecting non-personal interests and non-personal data in the age of big data
(2021)
Under the current legal paradigm, the rights to privacy and data protection provide natural persons with subjective rights to protect their private interests, such as those related to human dignity, individual autonomy and personal freedom. In principle, when data processing is based on non-personal or aggregated data, or when such data processes have an impact on societal rather than individual interests, citizens cannot rely on these rights. Although this legal paradigm has worked well for decades, it is increasingly put under pressure because Big Data processes are typically based on indiscriminate rather than targeted data collection, because the high volumes of data are processed on an aggregated rather than a personal level, and because the policies and decisions based on the statistical correlations found through algorithmic analytics are mostly addressed at large groups or society as a whole rather than at specific individuals. This means that large parts of the data-driven environment are currently left unregulated and that individuals are often unable to rely on their fundamental rights when addressing the more systemic effects of Big Data processes. This article will discuss how this tension might be relieved by turning to the notion of 'quality of life', which has the potential of becoming the new standard for the European Court of Human Rights (ECtHR) when dealing with privacy-related cases.
Digital wealth and its necessary regulation have gained prominence in recent years. The European Commission has published several documents and policy proposals relating, directly or indirectly, to the data economy. A data economy can be defined as an ecosystem of different types of market players collaborating to ensure that data is accessible and usable in order to extract value from data through, for example, creating a variety of applications with great potential to improve daily life. The value of data can increase from EUR 257 billion (1.85% of EU Gross Domestic Product (GDP)) to EUR 643 billion by 2020 (3.17% of EU GDP), according to the EU Commission. The legal implications of the increasing value of the data economy are clear; hence the need to address the challenges presented by its legal regulation.
The mobile games business is an ever-increasing sub-sector of the entertainment industry. Due to its high profitability but also high risk and competitive atmosphere, game publishers need to develop strategies that allow them to release new products at a high rate, but without compromising the already short lifespan of the firms' existing games. Successful game publishers must enlarge their user base by continually releasing new and entertaining games, while simultaneously motivating the current user base of existing games to remain active for more extended periods. Since the core-component reuse strategy has proven successful in other software products, this study investigates the advantages and drawbacks of this strategy in mobile games. Drawing on the widely accepted Product Life Cycle concept, the study investigates whether the introduction of a new mobile game built with core-components of an existing mobile game curtails the incumbent's product life cycle. Based on real and granular data on the gaming activity of a popular mobile game, the authors find that by promoting multi-homing (i.e., by smartly interlinking the incumbent and new product with each other so that users start consuming both games in parallel), the core-component reuse strategy can prolong the lifespan of the incumbent game.
Contemporary information systems make widespread use of artificial intelligence (AI). While AI offers various benefits, it can also be subject to systematic errors, whereby people from certain groups (defined by gender, age, or other sensitive attributes) experience disparate outcomes. In many AI applications, disparate outcomes confront businesses and organizations with legal and reputational risks. To address these risks, technologies for so-called "AI fairness" have been developed, by which AI is adapted such that mathematical constraints for fairness are fulfilled. However, the financial costs of AI fairness are unclear. Therefore, the authors develop AI fairness for a real-world use case from e-commerce, where coupons are allocated according to clickstream sessions. In their setting, the authors find that AI fairness successfully manages to adhere to fairness requirements while reducing the overall prediction performance only slightly. However, they find that AI fairness also results in an increase in financial cost. The paper's findings thus contribute to designing information systems on the basis of AI fairness.
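One common mathematical constraint for fairness is demographic parity: equal positive-decision rates across sensitive groups. The following is an illustrative sketch only (the paper's exact fairness formulation and data are not reproduced here; the group labels and decisions below are invented) showing how the parity gap of a coupon-allocation rule could be measured:

```python
def demographic_parity_gap(decisions, groups):
    """Largest absolute difference in positive-decision rates between
    sensitive groups (demographic parity, one common fairness constraint)."""
    rates = {}
    for g in set(groups):
        sel = [d for d, gg in zip(decisions, groups) if gg == g]
        rates[g] = sum(sel) / len(sel)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

# Invented coupon-allocation decisions (1 = coupon granted) for
# clickstream sessions tagged with a sensitive group label.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(decisions, groups))  # 0.5
```

A fairness-constrained model would be retrained or post-processed until this gap falls below a chosen tolerance, which is exactly where the trade-off against prediction performance and financial cost arises.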
This paper uses historical monthly temperature level data for a panel of 114 countries to identify the effects of within-year temperature level variability on productivity growth in five different macro regions, i.e., (1) Africa, (2) Asia, (3) Europe, (4) North America and (5) South America. We find two primary results. First, higher intra-annual temperature variability reduces (increases) productivity in Europe and North America (Asia). Second, higher intra-annual temperature variability has no significant effects on productivity in Africa and South America. Additional empirical tests also indicate the following: (1) rising intra-annual temperature variability reduces productivity (even though less significantly) in both tropical and non-tropical regions, (2) inter-annual temperature variability reduces (increases) productivity in North America (Europe) and (3) winter and summer inter-annual temperature variability generates a drop in productivity in both Europe and North America. Taken together, these findings indicate that temperature variability shocks tend to have stronger adverse economic effects among richer economies. In a production economy featuring long-run productivity and temperature volatility shocks, we quantify these negative impacts and find welfare losses of 2.9% (1%) in Europe (North America).
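Intra-annual (within-year) temperature variability of the kind used as a regressor here can be computed as the dispersion of the twelve monthly temperature levels in a country-year. A minimal sketch with invented illustrative numbers (not the paper's data or its exact variability measure):

```python
from statistics import pstdev

def intra_annual_variability(monthly_temps):
    """Within-year temperature variability for one country-year:
    the standard deviation of the twelve monthly temperature levels."""
    assert len(monthly_temps) == 12, "expects one value per month"
    return pstdev(monthly_temps)

# Hypothetical monthly mean temperatures (deg C) for a temperate
# and a tropical country; the seasonal swing drives the measure.
temperate = [0, 1, 5, 9, 14, 18, 20, 19, 15, 10, 5, 1]
tropical  = [26, 27, 27, 28, 28, 27, 26, 26, 27, 27, 26, 26]
print(intra_annual_variability(temperate) > intra_annual_variability(tropical))  # True
```

Inter-annual variability, by contrast, would be computed across the yearly (or seasonal) means of different years rather than across months within one year.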
The aim of this study was to identify and evaluate different de-identification techniques that may be used in several mobility-related use cases. To do so, four use cases have been defined in accordance with a project partner that focused on the legal aspects of this project, as well as with the VDA/FAT working group. Each use case aims to create different legal and technical issues with regard to the data and information that are to be gathered, used and transferred in the specific scenario. Use cases should therefore differ in the type and frequency of data that is gathered as well as the level of privacy and the speed of computation that is needed for the data. Upon identifying use cases, a systematic literature review has been performed to identify suitable de-identification techniques to provide data privacy. Additionally, external databases have been considered, as data that is expected to be anonymous might be re-identified through the combination of existing data with such external data.
For each case, requirements and possible attack scenarios were created to illustrate where exactly privacy-related issues could occur and how exactly such issues could impact data subjects, data processors or data controllers. Suitable de-identification techniques should be able to withstand these attack scenarios. Based on a series of additional criteria, de-identification techniques are then analyzed for each use case. Possible solutions are then discussed individually in chapters 6.1 - 6.2. It is evident that no one-size-fits-all approach to protect privacy in the mobility domain exists. While all techniques that are analyzed in detail in this report, e.g., homomorphic encryption, differential privacy, secure multiparty computation and federated learning, are able to successfully protect user privacy in certain instances, their overall effectiveness differs depending on the specifics of each use case.
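Of the techniques analyzed in the report, differential privacy is perhaps the simplest to sketch. The following is an illustrative, hedged example of the classic Laplace mechanism (not the report's implementation; the trip-count scenario and all numbers are invented):

```python
import math
import random

def laplace_noise(scale, rng=random):
    """Sample Laplace(0, scale) noise via inverse-transform sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def dp_release(true_value, sensitivity, epsilon, rng=random):
    """Release a statistic under epsilon-differential privacy by adding
    Laplace(sensitivity / epsilon) noise (the classic Laplace mechanism)."""
    return true_value + laplace_noise(sensitivity / epsilon, rng)

# Example: a count query over mobility (trip) records. One person can
# change the count by at most 1, so the sensitivity is 1; smaller
# epsilon means stronger privacy but noisier answers.
print(dp_release(1280, sensitivity=1, epsilon=0.5))
```

This protects aggregate statistics; for the use cases requiring computation on raw records across parties, the report's other candidates (homomorphic encryption, secure multiparty computation, federated learning) apply instead.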