The use of artificial intelligence (AI) technologies opens up many opportunities but also carries many risks, especially in the financial industry. This white paper provides an overview of the current state of the application and regulation of AI technologies in the financial industry and discusses the opportunities and risks of AI. AI has numerous fields of application in the financial industry, including chatbots, intelligent customer assistants, automated high-frequency trading, automated fraud detection, compliance monitoring, facial recognition software for customer identification, and much more. Financial supervisory authorities, too, increasingly deploy AI applications to scan large and complex data sets (big data) for patterns in an automated and scalable way and to fulfil their supervisory duties.
Regulating AI in the financial industry is a balancing act. On the one hand, flexibility must be preserved in order not to stifle innovation and not to fall behind in international competition. Strict requirements can act as a barrier to the successful (further) development of AI applications in the financial industry. On the other hand, personality rights must be protected and decision-making processes must remain traceable. The lack of explainability and interpretability of AI models stems primarily from the opacity of most of today's AI applications, in which the nature of the inputs and outputs is observable and understandable, but the exact processing steps in between are not (black-box principle).
This tension is also reflected in the current regulatory approach of various authorities. On the one hand, the positive aspects of AI are emphasized, such as gains in efficiency and effectiveness as well as increases in profitability and quality (Bundesregierung, 2019) or new methods of risk analysis in financial market regulation (BaFin, 2018a). On the other hand, it is pointed out that decisions made by AI must always remain the responsibility of humans (Art. 22 GDPR) and that the democratic framework of the rule of law must be upheld (FinTechRat, 2017).
Looking ahead, we see a need to develop international regulation further in a principles-based, harmonized, and technology-neutral way, without slowing the development of new AI-based business models. In the global competition, Europe should take a pioneering role in regulating the use of AI and thereby export its democratic values of digital freedom, self-determination, and the right to information worldwide. Funding programmes should place a stronger focus on the development of sustainable and responsible AI in banks. This includes, in particular, the (further) development of broadly applicable methods that provide human-interpretable explanations for generated outputs and counteract problems such as the black-box principle.
From the perspective of companies in the financial industry, cooperating with BigTech companies could be a sensible way to jointly exploit the technology's full potential. A shared semantic metadata model for describing the data generated in the financial industry would also be useful. In the future, artificial intelligences could take data from social networks into account or negotiate smart contracts. One of the biggest challenges ahead will be recruiting suitable personnel.
Using a novel experimental design, I test how exposure to information about a group’s relative performance causally affects members’ level of identification and thereby their propensity to harm affiliates of comparison groups. I find that being informed about either a high or a poor relative performance of the ingroup fosters identification to a similar degree. Stronger ingroup identification, in turn, increases hostility toward the comparison group. When participants learn about a poor relative performance, there appears to be an additional direct level effect that further elevates hostile discrimination. My findings shed light on a specific channel through which social media may contribute to intergroup fragmentation and polarization.
Using experimental data from a comprehensive field study, we explore the causal effects of algorithmic discrimination on economic efficiency and social welfare. We harness economic, game-theoretic, and state-of-the-art machine learning concepts that allow us to overcome the central challenge of missing counterfactuals, which generally impedes assessing the economic downstream consequences of algorithmic discrimination. This way, we are able to precisely quantify downstream efficiency and welfare ramifications, which provides us with a unique opportunity to assess whether the introduction of an AI system is actually desirable. Our results highlight that AI systems’ capability to enhance welfare critically depends on the degree of inherent algorithmic bias. While an unbiased system in our setting outperforms humans and creates substantial welfare gains, the positive impact steadily decreases and ultimately reverses the more biased an AI system becomes. We show that this relation is particularly concerning in selective-labels environments, i.e., settings where outcomes are only observed if decision-makers take a particular action so that the data is selectively labeled, because commonly used technical performance metrics such as precision are prone to be deceptive. Finally, our results show that continued learning, by creating feedback loops, can remedy algorithmic discrimination and the associated negative effects over time.
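The selective-labels argument can be illustrated with a small simulation. The following is a minimal, hypothetical sketch (not the paper's actual setting, data, or measures): a lending rule that systematically excludes one group can report roughly the same precision on the selectively labeled outcomes as an unbiased rule, even though it creates far less welfare, because the outcomes of denied applicants are never observed.

```python
# Hedged toy example of the selective-labels problem; all quantities are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

group = rng.integers(0, 2, n)            # two equally creditworthy groups, 0 and 1
score = rng.uniform(0, 1, n)             # true repayment probability
repays = rng.uniform(0, 1, n) < score    # potential outcome if approved

def evaluate(approve):
    """Precision on selectively labeled data vs. a simple welfare proxy."""
    observed_precision = repays[approve].mean()                  # only approved outcomes are observed
    welfare = int(repays[approve].sum()) - int((~repays[approve]).sum())  # repayments minus defaults
    return observed_precision, welfare

# Unbiased policy: approve everyone with a score above 0.5.
unbiased = score > 0.5
# Biased policy: same threshold, but group 1 is almost never approved.
biased = (score > 0.5) & ((group == 0) | (rng.uniform(0, 1, n) < 0.05))

for name, policy in [("unbiased", unbiased), ("biased", biased)]:
    p, w = evaluate(policy)
    print(f"{name:9s}  precision on observed labels: {p:.3f}   welfare proxy: {w:+d}")

# Both policies show roughly the same precision, because denied applicants' outcomes
# are never observed -- yet the biased policy approves far fewer creditworthy people
# and generates much less welfare.
```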
Business practitioners increasingly use Artificial Intelligence (AI) applications to assist customers in making decisions due to their higher prediction quality. Yet customers are frequently reluctant to rely on advice generated by machines, especially when much is at stake. Our study proposes a solution: bringing a human expert into the loop of machine advice. We empirically test whether customers are more accepting of expert-AI collaborative advice than of expert or AI advice.
Recent regulatory measures such as the European Union’s AI Act require artificial intelligence (AI) systems to be explainable. As such, understanding how explainability impacts human-AI interaction and pinpointing the specific circumstances and groups affected is imperative. In this study, we devise a formal framework and conduct an empirical investigation involving real estate agents to explore the complex interplay between explainability of and delegation to AI systems. On an aggregate level, our findings indicate that real estate agents display a higher propensity to delegate apartment evaluations to an AI system when its workings are explainable, thereby surrendering control to the machine. However, at an individual level, we detect considerable heterogeneity. Agents possessing extensive domain knowledge are generally more inclined to delegate decisions to AI and minimize their effort when provided with explanations. Conversely, agents with limited domain knowledge only exhibit this behavior when explanations correspond with their preconceived notions regarding the relationship between apartment features and listing prices. Our results illustrate that the introduction of explainability in AI systems may transfer decision-making control from humans to AI under the veil of transparency, which has notable implications for policy makers and practitioners that we discuss.
With free product delivery virtually being the standard in e-commerce, product returns pose a major challenge for online retailers and society. For retailers, product returns involve significant transportation, labor, disposal, and administrative costs. From a societal perspective, product returns contribute to greenhouse gas emissions and packaging waste and are often a waste of natural resources. Therefore, reducing product returns has become a key challenge. This paper develops and validates a novel smart green nudging approach to tackle the problem of product returns during customers’ online shopping processes. We combine a green nudge with a novel data enrichment strategy and a modern causal machine learning method. We first run a large-scale randomized field experiment in the online shop of a German fashion retailer to test the efficacy of a novel green nudge. Subsequently, we fuse the data from about 50,000 customers with publicly available aggregate data to create what we call enriched digital footprints and train a causal machine learning system capable of optimizing the administration of the green nudge. We report two main findings: First, our field study shows that the large-scale deployment of a simple, low-cost green nudge can significantly reduce product returns while increasing retailer profits. Second, we show how a causal machine learning system trained on the enriched digital footprint can amplify the effectiveness of the green nudge by “smartly” administering it only to certain types of customers. Overall, this paper demonstrates how combining a low-cost marketing instrument, a privacy-preserving data enrichment strategy, and a causal machine learning method can create a win-win situation from both an environmental and economic perspective by simultaneously reducing product returns and increasing retailers’ profits.
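To make the "smart administration" idea concrete, below is a minimal, hypothetical sketch of how a causal machine learning model (here a simple T-learner on synthetic data) could estimate heterogeneous nudge effects and restrict the nudge to customers for whom the predicted reduction in returns exceeds an assumed nudging cost. The data-generating process, features, and threshold are assumptions for illustration only; they are not the retailer's data or the authors' system.

```python
# Hedged sketch of nudge targeting via a T-learner; synthetic data, illustrative parameters.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
n = 20_000

# Synthetic "enriched digital footprint": basket value, past return rate, regional income proxy.
X = rng.normal(size=(n, 3))
T = rng.integers(0, 2, n)                      # 1 = customer saw the green nudge
base = 0.5 + 0.1 * X[:, 0]                     # baseline return probability
effect = -0.15 * (X[:, 1] > 0)                 # nudge mainly helps frequent returners
y = (rng.uniform(size=n) < np.clip(base + T * effect, 0, 1)).astype(float)  # 1 = item returned

# T-learner: separate outcome models for treated and control customers.
m1 = GradientBoostingRegressor().fit(X[T == 1], y[T == 1])
m0 = GradientBoostingRegressor().fit(X[T == 0], y[T == 0])

# Estimated conditional average treatment effect (CATE) of the nudge on return probability.
cate = m1.predict(X) - m0.predict(X)

# "Smart" administration: nudge only where the predicted return reduction outweighs an
# assumed per-customer cost (expressed in return-probability units).
cost = 0.05
target = cate < -cost
print(f"share of customers targeted: {target.mean():.2%}")
print(f"avg. predicted return reduction among targeted: {-cate[target].mean():.3f}")
```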
This paper explores the interplay of feature-based explainable AI (XAI) techniques, information processing, and human beliefs. Using a novel experimental protocol, we study the impact of providing users with explanations about how an AI system weighs inputted information to produce individual predictions (LIME) on users’ weighting of information and beliefs about the task-relevance of information. On the one hand, we find that feature-based explanations cause users to alter their mental weighting of available information according to observed explanations. On the other hand, explanations lead to asymmetric belief adjustments that we interpret as a manifestation of the confirmation bias. Trust in the prediction accuracy plays an important moderating role for XAI-enabled belief adjustments. Our results show that feature-based XAI does not only superficially influence decisions but really changes internal cognitive processes, bearing the potential to manipulate human beliefs and reinforce stereotypes. Hence, the current regulatory efforts that aim at enhancing algorithmic transparency may benefit from going hand in hand with measures ensuring the exclusion of sensitive personal information in XAI systems. Overall, our findings put assertions that XAI is the silver bullet solving all of AI systems’ (black box) problems into perspective.
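For readers unfamiliar with feature-based explanations, the following minimal sketch shows the kind of LIME output referred to above: local, per-feature weights describing how a model appears to weigh inputted information for one prediction. The synthetic housing data, feature names, and model are illustrative assumptions, not the study's experimental materials; the sketch assumes the third-party `lime` package is installed.

```python
# Hedged sketch: a feature-based (LIME) explanation for a single tabular prediction.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(2)
n = 5_000
feature_names = ["size_sqm", "rooms", "distance_center_km", "year_built"]

# Synthetic apartment data and listing prices (purely illustrative).
X = np.column_stack([
    rng.uniform(30, 150, n),       # size in square meters
    rng.integers(1, 6, n),         # number of rooms
    rng.uniform(0, 20, n),         # distance to city center
    rng.integers(1950, 2023, n),   # year built
])
y = 2_000 * X[:, 0] + 10_000 * X[:, 1] - 5_000 * X[:, 2] + rng.normal(0, 20_000, n)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# LIME fits a local surrogate around one instance and reports per-feature weights,
# i.e., how the model appears to weigh the inputted information for this prediction.
explainer = LimeTabularExplainer(X, feature_names=feature_names, mode="regression")
explanation = explainer.explain_instance(X[0], model.predict, num_features=4)

for feature, weight in explanation.as_list():
    print(f"{feature:35s} {weight:+.0f}")
```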
Advances in Machine Learning (ML) have led organizations to increasingly implement predictive decision aids intended to improve employees’ decision-making performance. While such systems improve organizational efficiency in many contexts, they might be a double-edged sword when there is a danger of system discontinuance. Following cognitive theories, the provision of ML-based predictions can adversely affect the development of decision-making skills, an effect that comes to light when people lose access to the system. The purpose of this study is to put this assertion to the test. Using a novel experiment specifically tailored to deal with organizational obstacles and endogeneity concerns, we show that the initial provision of ML decision aids can latently prevent the development of decision-making skills, which only becomes apparent later, when the system is discontinued. We also find that the degree to which individuals 'blindly' trust observed predictions determines the ultimate performance drop in the post-discontinuance phase. Our results suggest that making it clear to people that ML decision aids are imperfect can have benefits, especially if there is a reasonable danger of (temporary) system discontinuances.