Using experimental data from a comprehensive field study, we explore the causal effects of algorithmic discrimination on economic efficiency and social welfare. We harness economic, game-theoretic, and state-of-the-art machine learning concepts to overcome the central challenge of missing counterfactuals, which generally impedes the assessment of the economic downstream consequences of algorithmic discrimination. This allows us to precisely quantify downstream efficiency and welfare ramifications and provides a unique opportunity to assess whether the introduction of an AI system is actually desirable. Our results highlight that an AI system's capacity to enhance welfare critically depends on the degree of its inherent algorithmic bias. While an unbiased system in our setting outperforms humans and creates substantial welfare gains, the positive impact steadily decreases and ultimately reverses the more biased an AI system becomes. We show that this relation is particularly concerning in selective-labels environments, i.e., settings where outcomes are observed only if decision-makers take a particular action, so that the data are selectively labeled, because commonly used technical performance metrics such as precision are prone to be deceptive. Finally, our results show that continued learning, by creating feedback loops, can remedy algorithmic discrimination and its associated negative effects over time.
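The deceptiveness of precision under selective labels can be made concrete with a minimal numeric sketch. The data, the lending framing, and all numbers below are purely illustrative assumptions, not taken from the study: they only show why a metric computed exclusively on labeled (approved) cases can look acceptable while a biased decision rule rejects many good candidates.

```python
# Illustrative selective-labels sketch (hypothetical data, not from the paper).
# Each applicant is (model_approves, would_repay); repayment is only ever
# observed for approved applicants, so the data is selectively labeled.
applicants = [
    (True,  True), (True,  True), (True,  True), (True,  False),
    (False, True), (False, True), (False, True), (False, False),
]

# Observed precision: computed only on the approved (labeled) pool.
approved = [repay for ok, repay in applicants if ok]
observed_precision = sum(approved) / len(approved)

# Full-information view: the rejected pool also contained mostly
# creditworthy applicants, a loss that observed precision never reveals.
rejected = [repay for ok, repay in applicants if not ok]
missed_good = sum(rejected)

print(observed_precision)  # 0.75 looks acceptable on labeled data alone
print(missed_good)         # yet 3 of 4 rejected applicants would have repaid
```

The point of the sketch is that `observed_precision` is blind to the counterfactual outcomes of rejected cases, which is exactly the missing-counterfactuals problem the abstract describes.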
This article discusses the counterpart of interactive machine learning, i.e., human learning while being in the loop of a human-machine collaboration. For such cases, we propose the use of a Contradiction Matrix to assess the overlap and the contradictions between human and machine predictions. We show in a small-scale user study with experts in the area of pneumology that (1) machine-learning-based systems can classify X-rays with respect to diseases with meaningful accuracy, (2) humans partly use contradictions to reconsider their initial diagnosis, and (3) this leads to a higher overlap between human and machine diagnoses at the end of the collaboration. We argue that disclosing information on diagnosis uncertainty can be beneficial in making the human expert reconsider her or his initial assessment, which may ultimately result in a deliberate agreement. In light of the observations from our project, it becomes apparent that collaborative learning in such a human-in-the-loop scenario could lead to mutual benefits for both human learning and interactive machine learning. Bearing in mind the differences in the reasoning and learning processes of humans and intelligent systems, we argue that interdisciplinary research teams have the best chances of tackling this undertaking and generating valuable insights.
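A Contradiction Matrix, as described here, can be read as a cross-tabulation of human against machine predictions: diagonal cells hold agreements, off-diagonal cells hold contradictions that may prompt the expert to reconsider. The following is a minimal sketch under that reading; the disease labels and prediction lists are invented for illustration and do not come from the study.

```python
from collections import Counter

# Hypothetical diagnoses for five X-rays (illustrative labels only).
labels  = ["healthy", "pneumonia", "effusion"]
human   = ["healthy", "pneumonia", "healthy", "effusion", "pneumonia"]
machine = ["healthy", "effusion",  "healthy", "effusion", "pneumonia"]

# Cross-tabulate (human, machine) pairs into a contradiction matrix.
matrix = Counter(zip(human, machine))

# Agreements sit on the diagonal; contradictions sit off it.
agreements     = sum(matrix[(l, l)] for l in labels)
contradictions = sum(n for (h, m), n in matrix.items() if h != m)

print(agreements, contradictions)  # 4 agreements, 1 contradiction
```

In a workflow like the one the abstract describes, the off-diagonal cells (here, the single human "pneumonia" vs. machine "effusion" case) would be the candidates surfaced to the expert for a second look.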
COVID-19 has again tightened its grip around the world and on the health system. This article gives an introduction to explainable interactive machine learning and provides insights into how this method may not only help in engineering more powerful AI systems, but also how it may help to ease the burden of viral strains on the healthcare system.
Chatbots become human(like): the influence of gender on cooperative interactions with chatbots
(2019)
Current technological advancements of conversational agents (CAs) promise new potential for human-computer collaboration. Yet both practitioners and researchers face challenges in designing these information systems such that CAs increase not only in intelligence but also in effectiveness. Through our research endeavour, we provide new and counterintuitive insights that are crucial for the effective design of cooperative CAs.
Artificial Intelligence (AI) and Machine Learning (ML) are currently hot topics in industry and business practice, while management-oriented research disciplines seem reluctant to adopt these sophisticated data-analytics methods as research instruments. Even the Information Systems (IS) discipline, with its close connections to Computer Science, seems conservative when conducting empirical research endeavors. To assess the magnitude of the problem and to understand its causes, we conducted a bibliographic review of publications in high-level IS journals. We reviewed 1,838 articles that matched corresponding keyword queries in journals from the AIS senior scholar basket, Electronic Markets, and Decision Support Systems (ranked B). In addition, we conducted a survey among IS researchers (N = 110). Based on the findings from our sample, we evaluate different potential causes that could explain why ML methods are rather underrepresented in top-tier journals and discuss how the IS discipline could successfully incorporate ML methods in research undertakings.