This paper explores the interplay of feature-based explainable AI (XAI) techniques, information processing, and human beliefs. Using a novel experimental protocol, we study the impact of providing users with explanations about how an AI system weighs input information to produce individual predictions (LIME) on users' weighting of information and their beliefs about the task-relevance of that information. On the one hand, we find that feature-based explanations cause users to alter their mental weighting of available information according to the observed explanations. On the other hand, explanations lead to asymmetric belief adjustments that we interpret as a manifestation of confirmation bias. Trust in the prediction accuracy plays an important moderating role for XAI-enabled belief adjustments. Our results show that feature-based XAI does not merely influence decisions superficially but actually changes internal cognitive processes, bearing the potential to manipulate human beliefs and reinforce stereotypes. Hence, current regulatory efforts that aim at enhancing algorithmic transparency may benefit from going hand in hand with measures ensuring the exclusion of sensitive personal information from XAI systems. Overall, our findings put assertions that XAI is the silver bullet solving all of AI systems' (black box) problems into perspective.
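To make the mechanism concrete: the sketch below shows how LIME produces the per-feature weights that such explanations expose to users. It is a minimal illustration only; the dataset, model, and parameters are assumptions for demonstration, not the paper's actual experimental setup.

```python
# Minimal sketch of feature-based explanation with LIME.
# Dataset, model, and parameter choices are illustrative assumptions.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
X, y = data.data, data.target

# Any black-box classifier with a predict_proba method works here.
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# LIME fits a local linear surrogate around one instance and reports
# per-feature weights, i.e. how the model "weighs" each input locally.
exp = explainer.explain_instance(X[0], model.predict_proba, num_features=5)
for feature, weight in exp.as_list():
    print(f"{feature}: {weight:+.3f}")
```

These signed per-feature weights are the kind of explanation shown to participants; the experiment then asks how seeing them shifts users' own weighting of the same information.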
The proliferation of the Internet has enabled platform intermediaries to create two-sided markets in many industries. In such markets, network effects often occur, and these can differ for new and existing customers. The authors develop an influx-outflow model to investigate the conditions under which the estimation of same-side and cross-side network effects should distinguish between their impact on the number of new customers (i.e., acquisition) and on existing customers (i.e., their activity).
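As a rough illustration of what an influx-outflow decomposition looks like (a generic sketch under assumed notation, not the authors' actual specification): the installed base on side A of the platform evolves as new customers arrive and existing ones churn, and same-side and cross-side effects are allowed to enter the two components separately.

```latex
% Generic influx-outflow sketch (illustrative, not the authors' model):
% the installed base on side A equals last period's base plus influx
% minus outflow,
\[
  C^{A}_{t} \;=\; C^{A}_{t-1} \;+\; \mathrm{influx}^{A}_{t} \;-\; \mathrm{outflow}^{A}_{t},
\]
% where acquisition may depend on the same-side base (\alpha_1) and the
% cross-side base (\alpha_2), estimated separately from effects on the
% activity of existing customers:
\[
  \mathrm{influx}^{A}_{t} \;=\; \alpha_0 \;+\; \alpha_1\, C^{A}_{t-1} \;+\; \alpha_2\, C^{B}_{t-1} \;+\; \varepsilon_t .
\]
```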
In current discussions on large language models (LLMs) such as GPT, understanding their ability to emulate facets of human intelligence stands central. Using behavioral economic paradigms and structural models, we investigate GPT's cooperativeness in human interactions and assess its rational, goal-oriented behavior. We discover that GPT cooperates more than humans and has overly optimistic expectations about human cooperation. Intriguingly, additional analyses reveal that GPT's behavior is not random; it displays a level of goal-oriented rationality surpassing that of its human counterparts. Our findings suggest that GPT hyper-rationally aims to maximize social welfare, coupled with a drive for self-preservation. Methodologically, our research highlights how structural models, typically employed to decipher human behavior, can illuminate the rationality and goal-orientation of LLMs. This opens a compelling path for future research into the intricate rationality of sophisticated, yet enigmatic artificial agents.
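For illustration, a behavioral-economic elicitation of an LLM's cooperativeness might look like the following one-shot prisoner's dilemma prompt. This is a hypothetical sketch using the OpenAI Python client; the payoffs, prompt wording, and model name are assumptions, not the paper's actual protocol.

```python
# Hypothetical sketch: eliciting a cooperate/defect choice from an LLM
# in a one-shot prisoner's dilemma. Payoffs and wording are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "You are playing a one-shot prisoner's dilemma with another player. "
    "If both cooperate, each earns 3 points; if both defect, each earns 1; "
    "if one defects while the other cooperates, the defector earns 5 and "
    "the cooperator earns 0. Answer with exactly one word: COOPERATE or DEFECT."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": PROMPT}],
    temperature=1.0,  # sample repeatedly to estimate a cooperation rate
)
print(response.choices[0].message.content)
```

Repeated sampling of such choices yields the behavioral data to which structural models of preferences and beliefs can then be fitted, as the abstract describes.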
Chatbots become human(like): the influence of gender on cooperative interactions with chatbots
(2019)
Current technological advancements of conversational agents (CAs) promise new potential for human-computer collaboration. Yet both practitioners and researchers face challenges in designing these information systems such that CAs increase not only in intelligence but also in effectiveness. Through our research endeavour, we provide new and counterintuitive insights that are crucial for the effective design of cooperative CAs.