In current discussions on large language models (LLMs) such as GPT, understanding their ability to emulate facets of human intelligence stands central. Using behavioral economic paradigms and structural models, we investigate GPT's cooperativeness in human interactions and assess its rational goal-oriented behavior. We discover that GPT cooperates more than humans and has overly optimistic expectations about human cooperation. Intriguingly, additional analyses reveal that GPT's behavior is not random; it displays a level of goal-oriented rationality surpassing human counterparts. Our findings suggest that GPT hyper-rationally aims to maximize social welfare, coupled with a drive for self-preservation. Methodologically, our research highlights how structural models, typically employed to decipher human behavior, can illuminate the rationality and goal-orientation of LLMs. This opens a compelling path for future research into the intricate rationality of sophisticated, yet enigmatic artificial agents.
Chatbots become human(like): the influence of gender on cooperative interactions with chatbots
(2019)
Current technological advancements of conversational agents (CAs) promise new potentials for human-computer collaborations. Yet, both practitioners and researchers face challenges in designing these information systems, such that CAs not only increase in intelligence but also in effectiveness. Through our research endeavour, we provide new and counterintuitive insights that are crucial for the effective design of cooperative CAs.