TY - UNPD
A1 - Bauer, Kevin
A1 - Liebich, Lena
A1 - Hinz, Oliver
A1 - Kosfeld, Michael
T1 - Decoding GPT’s hidden 'rationality' of cooperation
N2 - In current discussions on large language models (LLMs) such as GPT, understanding their ability to emulate facets of human intelligence is central. Using behavioral economic paradigms and structural models, we investigate GPT’s cooperativeness in human interactions and assess its rational, goal-oriented behavior. We discover that GPT cooperates more than humans and has overly optimistic expectations about human cooperation. Intriguingly, additional analyses reveal that GPT’s behavior is not random; it displays a level of goal-oriented rationality surpassing that of its human counterparts. Our findings suggest that GPT hyper-rationally aims to maximize social welfare, coupled with a striving for self-preservation. Methodologically, our research highlights how structural models, typically employed to decipher human behavior, can illuminate the rationality and goal orientation of LLMs. This opens a compelling path for future research into the intricate rationality of sophisticated, yet enigmatic artificial agents.
T3 - SAFE working paper - 401
KW - large language models
KW - cooperation
KW - goal orientation
KW - economic rationality
Y1 - 2023
UR - http://publikationen.ub.uni-frankfurt.de/frontdoor/index/index/docId/71519
UR - https://nbn-resolving.org/urn:nbn:de:hebis:30:3-715196
UR - https://ssrn.com/abstract=4576036
N1 - We gratefully acknowledge research support from the University of Mannheim, the Leibniz Institute for Financial Research SAFE, and the Goethe University Frankfurt.
PB - SAFE
CY - Frankfurt am Main
ER -