TY - UNPD
A1 - Klockmann, Victor
A1 - Schenk, Alicia von
A1 - Villeval, Marie-Claire
T1 - Artificial intelligence, ethics, and intergenerational responsibility
T2 - SAFE working paper ; No. 335
N2 - In more and more situations, artificially intelligent algorithms have to model humans’ (social) preferences on whose behalf they increasingly make decisions. They can learn these preferences through the repeated observation of human behavior in social encounters. In such a context, do individuals adjust the selfishness or prosociality of their behavior when it is common knowledge that their actions produce various externalities through the training of an algorithm? In an online experiment, we let participants’ choices in dictator games train an algorithm. Thereby, they create an externality on the future decision making of an intelligent system that affects future participants. We show that individuals who are aware of the consequences of their training on the payoffs of a future generation behave more prosocially, but only when they bear the risk of being harmed themselves by future algorithmic choices. In that case, the externality of artificial intelligence training induces a significantly higher share of egalitarian decisions in the present.
T3 - SAFE working paper - 335
KW - Artificial Intelligence
KW - Morality
KW - Prosociality
KW - Generations
KW - Externalities
Y1 - 2021
UR - http://publikationen.ub.uni-frankfurt.de/frontdoor/index/index/docId/63522
UR - https://nbn-resolving.org/urn:nbn:de:hebis:30:3-635224
UR - https://ssrn.com/abstract=4002578
N1 - Financial research support from the Leibniz Institute for Financial Research SAFE, the Goethe University Frankfurt, and the LABEX CORTEX (ANR-11-LABX-0042) of Université de Lyon, within the program Investissements d'Avenir (ANR-11-IDEX-007) operated by the French National Research Agency (ANR), is gratefully acknowledged.
IS - December 21, 2021
PB - SAFE
CY - Frankfurt am Main
ER -