TY - UNPD
A1 - Klockmann, Victor
A1 - Schenk, Alicia von
A1 - Villeval, Marie-Claire
T1 - Artificial intelligence, ethics, and diffused pivotality
T2 - SAFE working paper ; No. 336
N2 - With Big Data, decisions made by machine learning algorithms depend on training data generated by many individuals. In an experiment, we identify the effect of varying individual responsibility for the moral choices of an artificially intelligent algorithm. Across treatments, we manipulated the sources of training data and thus the impact of each individual’s decisions on the algorithm. Diffusing such individual pivotality for algorithmic choices increased the share of selfish decisions and weakened revealed prosocial preferences. This does not result from a change in the structure of incentives. Rather, our results show that Big Data offers an excuse for selfish behavior through lower responsibility for one’s own and others’ fate.
T3 - SAFE working paper - 336
KW - Artificial Intelligence
KW - Big Data
KW - Pivotality
KW - Ethics
KW - Experiment
Y1 - 2021
UR - http://publikationen.ub.uni-frankfurt.de/frontdoor/index/index/docId/64016
UR - https://nbn-resolving.org/urn:nbn:de:hebis:30:3-640162
UR - https://ssrn.com/abstract=4003065
N1 - Financial research support from the Leibniz Institute for Financial Research SAFE, the Goethe University Frankfurt, and the LABEX CORTEX (ANR-11-LABX-0042) of Université de Lyon, within the program Investissements d'Avenir (ANR-11-IDEX-007) operated by the French National Research Agency (ANR), is gratefully acknowledged.
IS - December 21, 2021
PB - SAFE
CY - Frankfurt am Main
ER -