Using experimental data from a comprehensive field study, we explore the causal effects of algorithmic discrimination on economic efficiency and social welfare. We harness economic, game-theoretic, and state-of-the-art machine learning concepts that allow us to overcome the central challenge of missing counterfactuals, which generally impedes assessing the economic downstream consequences of algorithmic discrimination. This way, we can precisely quantify downstream efficiency and welfare ramifications, which gives us a unique opportunity to assess whether the introduction of an AI system is actually desirable. Our results highlight that an AI system's capability to enhance welfare critically depends on the degree of its inherent algorithmic bias. While an unbiased system in our setting outperforms humans and creates substantial welfare gains, the positive impact steadily decreases and ultimately reverses the more biased an AI system becomes. We show that this relation is particularly concerning in selective-labels environments, i.e., settings where outcomes are observed only if decision-makers take a particular action, so that the data is selectively labeled, because commonly used technical performance metrics such as precision are prone to be deceptive. Finally, our results show that continued learning, by creating feedback loops, can remedy algorithmic discrimination and its associated negative effects over time.
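The selective-labels problem can be illustrated with a small simulation (a hypothetical sketch, not the study's actual design or data): when repayment outcomes are only observed for approved applicants, precision computed on that selectively labeled subset can look favorable even though a biased decision rule denies many creditworthy applicants from the disadvantaged group.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical lending setup: two groups with identical true repayment probabilities.
group = rng.integers(0, 2, n)                 # 0 = majority, 1 = minority
repay_prob = rng.uniform(0.4, 0.95, n)        # true probability of repayment
repaid = rng.random(n) < repay_prob           # realized outcome (only observed if approved)

# A biased score: informative signal, but the minority group is penalized.
score = repay_prob - 0.25 * group + rng.normal(0, 0.05, n)
approved = score > 0.6                        # decision rule: approve high-scoring applicants

# Precision on the selectively labeled data (outcomes are seen only for approved cases).
precision_observed = repaid[approved].mean()

# Counterfactual quality: how many creditworthy applicants were denied, by group.
creditworthy = repay_prob > 0.6
denied_majority = (~approved & creditworthy & (group == 0)).sum() / (creditworthy & (group == 0)).sum()
denied_minority = (~approved & creditworthy & (group == 1)).sum() / (creditworthy & (group == 1)).sum()

print(f"precision on approved cases: {precision_observed:.2f}")
print(f"creditworthy applicants denied (majority): {denied_majority:.2%}")
print(f"creditworthy applicants denied (minority): {denied_minority:.2%}")
```

In this toy example the precision on approved cases stays high regardless of the bias, because the denied creditworthy applicants never generate labels, which is exactly why such metrics can be deceptive in selective-labels settings.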
Central bank intervention in the form of quantitative easing (QE) during times of low interest rates is a controversial topic. The author introduces a novel approach to study the effectiveness of such unconventional measures. Using U.S. data on six key financial and macroeconomic variables between 1990 and 2015, the author estimates the economy with artificial neural networks. Historical counterfactual analyses show that real effects are less pronounced than yield effects.
Disentangling the effects of the individual asset purchase programs, impulse response functions provide evidence that QE becomes less effective the further the crisis is overcome. The peak effects of all QE interventions during the Financial Crisis amount to only 1.3 percentage points for GDP growth and 0.6 percentage points for inflation, respectively. Hence, both the timing and the volume of such interventions should be carefully deliberated.
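A minimal sketch of this kind of counterfactual exercise (variable names, model size, and data below are illustrative assumptions, not the paper's specification): fit a neural network that maps lagged macro-financial variables and a QE measure to next-quarter outcomes, then replay history with QE switched off and compare the two simulated paths.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Synthetic stand-in for quarterly observations of
# [gdp_growth, inflation, long_yield, qe_purchases]; the actual study uses six U.S. series.
T = 104
data = rng.normal(size=(T, 4))

X = data[:-1]            # lagged state, including the QE measure in the last column
y = data[1:, :3]         # next-quarter GDP growth, inflation, long yield

model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
model.fit(X, y)

# Counterfactual: replay the sample with QE purchases forced to zero,
# feeding the model's own predictions forward period by period.
baseline, counterfactual = [], []
state = data[0].copy()
state_cf = data[0].copy()
state_cf[3] = 0.0
for t in range(T - 1):
    pred = model.predict(state.reshape(1, -1))[0]
    pred_cf = model.predict(state_cf.reshape(1, -1))[0]
    baseline.append(pred)
    counterfactual.append(pred_cf)
    state = np.append(pred, data[t + 1, 3])   # actual QE path
    state_cf = np.append(pred_cf, 0.0)        # QE switched off

gap = np.array(baseline) - np.array(counterfactual)
print("average GDP-growth gap attributed to QE:", gap[:, 0].mean())
```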
The EU Commission proposed a regulation on artificial intelligence (AI) on 21 April 2021, which categorizes the use of AI in “social credit” as a prohibited application. This paper examines the definition and structure of the Social Credit System in China, which comprises various systems operating at different levels and sectors. The analysis focuses on two main subsystems: the database and one-stop inquiry platform for financial credit records, and the social governance tool designed to facilitate legal and political compliance. The development of the commercial customer credit reference is also explored. This paper further discusses the impacts and concerns associated with the implementation of the Chinese social credit system to raise awareness. The objective is to offer insights from the existing system and contribute to the ongoing discussion on regulating AI applications in social credit within the EU.
In more and more situations, artificially intelligent algorithms have to model the (social) preferences of the humans on whose behalf they increasingly make decisions. They can learn these preferences through repeated observation of human behavior in social encounters. In such a context, do individuals adjust the selfishness or prosociality of their behavior when it is common knowledge that their actions produce various externalities through the training of an algorithm? In an online experiment, we let participants' choices in dictator games train an algorithm. Thereby, they create an externality on the future decision making of an intelligent system that affects future participants. We show that individuals who are aware of the consequences of their training on the payoffs of a future generation behave more prosocially, but only when they bear the risk of being harmed themselves by future algorithmic choices. In that case, the externality of artificial intelligence training induces a significantly higher share of egalitarian decisions in the present.
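A purely illustrative sketch of how such training might work (the abstract does not specify the learning rule; the frequency-based model below is an assumption): the algorithm estimates the share of egalitarian choices from the current generation's dictator-game decisions and reproduces that distribution when allocating payoffs to the future generation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical training data: one row per dictator-game decision,
# 1 = egalitarian split chosen, 0 = selfish split (illustrative format only).
observed_choices = rng.integers(0, 2, size=200)

# A very simple "preference model": the algorithm estimates the probability
# of an egalitarian choice from the observed behavior of the current generation...
p_egalitarian = observed_choices.mean()

# ...and then makes allocation decisions for the next generation by sampling from it.
future_decisions = rng.random(1000) < p_egalitarian

print(f"learned egalitarian rate: {p_egalitarian:.2f}")
print(f"share of egalitarian allocations for the future generation: {future_decisions.mean():.2%}")
```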
With Big Data, decisions made by machine learning algorithms depend on training data generated by many individuals. In an experiment, we identify the effect of varying individual responsibility for the moral choices of an artificially intelligent algorithm. Across treatments, we manipulated the sources of training data and thus the impact of each individual's decisions on the algorithm. Diffusing such individual pivotality for algorithmic choices increased the share of selfish decisions and weakened revealed prosocial preferences. This does not result from a change in the structure of incentives. Rather, our results show that Big Data offers an excuse for selfish behavior through lower responsibility for one's own and others' fates.
Artificial intelligence in heavy-ion collisions: bridging the gap between theory and experiments
(2023)
Artificial Intelligence (AI) methods are employed to study heavy-ion collisions at intermediate collision energies, where QCD matter at high baryon density and moderate temperature is produced. The experimental measurements of various conventional observables, such as collective flow and particle number fluctuations, are usually compared with expensive model calculations to infer the physics governing the evolution of the matter produced in the collisions. Various experimental effects and processing algorithms can greatly affect the sensitivity of these observables. AI methods are used to bridge this gap between theory and experiments of heavy-ion collisions. The problems with conventional methods of analyzing experimental data are illustrated in a comparative study of the Glauber MC model and the UrQMD transport model. It is found that the centrality determination and the estimated fluctuations of the number of participant nucleons suffer from strong model dependencies for Au-Au collisions at 1.23 AGeV. This can bias the results of the experimental analysis if the number of participant nucleons used is not consistent throughout the analysis and in the final model-to-data comparison. The measurable consequences of this model dependence of the number of participant nucleons are also discussed.
In this context, PointNet-based AI models are developed to accurately reconstruct the impact parameter or the number of participant nucleons in a collision event from the hits and/or reconstructed tracks of particles in 10 AGeV Au-Au collisions at the CBM experiment.
In the last part of the thesis, different AI methods to study the equation of state (EoS) at high baryon densities are discussed. First, a Bayesian inference is performed to constrain the density dependence of the EoS from the available experimental measurements of elliptic flow and mean transverse kinetic energy of mid-rapidity protons in intermediate-energy collisions. The UrQMD model was augmented to include arbitrary potentials (or, equivalently, EoSs) in the QMD part to provide a consistent treatment of the EoS throughout the evolution of the system. The experimental data constrain the posterior constructed for the EoS for densities up to four times saturation density. However, beyond three times saturation density, the shape of the posterior depends on the choice of observables used. There is a tension in the measurements at a collision energy of about 4 GeV. This could indicate large uncertainties in the measurements or, alternatively, the inability of the underlying model to describe the observables with a given input EoS. Tighter constraints and fully conclusive statements on the EoS require accurate, high-statistics data in the whole beam energy range of 2-10 GeV, which will hopefully be provided by the beam energy scan programme of STAR-FXT at RHIC, the upcoming CBM experiment at FAIR, and future experiments at HIAF and NICA. Finally, it is shown that the PointNet-based models can also be used to identify the equation of state in the CBM experiment. Despite the uncertainties due to limited detector acceptance and biases in the reconstruction algorithms, the PointNet-based models are able to learn the features that can accurately identify the underlying physics of the collision.
The PointNet-based models are an ideal AI tool to study heavy-ion collisions, not only to identify the geometric event features, such as the impact parameter or the number of participant nucleons, but also to extract abstract physical features, such as the EoS, directly from the detector outputs.
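For illustration, a minimal PointNet-style regressor in PyTorch (a sketch under simplifying assumptions, not the thesis' actual network, input format, or the CBM detector geometry): a shared per-point MLP encodes each detector hit, a symmetric max-pooling step makes the prediction invariant to the ordering of hits, and a small head regresses the impact parameter.

```python
import torch
import torch.nn as nn

class PointNetRegressor(nn.Module):
    """Minimal PointNet-style regressor: shared per-point MLP + order-invariant max pooling."""
    def __init__(self, point_dim=4, hidden=64):
        super().__init__()
        # point_dim is a placeholder for per-hit features, e.g. (x, y, z, energy deposit).
        self.per_point = nn.Sequential(
            nn.Linear(point_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),                     # regressed impact parameter (fm)
        )

    def forward(self, points):                        # points: (batch, n_hits, point_dim)
        features = self.per_point(points)             # (batch, n_hits, hidden)
        pooled = features.max(dim=1).values           # symmetric pooling over hits
        return self.head(pooled).squeeze(-1)

# Toy usage with random "events": 32 events, 500 hits each, 4 features per hit.
model = PointNetRegressor()
events = torch.randn(32, 500, 4)
true_b = torch.rand(32) * 14.0                        # impact parameters in 0-14 fm
loss = nn.functional.mse_loss(model(events), true_b)
loss.backward()
print("toy MSE loss:", loss.item())
```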
Highlights:
• Assessment of body composition parameters in a large cohort of patients with HCC undergoing TACE.
• Fully automated artificial intelligence-based quantitative 3D volumetry of abdominal cavity tissue composition.
• Skeletal muscle volume and related parameters were independent prognostic factors in patients with HCC undergoing TACE.
Background & Aims: Body composition assessment (BCA) parameters have recently been identified as relevant prognostic factors for patients with hepatocellular carcinoma (HCC). Herein, we aimed to investigate the role of BCA parameters for prognosis prediction in patients with HCC undergoing transarterial chemoembolization (TACE).
Methods: This retrospective multicenter study included a total of 754 treatment-naïve patients with HCC who underwent TACE at six tertiary care centers between 2010 and 2020. Fully automated artificial intelligence-based quantitative 3D volumetry of abdominal cavity tissue composition was performed to assess skeletal muscle volume (SM), total adipose tissue (TAT), intra- and intermuscular adipose tissue, visceral adipose tissue, and subcutaneous adipose tissue (SAT) on pre-intervention computed tomography scans. BCA parameters were normalized to the slice number of the abdominal cavity. We assessed the influence of BCA parameters on median overall survival and performed multivariate analysis including established estimates of survival.
Results: Univariate survival analysis revealed that impaired median overall survival was predicted by low SM (p <0.001), high TAT volume (p = 0.013), and high SAT volume (p = 0.006). In multivariate survival analysis, SM remained an independent prognostic factor (p = 0.039), while TAT and SAT volumes no longer showed predictive ability. This predictive role of SM was confirmed in a subgroup analysis of patients with BCLC stage B.
Conclusions: SM is an independent prognostic factor for survival prediction. Thus, the integration of SM into novel scoring systems could potentially improve survival prediction and clinical decision-making. Fully automated approaches are needed to foster the implementation of this imaging biomarker into daily routine.
Impact and implications: Body composition assessment parameters, especially skeletal muscle volume, have been identified as relevant prognostic factors for many diseases and treatments. In this study, skeletal muscle volume has been identified as an independent prognostic factor for patients with hepatocellular carcinoma undergoing transarterial chemoembolization. Therefore, skeletal muscle volume as a metaparameter could play a role as an opportunistic biomarker in holistic patient assessment and be integrated into decision support systems. Workflow integration with artificial intelligence is essential for automated, quantitative body composition assessment, enabling broad availability in multidisciplinary case discussions.
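A minimal sketch of a multivariate survival analysis of this kind (using the lifelines library on synthetic data; column names, covariates, and effect sizes are illustrative assumptions, not the study's dataset): fit a Cox proportional-hazards model with normalized skeletal muscle volume alongside established covariates and inspect whether SM remains an independent predictor.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(7)
n = 754  # cohort size matches the study; the data here are synthetic

# Synthetic stand-ins: normalized BCA parameters plus two placeholder covariates.
df = pd.DataFrame({
    "sm_norm": rng.normal(50, 10, n),        # skeletal muscle volume per abdominal slice
    "tat_norm": rng.normal(200, 50, n),      # total adipose tissue per slice
    "afp_log": rng.normal(2, 1, n),          # hypothetical established survival estimate
    "bclc_b": rng.integers(0, 2, n),         # BCLC stage B indicator
})

# Simulate survival times where higher muscle volume lowers the hazard.
hazard = np.exp(-0.03 * (df["sm_norm"] - 50) + 0.2 * df["bclc_b"])
df["time_months"] = rng.exponential(24 / hazard)
df["event"] = (rng.random(n) < 0.8).astype(int)   # 1 = death observed, 0 = censored

cph = CoxPHFitter()
cph.fit(df, duration_col="time_months", event_col="event")
cph.print_summary()                               # hazard ratios and p-values per covariate
```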