Optimal investment decisions by institutional investors require accurate predictions of stock market developments. Motivated by previous research revealing the unsatisfactory performance of existing stock market prediction models, this study proposes a novel prediction approach. Our proposed system combines Artificial Intelligence (AI) with data from Virtual Investment Communities (VICs) and leverages VICs’ ability to support the process of predicting stock markets. An empirical study with two different models using real data shows the potential of the AI-based system with VICs information as an instrument for stock market predictions. VICs can be a valuable addition, but our results indicate that this type of data is only helpful in certain market phases.
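The kind of VIC-derived input described above can be illustrated with a toy feature: a bullishness score aggregated from labelled community posts. This is a hypothetical sketch; the label set and scoring rule are illustrative and not the study's actual model.

```python
# Hypothetical sketch: turn Virtual Investment Community (VIC) posts into a
# single bullishness feature that a prediction model could consume.
# The labels and the scoring rule are illustrative only.

def bullishness(messages):
    """messages: iterable of (text, label) pairs, label in {'buy', 'sell', 'hold'}.
    Returns a score in [-1, 1]; positive means net bullish sentiment."""
    buys = sum(1 for _, label in messages if label == "buy")
    sells = sum(1 for _, label in messages if label == "sell")
    if buys + sells == 0:
        return 0.0  # no directional signal in this batch of posts
    return (buys - sells) / (buys + sells)
```

A score like this could be computed per trading day and fed to a model alongside conventional market features, which is one way "certain market phases" could be detected: phases where the signal carries information.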
Chatbots become human(like): the influence of gender on cooperative interactions with chatbots
(2019)
Current technological advancements of conversational agents (CAs) promise new potentials for human-computer collaborations. Yet, both practitioners and researchers face challenges in designing these information systems such that CAs not only increase in intelligence but also in effectiveness. Through our research endeavour, we provide new and counterintuitive insights that are crucial for the effective design of cooperative CAs.
Having a gatekeeper position in a collaborative network offers firms great potential to gain competitive advantages. However, it is not well understood what kinds of collaborations are associated with such a position. Conceptually grounded in social network theory, this study draws on the resource-based view and the relational factors view to investigate which types of collaboration characterize firms that are in a gatekeeper position, which ultimately could improve firm performance in subsequent periods. The empirical analysis utilizes a unique longitudinal data set to examine dynamic network formation. We used a data crawling approach to reconstruct collaboration networks among the 500 largest companies in Germany over nine years and matched these networks with performance data. The results indicate that firms in gatekeeper positions often engage in medium-intensity collaborations and are less likely to engage in weak-intensity collaborations. Strong-intensity collaborations are not related to the likelihood of being a gatekeeper. Our study further reveals that a firm's knowledge base is an important moderator and that this knowledge base can increase the benefits of having a gatekeeper position in terms of firm performance.
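A gatekeeper, in network terms, brokers between partners that are not themselves connected. As an illustrative sketch only (not the study's actual measure), one simple local indicator is the number of unlinked partner pairs a firm sits between:

```python
# Illustrative sketch: count the open triads a firm brokers, i.e. pairs of
# its collaboration partners that have no direct tie to each other.
# A higher count suggests a more gatekeeper-like position.
# This is a toy local measure, not the paper's operationalization.

def brokered_pairs(adj, node):
    """adj: dict mapping each node to a set of its partners (undirected)."""
    partners = sorted(adj.get(node, set()))
    count = 0
    for i in range(len(partners)):
        for j in range(i + 1, len(partners)):
            # The pair is "brokered" only if the two partners are unlinked.
            if partners[j] not in adj.get(partners[i], set()):
                count += 1
    return count
```

For example, a firm connected to two companies that do not collaborate with each other brokers one pair; adding a direct tie between those two companies removes the brokerage.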
Business practitioners increasingly use Artificial Intelligence (AI) applications to assist customers in making decisions due to their higher prediction quality. Yet, customers are frequently reluctant to rely on advice generated by machines, especially when their decision is at stake. Our study proposes a solution: bringing a human expert into the loop of machine advice. We empirically test whether customers are more accepting of expert-AI collaborative advice than of expert or AI advice alone.
Recent regulatory measures such as the European Union’s AI Act require artificial intelligence (AI) systems to be explainable. As such, understanding how explainability impacts human-AI interaction and pinpointing the specific circumstances and groups affected is imperative. In this study, we devise a formal framework and conduct an empirical investigation involving real estate agents to explore the complex interplay between explainability of and delegation to AI systems. On an aggregate level, our findings indicate that real estate agents display a higher propensity to delegate apartment evaluations to an AI system when its workings are explainable, thereby surrendering control to the machine. However, at an individual level, we detect considerable heterogeneity. Agents possessing extensive domain knowledge are generally more inclined to delegate decisions to AI and minimize their effort when provided with explanations. Conversely, agents with limited domain knowledge only exhibit this behavior when explanations correspond with their preconceived notions regarding the relationship between apartment features and listing prices. Our results illustrate that the introduction of explainability in AI systems may transfer decision-making control from humans to AI under the veil of transparency, which has notable implications for policy makers and practitioners that we discuss.
With free delivery of products virtually being a standard in E-commerce, product returns pose a major challenge for online retailers and society. For retailers, product returns involve significant transportation, labor, disposal, and administrative costs. From a societal perspective, product returns contribute to greenhouse gas emissions and packaging disposal and are often a waste of natural resources. Therefore, reducing product returns has become a key challenge. This paper develops and validates a novel smart green nudging approach to tackle the problem of product returns during customers’ online shopping processes. We combine a green nudge with a novel data enrichment strategy and a modern causal machine learning method. We first run a large-scale randomized field experiment in the online shop of a German fashion retailer to test the efficacy of a novel green nudge. Subsequently, we fuse the data from about 50,000 customers with publicly available aggregate data to create what we call enriched digital footprints and train a causal machine learning system capable of optimizing the administration of the green nudge. We report two main findings: First, our field study shows that the large-scale deployment of a simple, low-cost green nudge can significantly reduce product returns while increasing retailer profits. Second, we show how a causal machine learning system trained on the enriched digital footprint can amplify the effectiveness of the green nudge by “smartly” administering it only to certain types of customers. Overall, this paper demonstrates how combining a low-cost marketing instrument, a privacy-preserving data enrichment strategy, and a causal machine learning method can create a win-win situation from both an environmental and economic perspective by simultaneously reducing product returns and increasing retailers’ profits.
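The "smart" administration step described above — using experiment data to nudge only customer types for whom the nudge helps — can be sketched with a minimal segment-level uplift estimate in the spirit of a T-learner. All names, fields, and the segmentation are hypothetical; the paper's actual system is a full causal machine learning pipeline.

```python
# Minimal sketch (hypothetical, not the paper's pipeline): estimate the
# nudge's effect on return rates per customer segment from randomized
# experiment data, then administer the nudge only where it reduces returns.

from collections import defaultdict

def estimate_uplift(records):
    """records: iterable of (segment, treated: bool, returned: bool).
    Returns per-segment estimated change in return rate under the nudge."""
    # segment -> treated flag -> [number of returns, number of customers]
    stats = defaultdict(lambda: {True: [0, 0], False: [0, 0]})
    for segment, treated, returned in records:
        stats[segment][treated][0] += int(returned)
        stats[segment][treated][1] += 1
    uplift = {}
    for segment, groups in stats.items():
        rate = {t: (g[0] / g[1]) if g[1] else 0.0 for t, g in groups.items()}
        # Negative uplift means the nudge lowered the return rate.
        uplift[segment] = rate[True] - rate[False]
    return uplift

def should_nudge(segment, uplift):
    """Administer the nudge only where it is estimated to reduce returns."""
    return uplift.get(segment, 0.0) < 0
```

In practice the segments would be replaced by the enriched digital footprint features and the difference-in-means by a learned causal model, but the targeting logic — treat only where the estimated effect is beneficial — is the same.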
This paper explores the interplay of feature-based explainable AI (XAI) techniques, information processing, and human beliefs. Using a novel experimental protocol, we study the impact of providing users with explanations about how an AI system weighs inputted information to produce individual predictions (LIME) on users’ weighting of information and beliefs about the task-relevance of information. On the one hand, we find that feature-based explanations cause users to alter their mental weighting of available information according to observed explanations. On the other hand, explanations lead to asymmetric belief adjustments that we interpret as a manifestation of the confirmation bias. Trust in the prediction accuracy plays an important moderating role for XAI-enabled belief adjustments. Our results show that feature-based XAI not only superficially influences decisions but really changes internal cognitive processes, bearing the potential to manipulate human beliefs and reinforce stereotypes. Hence, the current regulatory efforts that aim at enhancing algorithmic transparency may benefit from going hand in hand with measures ensuring the exclusion of sensitive personal information in XAI systems. Overall, our findings put assertions that XAI is the silver bullet solving all of AI systems’ (black box) problems into perspective.
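Feature-based explanations of the LIME kind report per-feature weights describing how a model behaves around one instance; LIME itself fits a weighted linear surrogate on random perturbations. As a stylized, hypothetical stand-in for that local fit, finite differences recover the same local weights for a smooth model (the toy pricing model below is illustrative, not from the study):

```python
# Stylized sketch of a feature-based local explanation: per-feature
# sensitivities of a black-box model around one instance. Real LIME fits a
# weighted linear surrogate on sampled perturbations; finite differences
# are used here for brevity and give the same weights for a linear model.

def local_feature_weights(predict, instance, eps=1e-3):
    """Return the local sensitivity of `predict` to each feature of `instance`."""
    weights = []
    for i in range(len(instance)):
        hi = list(instance)
        hi[i] += eps
        lo = list(instance)
        lo[i] -= eps
        # Central difference approximates the local partial derivative.
        weights.append((predict(hi) - predict(lo)) / (2 * eps))
    return weights
```

Showing such weights to a user (e.g. "size contributes 3000 per square meter to the predicted listing price" in a toy model) is exactly the kind of signal the study finds users fold into their own mental weighting of information.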