Document Type
- Article (24)
- Working Paper (8)
- Part of Periodical (4)
Keywords
- Artificial Intelligence (3)
- Bitcoin (2)
- COVID-19 news (2)
- Comments disabled (2)
- Cryptocurrency (2)
- Financial Institutions (2)
- Machine learning (2)
- Adoption (1)
- Advertisement disclosure (1)
- Advertising performance (1)
- Algorithmic Discrimination (1)
- Algorithmic transparency (1)
- Artificial intelligence (1)
- Batch Learning (1)
- Belief up-dating (1)
- Causal Machine Learning (1)
- Collaboration network (1)
- Collaboration types (1)
- Complementary mobility services (1)
- Core-component reuse (1)
- Discrete choice experiment (1)
- Dual response (1)
- Economics (1)
- Electric vehicles (1)
- Enriched Digital Footprint (1)
- Experts (1)
- Explainable machine learning (1)
- Feedback loop (1)
- Game Theory (1)
- Gatekeeper position (1)
- Green Nudging (1)
- Heavy and light users (1)
- Influencer marketing (1)
- Information processing (1)
- Information systems (1)
- Knowledge (1)
- Location-based games (1)
- Longitudinal data (1)
- Machine teaching (1)
- Mobile games (1)
- Product life cycle (1)
- Product returns (1)
- Sample-based longitudinal study (1)
- Sentiment Analysis (1)
- Sentiment analysis (1)
- Social networking site (1)
- Stock Markets (1)
- Stock markets (1)
- Usage intensity (1)
- XAI (1)
- base stations (1)
- batteries (1)
- conjoint analysis (1)
- consumer behavior (1)
- cooperation (1)
- deep learning (1)
- device-to-device communication (1)
- economic rationality (1)
- financial decision support (1)
- goal orientation (1)
- internet (1)
- interoperability (1)
- large language models (1)
- mobile communication (1)
- performance evaluation (1)
- prediction (1)
- receivers (1)
- smart home (1)
- smart living (1)
- user preferences (1)
- user study (1)
- web of things (1)
- willingness to forward (1)
- wireless communication (1)
- wireless networks (1)
New technologies like grid computing, which can connect resources at diverse locations, are increasingly adopted by organizations. Such technologies can trigger linkages both between organizations and between different departments within a single organization. We develop a model that accounts for both inter- and intra-organizational influence factors on the adoption process and empirically identifies the most significant influence factors.
Using experimental data from a comprehensive field study, we explore the causal effects of algorithmic discrimination on economic efficiency and social welfare. We harness economic, game-theoretic, and state-of-the-art machine learning concepts allowing us to overcome the central challenge of missing counterfactuals, which generally impedes assessing economic downstream consequences of algorithmic discrimination. This way, we are able to precisely quantify downstream efficiency and welfare ramifications, which provides us with a unique opportunity to assess whether the introduction of an AI system is actually desirable. Our results highlight that AI systems’ capabilities in enhancing welfare critically depend on the degree of inherent algorithmic bias. While an unbiased system in our setting outperforms humans and creates substantial welfare gains, the positive impact steadily decreases and ultimately reverses the more biased an AI system becomes. We show that this relation is particularly concerning in selective-labels environments, i.e., settings where outcomes are only observed if decision-makers take a particular action so that the data is selectively labeled, because commonly used technical performance metrics like the precision measure are prone to be deceptive. Finally, our results show that continued learning, by creating feedback loops, can remedy algorithmic discrimination and associated negative effects over time.
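The selective-labels problem described above can be made concrete with a small simulation. The following sketch uses entirely synthetic numbers (not the study's data): because outcomes are observed only for approved applicants, precision computed on the labeled subset can look excellent even for a rule that systematically denies equally qualified applicants from one group.

```python
import numpy as np

# Synthetic illustration of the selective-labels problem: precision is
# computed only on approved (labeled) cases, so it can flatter a biased rule.
rng = np.random.default_rng(2)
n = 100_000
quality = rng.random(n)                      # true repayment propensity
group = rng.integers(0, 2, size=n)           # a protected attribute
repaid = rng.random(n) < quality             # outcome, observed only if approved

# A biased rule: approve high-quality applicants, but only from group 0.
approved = (quality > 0.5) & (group == 0)

# Precision on the selectively labeled (approved) subset looks strong...
precision = repaid[approved].mean()
# ...while an equally large pool of good applicants was never approved at all.
missed = ((quality > 0.5) & (group == 1)).mean()
print(f"precision={precision:.2f}, share of good applicants denied={missed:.2f}")
```

Here precision lands near 0.75 regardless of the discrimination, which is exactly why the abstract warns that such metrics can be deceptive in selective-labels settings.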
This article discusses the counterpart of interactive machine learning, i.e., human learning while being in the loop in a human-machine collaboration. For such cases we propose the use of a Contradiction Matrix to assess the overlap and the contradictions of human and machine predictions. We show in a small-scale user study with experts in the area of pneumology (1) that machine-learning based systems can classify X-rays with respect to diseases with a meaningful accuracy, (2) that humans partly use contradictions to reconsider their initial diagnosis, and (3) that this leads to a higher overlap between human and machine diagnoses by the end of the collaboration. We argue that disclosure of information on diagnosis uncertainty can be beneficial to make the human expert reconsider her or his initial assessment, which may ultimately result in a deliberate agreement. In the light of the observations from our project, it becomes apparent that collaborative learning in such a human-in-the-loop scenario could lead to mutual benefits for both human learning and interactive machine learning. Bearing the differences in reasoning and learning processes of humans and intelligent systems in mind, we argue that interdisciplinary research teams have the best chances at tackling this undertaking and generating valuable insights.
COVID-19 has again tightened its grip around the world and on the health system. This article gives an introduction to explainable interactive machine learning and provides insights on how this method may not only help in engineering more powerful AI systems, but also ease the burden of viral strains on the healthcare system.
The mobile games business is an ever-increasing sub-sector of the entertainment industry. Due to its high profitability but also high risk and competitive atmosphere, game publishers need to develop strategies that allow them to release new products at a high rate, but without compromising the already short lifespan of the firms' existing games. Successful game publishers must enlarge their user base by continually releasing new and entertaining games, while simultaneously motivating the current user base of existing games to remain active for more extended periods. Since the core-component reuse strategy has proven successful in other software products, this study investigates the advantages and drawbacks of this strategy in mobile games. Drawing on the widely accepted Product Life Cycle concept, the study investigates whether the introduction of a new mobile game built with core-components of an existing mobile game curtails the incumbent's product life cycle. Based on real and granular data on the gaming activity of a popular mobile game, the authors find that by promoting multi-homing (i.e., by smartly interlinking the incumbent and new product with each other so that users start consuming both games in parallel), the core-component reuse strategy can prolong the lifespan of the incumbent game.
In current discussions on large language models (LLMs) such as GPT, understanding their ability to emulate facets of human intelligence stands central. Using behavioral economic paradigms and structural models, we investigate GPT’s cooperativeness in human interactions and assess its rational goal-oriented behavior. We discover that GPT cooperates more than humans and has overly optimistic expectations about human cooperation. Intriguingly, additional analyses reveal that GPT’s behavior isn’t random; it displays a level of goal-oriented rationality surpassing human counterparts. Our findings suggest that GPT hyper-rationally aims to maximize social welfare, coupled with a drive for self-preservation. Methodologically, our research highlights how structural models, typically employed to decipher human behavior, can illuminate the rationality and goal-orientation of LLMs. This opens a compelling path for future research into the intricate rationality of sophisticated, yet enigmatic artificial agents.
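The behavioral-economic logic behind such a study can be sketched in a few lines. The payoff numbers below are illustrative, not the paper's: in a one-shot prisoner's dilemma, a purely own-payoff-maximizing agent defects for any belief about the other player's cooperation, so observed cooperation is evidence of an objective beyond own payoff (e.g., social welfare).

```python
# Illustrative prisoner's-dilemma payoffs (hypothetical, not from the paper).
# PAYOFF[(my_action, their_action)] is the row player's payoff.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 4, ("D", "D"): 1,
}

def expected_payoff(action: str, p_coop: float) -> float:
    """Expected payoff of `action` if the opponent cooperates with prob p_coop."""
    return p_coop * PAYOFF[(action, "C")] + (1 - p_coop) * PAYOFF[(action, "D")]

def best_response(p_coop: float) -> str:
    """Own-payoff-maximizing action given a belief about the opponent."""
    return max(("C", "D"), key=lambda a: expected_payoff(a, p_coop))

# With these payoffs, defection dominates for every belief, so a cooperating
# agent must be optimizing something else, e.g. joint welfare.
print(best_response(0.9))
```

Structural models generalize this idea: they back out the utility weights (own payoff vs. welfare) that best rationalize the observed choices.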
Device-to-device (D2D) communication is an innovative solution for improving wireless network performance to efficiently handle the ever-increasing mobile data traffic. Communication takes place directly between two devices that are in each other’s transmission range. So far, research has focused on the technical challenges of implementing this technology and assumes a user’s general willingness to participate as forwarder in this technology. However, this simplifying assumption is not realistic, as willingness to participate in D2D communication can vary depending on the user. In this work, we consider the scenario that a user can act as a forwarder for a receiver who is not directly or insufficiently reached by the base station and accordingly has no or poor Internet connection. We take a user-centric approach and investigate the willingness to provide an Internet connection as a forwarder. We are the first to investigate user preferences for D2D communication using a choice-based conjoint analysis. Our results, based on a representative sample of potential users (N=181), show that the social relationship between the potential forwarder and the receiver has the greatest impact on the potential forwarder’s decision to provide an Internet connection to the receiver, accepting sacrifices in terms of additional battery consumption and reduced own service performance. In a detailed segment analysis, we observe significant preference differences depending on smartphone usage behavior and user age. Taking the corresponding preferences into account when matching forwarders and receivers can further increase technology adoption.
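A choice-based conjoint analysis like the one described above infers attribute part-worths from repeated choices between profiles. The following is a minimal self-contained sketch on synthetic data (the attribute names, sample size, and coefficients are hypothetical, not the study's): each task offers two forwarding profiles, and a conditional logit fitted by gradient ascent recovers the part-worth utilities.

```python
import numpy as np

# Synthetic choice-based conjoint sketch: recover part-worth utilities from
# pairwise choices via a conditional (binary) logit.
rng = np.random.default_rng(0)
n_tasks, n_attrs = 2000, 3            # e.g. [social closeness, battery cost, speed penalty]
true_beta = np.array([2.0, -1.0, -0.5])  # hypothetical part-worths

X_a = rng.normal(size=(n_tasks, n_attrs))   # attribute profile of option A
X_b = rng.normal(size=(n_tasks, n_attrs))   # attribute profile of option B
diff = X_a - X_b                            # utility difference drives choice
p_choose_a = 1 / (1 + np.exp(-diff @ true_beta))
y = (rng.random(n_tasks) < p_choose_a).astype(float)  # 1 = respondent chose A

# Plain gradient ascent on the mean log-likelihood.
beta = np.zeros(n_attrs)
for _ in range(1000):
    p = 1 / (1 + np.exp(-diff @ beta))
    beta += 1.0 * diff.T @ (y - p) / n_tasks

print(np.round(beta, 2))  # estimates close to true_beta up to sampling noise
```

The sign and magnitude of each recovered coefficient is what a conjoint study reports as the relative importance of an attribute, e.g. the dominant weight of the social relationship found in the paper.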
In the upcoming years, the internet of things (IoT) will enrich daily life. The combination of artificial intelligence (AI) and highly interoperable systems will bring context-sensitive multi-domain services to reality. This paper describes a concept for an AI-based smart living platform with openHAB, a smart home middleware, and Web of Things (WoT) as key components of our approach. The platform concept considers different stakeholders, i.e., the housing industry, service providers, and tenants. These activities are part of the ForeSight project, an AI-driven, context-sensitive smart living platform.
Optimal investment decisions by institutional investors require accurate predictions with respect to the development of stock markets. Motivated by previous research that revealed the unsatisfactory performance of existing stock market prediction models, this study proposes a novel prediction approach. Our proposed system combines Artificial Intelligence (AI) with data from Virtual Investment Communities (VICs) and leverages VICs’ ability to support the process of predicting stock markets. An empirical study with two different models using real data shows the potential of the AI-based system with VICs information as an instrument for stock market predictions. VICs can be a valuable addition but our results indicate that this type of data is only helpful in certain market phases.
Chatbots become human(like): the influence of gender on cooperative interactions with chatbots
(2019)
Current technological advancements of conversational agents (CAs) promise new potentials for human-computer collaboration. Yet, both practitioners and researchers face challenges in designing these information systems such that CAs not only increase in intelligence but also in effectiveness. Through our research endeavour, we provide new and counterintuitive insights that are crucial for the effective design of cooperative CAs.
Having a gatekeeper position in a collaborative network offers firms great potential to gain competitive advantages. However, it is not well understood what kind of collaborations are associated with such a position. Conceptually grounded in social network theory, this study draws on the resource-based view and the relational factors view to investigate which types of collaboration characterize firms that are in a gatekeeper position, which ultimately could improve firm performance in subsequent periods. The empirical analysis utilizes a unique longitudinal data set to examine dynamic network formation. We used a data crawling approach to reconstruct collaboration networks among the 500 largest companies in Germany over nine years and matched these networks with performance data. The results indicate that firms in gatekeeper positions often engage in medium-intensity collaborations and are less likely to engage in weak-intensity collaborations. Strong-intensity collaborations are not related to the likelihood of being a gatekeeper. Our study further reveals that a firm's knowledge base is an important moderator and that this knowledge base can increase the benefits of having a gatekeeper position in terms of firm performance.
Business practitioners increasingly use Artificial Intelligence (AI) applications to assist customers in making decisions due to their higher prediction quality. Yet, customers are frequently reluctant to rely on advice generated by machines, especially when their decision is at stake. Our study proposes a solution, which is to bring a human expert into the loop of machine advice. We empirically test whether customers are more accepting of expert-AI collaborative advice than of expert or AI advice alone.
Recent regulatory measures such as the European Union’s AI Act require artificial intelligence (AI) systems to be explainable. As such, understanding how explainability impacts human-AI interaction and pinpointing the specific circumstances and groups affected, is imperative. In this study, we devise a formal framework and conduct an empirical investigation involving real estate agents to explore the complex interplay between explainability of and delegation to AI systems. On an aggregate level, our findings indicate that real estate agents display a higher propensity to delegate apartment evaluations to an AI system when its workings are explainable, thereby surrendering control to the machine. However, at an individual level, we detect considerable heterogeneity. Agents possessing extensive domain knowledge are generally more inclined to delegate decisions to AI and minimize their effort when provided with explanations. Conversely, agents with limited domain knowledge only exhibit this behavior when explanations correspond with their preconceived notions regarding the relationship between apartment features and listing prices. Our results illustrate that the introduction of explainability in AI systems may transfer the decision-making control from humans to AI under the veil of transparency, which has notable implications for policy makers and practitioners that we discuss.
With free delivery of products virtually being a standard in E-commerce, product returns pose a major challenge for online retailers and society. For retailers, product returns involve significant transportation, labor, disposal, and administrative costs. From a societal perspective, product returns contribute to greenhouse gas emissions and packaging disposal and are often a waste of natural resources. Therefore, reducing product returns has become a key challenge. This paper develops and validates a novel smart green nudging approach to tackle the problem of product returns during customers’ online shopping processes. We combine a green nudge with a novel data enrichment strategy and a modern causal machine learning method. We first run a large-scale randomized field experiment in the online shop of a German fashion retailer to test the efficacy of a novel green nudge. Subsequently, we fuse the data from about 50,000 customers with publicly-available aggregate data to create what we call enriched digital footprints and train a causal machine learning system capable of optimizing the administration of the green nudge. We report two main findings: First, our field study shows that the large-scale deployment of a simple, low-cost green nudge can significantly reduce product returns while increasing retailer profits. Second, we show how a causal machine learning system trained on the enriched digital footprint can amplify the effectiveness of the green nudge by “smartly” administering it only to certain types of customers. Overall, this paper demonstrates how combining a low-cost marketing instrument, a privacy-preserving data enrichment strategy, and a causal machine learning method can create a win-win situation from both an environmental and economic perspective by simultaneously reducing product returns and increasing retailers’ profits.
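The "smart" administration step described above — training a causal machine learning system to decide who receives the nudge — can be sketched with a simple T-learner on synthetic data. Everything below is hypothetical (features, effect sizes, thresholds are not the study's): one outcome model is fitted per experimental arm, and the nudge is targeted at customers whose estimated individual treatment effect on the return probability is sufficiently negative.

```python
import numpy as np

# T-learner sketch on synthetic data: estimate each customer's treatment
# effect of a nudge on return probability, then target only where it helps.
rng = np.random.default_rng(1)
n = 10_000
x = rng.normal(size=(n, 2))                  # customer features
t = rng.integers(0, 2, size=n)               # 1 = customer saw the green nudge
tau = -0.2 * (x[:, 0] > 0)                   # true effect: helps only some customers
p_return = np.clip(0.4 + 0.1 * x[:, 1] + t * tau, 0, 1)
y = (rng.random(n) < p_return).astype(float)  # 1 = product was returned

def fit_linear(X, y):
    """OLS with intercept via least squares (stand-in for any outcome model)."""
    A = np.c_[np.ones(len(X)), X]
    return np.linalg.lstsq(A, y, rcond=None)[0]

def predict(w, X):
    return np.c_[np.ones(len(X)), X] @ w

w1 = fit_linear(x[t == 1], y[t == 1])        # outcome model under treatment
w0 = fit_linear(x[t == 0], y[t == 0])        # outcome model under control
cate = predict(w1, x) - predict(w0, x)       # estimated individual effect

targeted = cate < -0.05                      # nudge only likely responders
print(f"share of customers targeted: {targeted.mean():.2f}")
```

In practice the two outcome models would be flexible learners trained on the enriched digital footprint rather than OLS, but the targeting logic — estimate the conditional effect, then condition the nudge on it — is the same.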
This paper explores the interplay of feature-based explainable AI (XAI) techniques, information processing, and human beliefs. Using a novel experimental protocol, we study the impact of providing users with explanations about how an AI system weighs inputted information to produce individual predictions (LIME) on users’ weighting of information and beliefs about the task-relevance of information. On the one hand, we find that feature-based explanations cause users to alter their mental weighting of available information according to observed explanations. On the other hand, explanations lead to asymmetric belief adjustments that we interpret as a manifestation of the confirmation bias. Trust in the prediction accuracy plays an important moderating role for XAI-enabled belief adjustments. Our results show that feature-based XAI not only superficially influences decisions but actually changes internal cognitive processes, bearing the potential to manipulate human beliefs and reinforce stereotypes. Hence, the current regulatory efforts that aim at enhancing algorithmic transparency may benefit from going hand in hand with measures ensuring the exclusion of sensitive personal information in XAI systems. Overall, our findings put assertions that XAI is the silver bullet solving all of AI systems’ (black box) problems into perspective.
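The core mechanism of a LIME-style explanation — the feature weights shown to users — can be illustrated without the LIME library itself. The sketch below uses a hypothetical black-box function (not the paper's model): sample perturbations around one instance, weight them by proximity, and fit a weighted linear surrogate; its coefficients are the local feature weights a user would see.

```python
import numpy as np

# LIME-style local surrogate sketch: explain one prediction of a black box
# by a proximity-weighted linear fit on perturbed inputs.
def black_box(X):
    """Stand-in for an opaque model: a nonlinear scoring function."""
    return np.tanh(2 * X[:, 0]) + 0.5 * X[:, 1] ** 2

def explain(instance, n_samples=5000, width=1.0, seed=0):
    rng = np.random.default_rng(seed)
    # Perturb the instance, then weight samples by closeness to it.
    Z = instance + rng.normal(scale=0.5, size=(n_samples, instance.size))
    w = np.exp(-np.sum((Z - instance) ** 2, axis=1) / width**2)
    # Weighted least squares: scale rows and targets by sqrt(weight).
    sw = np.sqrt(w)
    A = np.c_[np.ones(n_samples), Z]
    coef, *_ = np.linalg.lstsq(A * sw[:, None], black_box(Z) * sw, rcond=None)
    return coef[1:]                      # local weight of each feature

weights = explain(np.array([0.0, 2.0]))
print(np.round(weights, 2))  # both features get positive local weights here
```

It is exactly these surrogate coefficients that the experiment shows users internalize into their own mental weighting of information.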
Advances in Machine Learning (ML) led organizations to increasingly implement predictive decision aids intended to improve employees’ decision-making performance. While such systems improve organizational efficiency in many contexts, they might be a double-edged sword when there is the danger of a system discontinuance. Following cognitive theories, the provision of ML-based predictions can adversely affect the development of decision-making skills that come to light when people lose access to the system. The purpose of this study is to put this assertion to the test. Using a novel experiment specifically tailored to deal with organizational obstacles and endogeneity concerns, we show that the initial provision of ML decision aids can latently prevent the development of decision-making skills which later becomes apparent when the system gets discontinued. We also find that the degree to which individuals 'blindly' trust observed predictions determines the ultimate performance drop in the post-discontinuance phase. Our results suggest that making it clear to people that ML decision aids are imperfect can have its benefits especially if there is a reasonable danger of (temporary) system discontinuances.
The present study investigates the moderating effect of usage intensity of the social networking site (SNS) Instagram (IG) on the influence of advertisement disclosure types on advertising performance. A national sample (N = 566) participated in a randomized online experiment including a real influencer and followers in order to investigate how different advertisement disclosure types affect advertising performance and how usage intensity moderates this effect. We find that disclosing an influencer’s postings with “#ad” increases the trustworthiness of the influencer and the general credibility of the posting for heavy users, but not for light users. Followership of a user has been found to strongly improve all researched variables (attitude toward product placement, trustworthiness of the spokesperson, and general credibility of the posting). This study adds to the literature the first distinction between heavy and light usage intensity, and between followership states of an IG user, when examining the effects of advertisement disclosure types on advertising performance. To conclude, we present a number of recommendations regarding how advertisers, influencers, and SNS providers should develop strategies for monitoring, understanding, and responding to different social media users, e.g., to closely monitor an influencer’s audience to identify heavy users and optimally target them.
Artificial Intelligence (AI) and Machine Learning (ML) are currently hot topics in industry and business practice, while management-oriented research disciplines seem reluctant to adopt these sophisticated data analytics methods as research instruments. Even the Information Systems (IS) discipline with its close connections to Computer Science seems to be conservative when conducting empirical research endeavors. To assess the magnitude of the problem and to understand its causes, we conducted a bibliographic review on publications in high-level IS journals. We reviewed 1,838 articles that matched corresponding keyword-queries in journals from the AIS senior scholar basket, Electronic Markets and Decision Support Systems (Ranked B). In addition, we conducted a survey among IS researchers (N = 110). Based on the findings from our sample we evaluate different potential causes that could explain why ML methods are rather underrepresented in top-tier journals and discuss how the IS discipline could successfully incorporate ML methods in research undertakings.
Nowadays, firms lack information to derive the share of wallet, a vital metric that identifies how much additional spending a firm could capture from each customer. However, decoding Blockchain data enables observing all transactions of each wallet (i.e., each customer) on the Ethereum NFT market. To shed light on the share of wallet, we analyzed 22.7 million transactions from over 1.3 million customers across eight competing firms on the Ethereum NFT market.
The recent COVID-19 pandemic represents an unprecedented worldwide event to study the influence of related news on the financial markets, especially during the early stage of the pandemic when information on the new threat came rapidly and was complex for investors to process. In this paper, we investigate whether the flow of news on COVID-19 had an impact on forming market expectations. We analyze 203,886 online articles dealing with COVID-19 and published on three news platforms (MarketWatch.com, NYTimes.com, and Reuters.com) in the period from January to June 2020. Using machine learning techniques, we extract the news sentiment through a financial market-adapted BERT model that enables recognizing the context of each word in a given item. Our results show that there is a statistically significant and positive relationship between sentiment scores and the S&P 500 market. Furthermore, we provide evidence that sentiment components and news categories on NYTimes.com were differently related to market returns.
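The pipeline behind such a study — score each article, then aggregate to a daily sentiment series — can be illustrated with a deliberately simplified stand-in. The paper uses a finance-adapted, context-aware BERT model; the sketch below substitutes a tiny hypothetical word list (the dates, headlines, and word lists are invented) purely to show the article-to-daily aggregation step.

```python
# Simplified stand-in for BERT-based sentiment scoring: a lexicon scorer
# aggregated to a daily series (illustrative word lists and headlines only).
POS = {"rally", "recovery", "gain", "optimism"}
NEG = {"crash", "lockdown", "loss", "fear"}

def article_sentiment(text: str) -> float:
    """Score one article in [-1, 1] from counts of signal words."""
    words = text.lower().split()
    pos = sum(w in POS for w in words)
    neg = sum(w in NEG for w in words)
    return (pos - neg) / max(pos + neg, 1)

articles = {
    "2020-03-16": ["markets crash as lockdown fear spreads", "loss deepens"],
    "2020-04-06": ["stocks rally on recovery optimism"],
}
# Average article scores per day; this daily series is what gets related
# to market returns in a regression.
daily = {d: sum(map(article_sentiment, a)) / len(a) for d, a in articles.items()}
print(daily)
```

A contextual model like BERT replaces the word-list scorer with per-article predictions, but the downstream aggregation and the regression of returns on the daily series follow the same pattern.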