• Title/Summary/Keyword: Trust in AI

How Trust in Human-like AI-based Service on Social Media Will Influence Customer Engagement: Exploratory Research to Develop the Scale of Trust in Human-like AI-based Service

  • Jin Jingchuan; Shali Wu
    • Asia Marketing Journal / v.26 no.2 / pp.129-144 / 2024
  • This research examines how people's trust in human-like AI-based services influences customer engagement (CE). It discusses the relationship between trust and CE and explores how trust in AI affects CE when consumers lack knowledge of the company or brand. Items from the philosophical study of trust were extracted to build a scale suitable for trust in AI. The scale's reliability was ensured, and six components of trust in AI were merged into three dimensions: trust based on quality assurance, risk-taking, and corporate social responsibility. Trust based on quality assurance and risk-taking was verified to positively impact customer engagement, and feelings about the AI-based service fully mediate between all three dimensions of trust in AI and CE. The new trust scale for human-like AI-based services on social media sheds light on further research, and the relationship between trust in AI and CE provides a theoretical basis for subsequent studies.
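
The abstract reports checking the scale's reliability before merging six trust components into three dimensions. A minimal sketch of the standard reliability check (Cronbach's alpha) is given below; the item names and responses are simulated for illustration and are not the paper's data.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of Likert-scale items (rows = respondents)."""
    k = items.shape[1]                         # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses for a "quality assurance" trust dimension (1-7 Likert).
rng = np.random.default_rng(0)
base = rng.integers(1, 8, size=(200, 1))
items = pd.DataFrame(
    np.clip(base + rng.integers(-1, 2, size=(200, 4)), 1, 7),
    columns=[f"qa_item{i}" for i in range(1, 5)],
)
print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")  # >= .70 is the usual threshold
```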

User Factors and Trust in ChatGPT: Investigating the Relationship between Demographic Variables, Experience with AI Systems, and Trust in ChatGPT (사용자 특성과 ChatGPT 신뢰의 관계 : 인구통계학적 변수와 AI 경험의 영향)

  • Park Yeeun; Jang Jeonghoon
    • Journal of Korea Society of Digital Industry and Information Management / v.19 no.4 / pp.53-71 / 2023
  • This study explores the relationship between various user factors and the level of trust in ChatGPT, a sophisticated language model exhibiting human-like capabilities. Specifically, we considered demographic characteristics such as age, education, gender, and major, along with factors related to previous AI experience, including duration, frequency, proficiency, perception, and familiarity. Through a survey of 140 participants (71 female, 69 male), we collected and analyzed data to examine how these user factors relate to trust in ChatGPT. Both descriptive and inferential statistical methods, including multiple linear regression models, were employed in the analysis. The findings reveal significant relationships between trust in ChatGPT and user factors such as gender, the perception of prior AI interactions, and self-evaluated proficiency. This research not only enhances our understanding of trust in artificial intelligence but also offers valuable insights for AI developers and practitioners in the field.
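
The analysis described is a multiple linear regression of trust on demographic and AI-experience factors. The sketch below illustrates that kind of model on simulated data; the column names and coding are assumptions, not the paper's.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for the 140-respondent survey (variables and coding are hypothetical).
rng = np.random.default_rng(0)
n = 140
df = pd.DataFrame({
    "gender": rng.choice(["female", "male"], size=n),
    "age": rng.integers(18, 65, size=n),
    "ai_frequency": rng.integers(1, 8, size=n),    # 1-7 Likert scales
    "ai_proficiency": rng.integers(1, 8, size=n),
    "ai_perception": rng.integers(1, 8, size=n),
})
df["trust"] = 0.3 * df["ai_perception"] + 0.2 * df["ai_proficiency"] + rng.normal(scale=1.0, size=n)

# Multiple linear regression of trust in ChatGPT on demographics and AI-experience factors.
model = smf.ols("trust ~ C(gender) + age + ai_frequency + ai_proficiency + ai_perception", data=df).fit()
print(model.summary())  # coefficients indicate which user factors relate to trust
```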

An Evaluation of Determinants to Viewer Acceptance of Artificial Intelligence-based News Anchor (인공지능(AI) 기술 기반의 뉴스 앵커에 대한 수용 의도의 선행요인 연구)

  • Shin, Ha-Yan; Kweon, Sang-Hee
    • The Journal of the Korea Contents Association / v.21 no.4 / pp.205-219 / 2021
  • The present study identified determinants of user acceptance of an artificial intelligence (AI)-based news anchor. The conceptual model included the three constructs of ability, benevolence, and integrity to determine whether they predict the trust perceived in the AI news anchor. The work further examined social presence, anthropomorphism, perceived usefulness, understanding, and trust as immediate determinants of user acceptance. The conceptual model was validated on survey data collected from 513 respondents. A scale refinement process was conducted through examination of data normality, common method bias, the structure of the latent variables, and internal consistency. In addition, a confirmatory factor analysis was performed to assess the extent to which the survey data measured the constructs adequately. The results of the structural equation model indicated that (1) the two constructs of ability and integrity significantly predicted perceived trust, and (2) anthropomorphism, perceived usefulness, and trust emerged as significant positive predictors of user acceptance of the AI-based news anchor.
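
The abstract names a confirmatory factor analysis and a structural equation model linking ability, benevolence, and integrity to perceived trust. A minimal sketch of such a model is shown below, assuming the semopy library and hypothetical item names; it illustrates the technique, not the paper's specification.

```python
import numpy as np
import pandas as pd
from semopy import Model  # a common Python SEM/CFA library with lavaan-style syntax

# Simulated stand-in for the 513-respondent survey (item names and loadings are hypothetical).
rng = np.random.default_rng(0)
n = 513
latent = {name: rng.normal(size=n) for name in ["ability", "benevolence", "integrity"]}
trust = 0.5 * latent["ability"] + 0.4 * latent["integrity"] + rng.normal(scale=0.6, size=n)
data = {}
for name, score in {**latent, "trust": trust}.items():
    for i in range(1, 4):
        data[f"{name[:2]}{i}"] = 0.8 * score + rng.normal(scale=0.5, size=n)
df = pd.DataFrame(data)

# Measurement model (CFA) plus structural paths from the three antecedents to trust.
desc = """
ability     =~ ab1 + ab2 + ab3
benevolence =~ be1 + be2 + be3
integrity   =~ in1 + in2 + in3
trust       =~ tr1 + tr2 + tr3
trust ~ ability + benevolence + integrity
"""
model = Model(desc)
model.fit(df)
print(model.inspect())  # factor loadings, path estimates, p-values
```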

The Structural Relationships between AI-based Voice Recognition Service Characteristics, Interactivity, and Intention to Use (AI기반 음성인식 서비스 특성과 상호 작용성 및 이용 의도 간의 구조적 관계)

  • Lee, SeoYoung
    • Journal of Information Technology Services / v.20 no.5 / pp.189-207 / 2021
  • Voice interaction combined with artificial intelligence is poised to revolutionize human-computer interaction with the advent of virtual assistants. This paper analyzes the effects of interactive elements of AI-based voice recognition services, such as sympathy, assurance, intimacy, and trust, on intention to use. A questionnaire was administered to 284 smartphone/smart TV users in Korea, and the collected data were analyzed by structural equation modeling and bootstrapping. The key results are as follows. First, AI-based voice recognition service characteristics such as sympathy, assurance, intimacy, and trust have positive effects on interactivity with the service. Second, interactivity with the service has a positive effect on intention to use. Third, service characteristics such as interactional enjoyment and intimacy have direct positive effects on intention to use. Fourth, service characteristics such as sympathy, assurance, intimacy, and trust have indirect positive effects on intention to use, mediated by interactivity with the AI-based voice recognition service. Investigating the factors affecting interactivity and intention to use voice recognition assistants has both practical and academic implications.
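
The mediation result (characteristics → interactivity → intention to use, tested with bootstrapping) can be illustrated with a simple percentile bootstrap of an indirect effect. The sketch below uses simulated data and hypothetical variable names, not the paper's model or data.

```python
import numpy as np
import statsmodels.api as sm

def indirect_effect(x, m, y):
    """Indirect (a*b) effect: characteristic (x) -> interactivity (m) -> intention to use (y)."""
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]                        # x -> mediator
    b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit().params[1]  # mediator -> y, controlling x
    return a * b

# Simulated stand-in for 284 respondents (variable names are hypothetical).
rng = np.random.default_rng(1)
sympathy = rng.normal(size=284)
interactivity = 0.5 * sympathy + rng.normal(scale=0.8, size=284)
intention = 0.6 * interactivity + rng.normal(scale=0.8, size=284)

# Percentile bootstrap of the indirect effect.
boots = []
for _ in range(2000):
    s = rng.choice(284, size=284, replace=True)  # resample respondent indices
    boots.append(indirect_effect(sympathy[s], interactivity[s], intention[s]))
lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")  # a CI excluding 0 indicates mediation
```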

Effects on the continuous use intention of AI-based voice assistant services: Focusing on the interaction between trust in AI and privacy concerns (인공지능 기반 음성비서 서비스의 지속이용 의도에 미치는 영향: 인공지능에 대한 신뢰와 프라이버시 염려의 상호작용을 중심으로)

  • Jang, Changki; Heo, Deokwon; Sung, WookJoon
    • Informatization Policy / v.30 no.2 / pp.22-45 / 2023
  • In research on the use of AI-based voice assistant services, problems related to users' trust and privacy protection arising from the experience of using the service are constantly being raised. The purpose of this study was to empirically investigate the effects of individual trust in AI and online privacy concerns on the continued use of AI-based voice assistants, and specifically the impact of their interaction. Question items were constructed based on previous studies, and an online survey was conducted among 405 respondents. The effects of users' trust in AI and privacy concerns on the adoption and continuous use intention of AI-based voice assistant services were analyzed using the Heckman selection model. The main findings are as follows. First, usage of AI-based voice assistant services was positively influenced by factors that promote technology acceptance, such as perceived usefulness, perceived ease of use, and social influence. Second, trust in AI had no statistically significant effect on usage behavior but had a positive effect on continuous use intention. Third, privacy concerns were confirmed to suppress continuous use intention through their interaction with trust in AI. These results suggest the need, as part of the governance for realizing digital government, to strengthen the user experience by collecting and acting on user opinions, improving trust in the technology, and alleviating users' privacy concerns. When introducing AI-based policy services, the scope of application of AI technology should be disclosed transparently through a public deliberation process, and a system that can track and evaluate privacy issues ex post, together with algorithms designed with privacy protection in mind, should be developed.
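
The Heckman selection model mentioned here is commonly estimated in two steps: a probit selection equation, then an outcome regression that includes the inverse Mills ratio. Below is a minimal two-step sketch on simulated data with hypothetical variable names; the paper's actual specification is not reproduced.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import norm

# Simulated stand-in for the 405-respondent survey (variable names are hypothetical).
rng = np.random.default_rng(0)
n = 405
df = pd.DataFrame({
    "usefulness": rng.normal(size=n),
    "ease_of_use": rng.normal(size=n),
    "social_influence": rng.normal(size=n),
    "trust_in_ai": rng.normal(size=n),
    "privacy_concern": rng.normal(size=n),
})
df["adopted"] = (0.8 * df["usefulness"] + 0.5 * df["social_influence"] + rng.normal(size=n) > 0).astype(int)
df["continuance"] = 0.6 * df["trust_in_ai"] - 0.2 * df["trust_in_ai"] * df["privacy_concern"] + rng.normal(size=n)

# Step 1: probit selection equation: who adopts the voice assistant at all.
sel_X = sm.add_constant(df[["usefulness", "ease_of_use", "social_influence"]])
probit = sm.Probit(df["adopted"], sel_X).fit(disp=0)
xb = probit.fittedvalues                 # linear predictor
df["imr"] = norm.pdf(xb) / norm.cdf(xb)  # inverse Mills ratio

# Step 2: outcome equation on adopters only, correcting for selection with the IMR.
adopters = df[df["adopted"] == 1].copy()
adopters["trust_x_privacy"] = adopters["trust_in_ai"] * adopters["privacy_concern"]  # interaction term
X = sm.add_constant(adopters[["trust_in_ai", "privacy_concern", "trust_x_privacy", "imr"]])
print(sm.OLS(adopters["continuance"], X).fit().summary())
```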

The Effect of Motivated Consumer Innovativeness on Perceived Value and Intention to Use for Senior Customers at AI Food Service Store

  • LEE, JeungSun; KWAK, Min-Kyu; CHA, Seong-Soo
    • Journal of Distribution Science / v.19 no.9 / pp.91-100 / 2021
  • Purpose: This study investigates senior customers' intention to use artificial intelligence (AI) food service stores, which are becoming a trend in the service industry. Research design, data and methodology: The study used the extended technology acceptance model (TAM) and motivated consumer innovativeness (MCI) variables proven by existing researchers. In addition to the effect of motivated consumer innovativeness on customer value, we investigated the effect of customer value on trust and use intention. For the study, 520 questionnaires were distributed online by an expert survey agency, and the data were verified for validity and reliability. Results: Hypothesis testing verified that functionally motivated consumer innovativeness (fMCI), hedonically motivated consumer innovativeness (hMCI), and socially motivated consumer innovativeness (sMCI) all had positive effects on usefulness and enjoyment. Furthermore, usefulness had a statistically significant positive effect on trust, but perceived enjoyment did not; trust was found to positively affect intention to use. Conclusions: We compared the moderating effects of seniors' gender and age (split at 60) between groups. Although age had no moderating effect, the effect of usefulness on trust was stronger for the male group than for the female group.
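
The gender moderation finding (usefulness → trust stronger for males) corresponds to an interaction effect. The sketch below illustrates such a moderation test on simulated data; the variable names are hypothetical, not the paper's measures.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for the 520-respondent senior-customer survey (names are hypothetical).
rng = np.random.default_rng(0)
n = 520
df = pd.DataFrame({
    "gender": rng.choice(["female", "male"], size=n),
    "usefulness": rng.normal(size=n),
    "enjoyment": rng.normal(size=n),
})
male = (df["gender"] == "male").astype(float)
df["trust"] = (0.3 + 0.3 * male) * df["usefulness"] + rng.normal(scale=0.8, size=n)  # stronger path for males

# Moderation check: the usefulness-by-gender interaction captures the between-group
# difference in the usefulness -> trust path (the paper compared male and female groups).
model = smf.ols("trust ~ usefulness * C(gender) + enjoyment", data=df).fit()
print(model.summary())  # a significant usefulness:C(gender)[T.male] term indicates moderation
```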

An Exploratory Study on the Trustworthiness Analysis of Generative AI (생성형 AI의 신뢰도에 대한 탐색적 연구)

  • Soyon Kim; Ji Yeon Cho; Bong Gyou Lee
    • Journal of Internet Computing and Services / v.25 no.1 / pp.79-90 / 2024
  • This study focused on user trust in ChatGPT, a generative AI technology, and explored the factors that affect usage status and intention to continue using it, and whether the influence of trust varies depending on the purpose of use. A survey was conducted targeting people in their 20s and 30s, who use ChatGPT the most, and statistical analysis was performed with IBM SPSS 27 and SmartPLS 4.0. A structural equation model was formulated on the foundation of Bhattacherjee's Expectation-Confirmation Model (ECM), employing path analysis and Multi-Group Analysis (MGA) for hypothesis validation. The main findings are as follows. First, ChatGPT is mainly used for specific needs or objectives rather than as a daily tool; the majority of users are cognizant of its hallucination effects, but this did not hinder its use. Second, hypothesis testing indicated that the independent variables of expectation-confirmation, perceived usefulness, and user satisfaction all exert a positive influence on the dependent variable, continuance intention. Third, the influence of trust varied depending on the user's purpose: trust was significant when ChatGPT was used for information retrieval but not for creative purposes. These findings can help address trustworthiness problems as generative AI is introduced into society and companies, and support establishing policies and deriving improvement measures for its successful deployment.
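
One way to picture the multi-group result (trust significant for information retrieval but not for creative use) is to estimate the same continuance equation separately per purpose group. The sketch below does this on simulated data with hypothetical variable names; the authors' SmartPLS MGA is not reproduced here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for an ECM-style survey with a usage-purpose grouping variable (names are hypothetical).
rng = np.random.default_rng(0)
n = 400
purpose = rng.choice(["information_retrieval", "creative"], size=n)
confirmation = rng.normal(size=n)
usefulness = rng.normal(size=n)
satisfaction = rng.normal(size=n)
trust = rng.normal(size=n)
trust_effect = np.where(purpose == "information_retrieval", 0.5, 0.0)  # trust matters only for retrieval
continuance = 0.4 * satisfaction + 0.3 * usefulness + trust_effect * trust + rng.normal(size=n)
df = pd.DataFrame({
    "purpose": purpose, "confirmation": confirmation, "usefulness": usefulness,
    "satisfaction": satisfaction, "trust": trust, "continuance": continuance,
})

# Estimate the same continuance-intention equation within each purpose group,
# mimicking the idea behind a multi-group analysis.
for name, group in df.groupby("purpose"):
    fit = smf.ols("continuance ~ confirmation + usefulness + satisfaction + trust", data=group).fit()
    print(f"{name}: trust b = {fit.params['trust']:.2f}, p = {fit.pvalues['trust']:.3f}")
```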

A Comparative Study of Potential Job Candidates' Perceptions of an AI Recruiter and a Human Recruiter (인공지능 인사담당자와 인간 인사담당자에 대한 잠재적 입사지원자들의 인식 비교 연구)

  • Min, Jihyun; Kim, Sinae; Park, Yonguk; Sohn, Young Woo
    • Journal of the Korea Convergence Society / v.9 no.5 / pp.191-202 / 2018
  • Artificial intelligence (AI) is already being utilized in certain personnel selection processes in organizations and may eventually make even final selection decisions. The present study investigated potential job candidates' perceptions of an AI recruiter by comparing selection procedures carried out by an AI recruiter with those carried out by a human recruiter. For this study, college students in South Korea were recruited. Each participant was shown one of two recruitment scenarios (human recruiter vs. AI recruiter; between-subjects design), followed by questionnaires measuring satisfaction with the selection procedures, procedural justice, trust in the recruiter, and belief in a just world. Results show that potential job candidates were more satisfied with the selection procedures used by the AI recruiter than with those used by the human recruiter, and perceived them as fairer. In addition, potential job candidates' trust in the AI recruiter was significantly higher than their trust in the human recruiter. The study also explored whether these perceptions were contingent upon participants' belief in a just world, and it suggests a direction for future research.
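
The between-subjects comparison of trust in the AI versus human recruiter is the kind of contrast typically tested with an independent-samples t-test. The sketch below uses simulated ratings and hypothetical column names, not the study's data.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Simulated stand-in for the between-subjects experiment (column names are hypothetical).
rng = np.random.default_rng(0)
n_per_group = 60
df = pd.DataFrame({
    "condition": ["ai_recruiter"] * n_per_group + ["human_recruiter"] * n_per_group,
    "trust": np.concatenate([rng.normal(5.2, 1.0, n_per_group),   # 1-7 trust ratings
                             rng.normal(4.6, 1.0, n_per_group)]),
})

# Independent-samples (Welch's) t-test comparing trust across the two scenario groups.
ai = df.loc[df["condition"] == "ai_recruiter", "trust"]
human = df.loc[df["condition"] == "human_recruiter", "trust"]
t, p = stats.ttest_ind(ai, human, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}, mean difference = {ai.mean() - human.mean():.2f}")
```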

Evaluating the Current State of ChatGPT and Its Disruptive Potential: An Empirical Study of Korean Users

  • Jiwoong Choi; Jinsoo Park; Jihae Suh
    • Asia Pacific Journal of Information Systems / v.33 no.4 / pp.1058-1092 / 2023
  • This study investigates the perception and adoption of ChatGPT (a large language model (LLM)-based chatbot created by OpenAI) among Korean users and assesses its potential as the next disruptive innovation. Drawing on previous literature, the study proposes perceived intelligence and perceived anthropomorphism as key factors differentiating ChatGPT from earlier AI-based chatbots. Four individual motives (perceived usefulness, ease of use, enjoyment, and trust) and two societal motives (social influence and AI anxiety) were identified as antecedents of ChatGPT acceptance. A survey was conducted within two Korean online communities related to artificial intelligence; the findings confirm that ChatGPT is being used for both utilitarian and hedonic purposes, and that perceived usefulness and enjoyment positively impact the behavioral intention to adopt the chatbot. However, contrary to prior expectations, perceived ease of use did not exert a significant influence on behavioral intention. Trust was likewise not found to be a significant influence on behavioral intention, and while social influence played a substantial role in adoption intention and perceived usefulness, AI anxiety did not show a significant effect. The study confirmed that perceived intelligence and perceived anthropomorphism shape the individual factors that influence behavioral intention to adopt, and it highlights the need for future research to deconstruct the factors that make ChatGPT "enjoyable" and "easy to use" and to better understand its potential as a disruptive technology. Service developers and LLM providers are advised to design user-centric applications, focus on user-friendliness, acknowledge that building trust takes time, and recognize the role of social influence in adoption.

The Effect of AI Chatbot Service Experience and Relationship Quality on Continuous Use Intention and Recommendation Intention (AI챗봇 서비스 사용경험이 관계품질과 행동의도에 미치는 영향)

  • Choi, Sang Mook; Choi, Do Young
    • Journal of Service Research and Studies / v.13 no.3 / pp.82-104 / 2023
  • This study analyzes the effect of users' experience with AI chatbot services on relationship quality and behavioral intention. A survey was conducted among users who had experienced AI chatbot services, and the research hypotheses were tested by analyzing 299 valid responses. The analysis confirmed that satisfaction and trust, the relationship quality dimensions of AI chatbot service, were formed in users through cognitive, emotional, and relational experiences. It also confirmed that satisfaction and trust each have a positive effect on the intention to continue using and to recommend AI chatbot services, which correspond to consumers' behavioral intentions. Moreover, while relationship quality was significant in all paths to behavioral intention, the path coefficients from satisfaction to continuous use intention and recommendation intention were significantly higher than those from trust. This study provides a theoretical foundation showing that relationship quality, which affects behavioral intention, also applies to AI chatbot services in the online environment, and it is significant in suggesting that relationship quality is an important mediating factor in establishing long-term relationships with consumers.