User Factors and Trust in ChatGPT: Investigating the Relationship between Demographic Variables, Experience with AI Systems, and Trust in ChatGPT

User Characteristics and Trust in ChatGPT: The Influence of Demographic Variables and AI Experience

  • Received : 2023.12.01
  • Accepted : 2023.12.15
  • Published : 2023.12.30

Abstract

This study explores the relationship between various user factors and the level of trust in ChatGPT, a sophisticated language model exhibiting human-like capabilities. Specifically, we considered demographic characteristics such as age, education, gender, and major, along with factors related to previous AI experience, including duration, frequency, proficiency, perception, and familiarity. Through a survey of 140 participants, comprising 71 females and 69 males, we collected and analyzed data to examine how these user factors relate to trust in ChatGPT. Both descriptive and inferential statistical methods, including multiple linear regression models, were employed in our analysis. Our findings reveal significant relationships between trust in ChatGPT and user factors such as gender, the perception of prior AI interactions, and self-evaluated proficiency. This research not only enhances our understanding of trust in artificial intelligence but also offers valuable insights for AI developers and practitioners in the field.


I. Introduction

Artificial intelligence (AI), particularly chatbots and language models, has made significant inroads into various sectors. Among these technologies, ChatGPT-3.5 by OpenAI has carved a niche for itself, amassing over a million users shortly after its release [1]. Yet, despite its capabilities, concerns have been raised about its reliability and its potential to disseminate misinformation.

The growing prevalence of AI like ChatGPT-3.5 underscores the critical importance of understanding the trust users place in such systems. AI systems, such as ChatGPT, are purposefully designed to enhance human capabilities and workflows. The level of trust that users have in these collaborative endeavors directly influences their successful integration across various tasks and industries. To elaborate further, trust determines AI's societal impact, acceptance, and the effectiveness of human-machine interactions [2].

Prior research also supports the notion that trust is the cornerstone of harmonious partnerships between humans and AI, and has connected trust in AI to a range of factors, including system attributes like reliability and transparency as well as user-specific factors such as demographic background and prior AI experience [2-4].

Unlike many previous studies that have primarily focused on limited user factors, such as age or education, our research examines a broader range of user factors. We consider demographic characteristics like age, education, gender, and major, as well as more nuanced aspects of previous AI experience, including duration, frequency, proficiency, perception, and familiarity. This comprehensive approach allows us to provide a more holistic understanding of the factors influencing trust in ChatGPT. In addition, we employ both descriptive and inferential statistical methods, including multiple linear regression models. This methodological diversity enables us not only to describe relationships but also to assess their statistical significance. As a result, our findings provide a more robust and nuanced understanding of how user factors relate to trust in ChatGPT. Specifically, this research aims to answer the following two questions:

1) How do demographic factors, including age, gender, and education, influence the trust users place in the system?

2) How do a user's previous encounters with AI, considering aspects like duration, frequency, and proficiency, shape their trust in ChatGPT?

By delving into these aspects, we aim not only to enhance the theoretical comprehension of trust but also to provide valuable practical insights for AI developers and practitioners. Through the identification of significant relationships between user factors and trust in ChatGPT, our research seeks to offer actionable guidance for enhancing AI system design, improving user experience, and fostering user trust in real-world applications, thereby maximizing AI's utility.

II. Theoretical Background

2.1 Importance of Trust in AI Systems

Trust, a multifaceted concept, holds varying meanings across diverse contexts [5]. Within AI, it pertains to a user's belief in a system's reliability, competence, and safety, centering on elements like predictability, dependability, and transparency [6]. It is pivotal to differentiate trust in the AI system from trust in its developers or affiliated organizations; the focus is on user perceptions of the system itself [7].

The role of trust is paramount for AI adoption and persistent usage [8]. Positive trust can bolster user interaction with AI, whereas its absence may deter its use, despite potential benefits [9]. This becomes even more pronounced for AI platforms such as ChatGPT, which rely on advanced natural language processing. Here, trust is tethered to system understanding, appropriate response generation, and user privacy assurance [10,11]. Especially in decision-making scenarios, trust in AI's recommendations being accurate and unbiased is vital [12].

The significance of cultivating trust lies in three domains: it facilitates AI integration into daily life, ensuring maximum utility [13]; promotes user satisfaction, fostering continued use and positive referrals [14]; and it can mitigate concerns about AI-associated risks and ethical dilemmas [13].

In essence, trust is indispensable for AI systems like ChatGPT. Grasping how various user factors impact trust is fundamental for crafting user-centric AI systems, leading to elevated trust and technology adoption [15].

2.2 Impact of Demographic Factors and Prior AI Experience on Trust in AI Systems

Trust in AI systems is multifaceted and shaped by various individual backgrounds and characteristics. The rationale for examining demographic disparities in trust in technologies like ChatGPT can be summarized as follows: acceptance of ChatGPT is significantly influenced by an individual's level of digital literacy, as indicated by the findings of Kim Hyo-Jung [16]. Furthermore, numerous prior studies underscore the substantial roles that age, gender, and educational background play in shaping this digital literacy [17-20,22].

Age is a significant determinant, with younger individuals often displaying a more profound trust in AI compared to older age groups [19]. This difference can be traced back to older adults' limited technological exposure, which hampers their ability to embrace and understand novel technologies [20]. However, as older adults become more familiar with AI's potential, their trust may see an uptick [21].

Education further nuances trust dynamics. Those with higher educational backgrounds tend to possess a deeper understanding of AI, which can foster increased trust [18,22-23]. Importantly, the specific domain of AI application plays a role in moderating this relationship, as Kizilcec [24] highlighted.

Gender disparities further complicate the landscape. Men have consistently demonstrated higher trust levels in AI systems compared to women [18]. Such discrepancies could be rooted in differences in technical knowledge or varying degrees of AI familiarity. Furthermore, women's pronounced concerns related to security and privacy provide additional depth to this gendered trust divergence [20].

The interface between users and AI is not shaped by demographic variables alone; prior experiences play a critical role in molding trust perceptions. A user's history with AI, encompassing aspects like familiarity, frequency of use, and emotional associations, can strengthen or undermine their trust in these systems [7,25].

Familiarity emerges as a salient factor. As users accrue experience with AI systems, their burgeoning familiarity typically correlates with enhanced trust [6,26]. This deepened understanding enables users to recognize a system's potential and its limitations, bolstering trust [7]. Emotional residues from past AI interactions also inform current trust levels. Positive prior interactions foster trust, whereas negative experiences can create barriers [27].

Furthermore, prior AI interactions set the stage for user expectations. When AI systems align with or surpass these expectations, trust flourishes [28,29]. On the other hand, falling short can erode trust and deter subsequent AI engagements [30]. Task-specific experiences further shape trust perceptions. Mastery in one AI domain can heighten trust when faced with analogous tasks, owing to clearer insights into system capabilities [30].

Lastly, the frequency of AI interactions plays a part. Regular engagements with AI foster a more profound understanding, comfort, and, consequently, trust [6,13]. This frequent exposure can also dispel concerns about the enigmatic "black box" nature of some AI systems, promoting a more robust trust foundation [31].

While the influence of demographic factors and previous experiences on user trust in AI systems is evident, there exists a research gap in discerning and measuring these variables' specific effects in trust formation concerning emergent AI technologies. For instance, the burgeoning field of Generative AI systems, owing to its nascent stage, has seen limited exploration regarding how demographic and experiential factors shape trust. This highlights a pertinent area for future research endeavors.

III. Methods and Study Design

3.1 Research Design

The aim of this research is to examine the relationship between user factors, including demographic information and past AI experience, and users' trust in ChatGPT. To achieve this, a quantitative research design was employed. The methodology involves descriptive analysis using box plots to visualize the distributions of the variables of interest, and Spearman correlation tests to quantify relationships between predictors and the primary outcome. To further investigate the research questions, inferential statistics such as t-tests and ANOVA were employed to compare means across groups. Moreover, multiple regression analyses were conducted to assess the impact of the independent variables on users' trust in ChatGPT.

3.2 Sample

To ensure the robustness and representativeness of our sample in investigating the relationship between user factors and trust in ChatGPT, we began by establishing specific criteria for participant inclusion. Prospective participants were required to be at least 18 years old and possess prior experience with AI systems. This criterion was essential to ensure that our participants had a foundational understanding of AI technologies, aligning with our research objectives.

Our recruitment strategy involved engaging participants through a survey-oriented platform known for its diverse user demographics. This approach aimed to encompass a wide spectrum of users with varying demographic backgrounds and levels of AI experience. To ensure gender balance within our sample, we recruited 69 males and 71 females. Furthermore, our recruitment deliberately included participants with a broad range of educational achievements, spanning high school diplomas, bachelor's degrees, master's degrees, and advanced academic qualifications. This diversity allowed us to scrutinize the potential influence of educational attainment on trust in ChatGPT. A power analysis was conducted using G*Power version 3.1.9.7 to determine the minimum sample size required to achieve 80% power for detecting a medium effect size at a significance level of α = .05.
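As a minimal sketch, an equivalent calculation can be reproduced in R (the software used for our other analyses) with the pwr package; the medium effect size f² = 0.15 (Cohen's convention) and the count of nine predictors are illustrative assumptions rather than values taken from the G*Power output.

```r
# Sketch: approximate the G*Power calculation with the 'pwr' package.
# Assumptions (illustrative): f^2 = 0.15 (medium effect), u = 9 predictors.
library(pwr)

res <- pwr.f2.test(u = 9,            # numerator df: number of predictors
                   f2 = 0.15,        # medium effect size
                   sig.level = 0.05,
                   power = 0.80)     # target power

# Minimum sample size = denominator df + predictors + 1 (intercept)
ceiling(res$v) + 9 + 1
```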

3.3 Measures

Data on user factors, including demographic background, prior experience with AI systems, and trust level in ChatGPT, were collected using an online survey. For the primary outcome, trust was measured across three dimensions: trust in the system, trust in information, and relative trust compared with other information sources. 'Trust in system' was assessed with questions 6.1 to 6.10, following the "Checklist for Trust between People and Automation" [32]. 'Trust in information' was evaluated using questions 7.1 to 7.3, adapted from Thielsch and Hirschfeld's credibility questionnaire [33]. We also included three custom questions to assess participants' perception of the credibility of, and confidence in, information provided by ChatGPT. The final aspect of the primary outcome involved participants rating their trust in ChatGPT relative to human experts, online forums, and traditional search engines ('Relative Trust').

<Table 1> Variables


The survey also included questions about participants' ChatGPT experience, which were used as control variables to account for their potential influence on the relationship between the independent and dependent variables.

Construct validity was assessed by examining correlations among the three trust dimensions. We found strong correlations: 0.83 (p < 0.001) between trust in the system and trust in the provided information, 0.78 (p < 0.001) between trust in the system and relative trust, and 0.78 (p < 0.001) between trust in the provided information and relative trust. These results indicate satisfactory construct validity for the survey questions.
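These pairwise correlations can be computed directly in base R; the data frame survey and the column names trust_system, trust_info, and trust_relative below are hypothetical placeholders for the participant-level trust scores.

```r
# Sketch: pairwise correlations among the three trust dimension scores.
# 'survey' and its columns are hypothetical placeholders.
cor.test(survey$trust_system, survey$trust_info)      # reported r = 0.83
cor.test(survey$trust_system, survey$trust_relative)  # reported r = 0.78
cor.test(survey$trust_info,   survey$trust_relative)  # reported r = 0.78
```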

3.4 Data Analysis

To compute a scalar score variable for each of the three trust outcomes, we performed Principal Component Analysis (PCA) on the questions associated with each outcome. The first principal component score, which explained most of the variability in the data, was adopted as the primary trust score variable for subsequent analysis.
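The sketch below illustrates this step in R for the 'trust in system' dimension; the data frame survey and the item columns q6_1 through q6_10 are hypothetical placeholders for questions 6.1 to 6.10.

```r
# Sketch: derive a scalar trust score as the first principal component
# of the items belonging to one trust dimension.
items <- survey[, paste0("q6_", 1:10)]

pca <- prcomp(items, center = TRUE, scale. = TRUE)
summary(pca)  # proportion of variance explained by each component

# First principal component score, used as the 'trust in system' score
survey$trust_system <- pca$x[, 1]
```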

We calculated descriptive statistics for the variables of interest, including trust in ChatGPT. This included measures of central tendency (mean, median) and dispersion (standard deviation, range). Box plots were used to visualize the characteristics and distribution of these variables, providing a comprehensive overview of the sample's demographics, AI system experiences, and trust levels in ChatGPT.
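A minimal R sketch of these descriptive summaries, again using hypothetical variable names, follows.

```r
# Sketch: descriptive statistics and a box plot for one trust outcome.
summary(survey$trust_system)   # min, quartiles, median, mean, max
sd(survey$trust_system)        # standard deviation
range(survey$trust_system)     # range

boxplot(trust_system ~ gender, data = survey,
        xlab = "Gender", ylab = "Trust in System (PC1 score)")
```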

<Table 2> Descriptive statistics of sample


To investigate differences in trust in ChatGPT across demographic groups and levels of AI system experience, we conducted inferential statistical tests. Specifically, we used two-sample independent t-tests and one-way ANOVA to compare trust score means among different demographic and prior AI experience groups. We employed t-tests for factors with two levels and ANOVA for factors with more than two levels.
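In R, these comparisons take the following form; the grouping variables shown (gender with two levels, education with more than two) and all variable names are hypothetical placeholders.

```r
# Sketch: comparing mean trust scores across groups.
# Two-level factor: two-sample independent t-test.
t.test(trust_system ~ gender, data = survey)

# Factor with more than two levels: one-way ANOVA.
summary(aov(trust_system ~ education, data = survey))
```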

Multiple linear regression analyses were performed to simultaneously examine the impact of demographic factors (age, gender, education level) on trust in ChatGPT. Each of the three trust outcomes (Trust in System, Trust in Information, Relative Trust) was regressed on the demographic factors, with ChatGPT user experience variables considered as potential additional independent variables to control for their effects. For each model, a stepwise regression approach was employed to select the ChatGPT user experience variables most strongly associated with the outcome and to achieve parsimony in modeling; all of the demographic factors were forced into each model. Similar multiple linear regression analyses were conducted to simultaneously examine the impact of AI experience factors (duration of prior AI usage, frequency of prior AI usage, proficiency with prior AI, perception of prior AI experience, familiarity with AI knowledge) on each of the trust outcomes. We used R version 4.2.2 for these analyses; all tests were two-sided with a significance level of 0.05.
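The modeling strategy can be sketched in R as follows; the ChatGPT experience variables (gpt_version, gpt_duration, gpt_frequency) are hypothetical placeholders, and the lower scope passed to step() keeps the demographic terms in every candidate model while the stepwise search selects among the experience controls.

```r
# Sketch of the regression strategy: demographic factors forced in,
# ChatGPT-experience controls selected by stepwise search.
base <- lm(trust_system ~ age + gender + education, data = survey)
full <- lm(trust_system ~ age + gender + education +
             gpt_version + gpt_duration + gpt_frequency,
           data = survey)

# 'lower' keeps the demographic terms in every candidate model.
final <- step(base,
              scope = list(lower = formula(base), upper = formula(full)),
              direction = "both")
summary(final)  # two-sided tests at the 0.05 level
```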

<Table 3> Multiple Linear Regression on Trust and Demographic Variables


Significance codes: '*' p < 0.05

1) estimated regression coefficient (Beta) for a predictor variable

2) Standard error of the regression coefficient estimate

3) p-value for testing the null hypothesis that the regression coefficient is equal to zero

IV. Results

Box plots were produced to illustrate the distribution of trust scores among ChatGPT users, categorized by several demographic and experience-related dimensions. These dimensions include gender, frequency of AI usage, proficiency of AI usage, perception of AI usage, and familiarity with AI knowledge.

The primary objective of these box plots is to visually represent the variations in trust levels among different demographic groups and levels of experience with AI systems. To assess the statistical significance of these differences, independent t-tests were employed to compare mean trust scores between gender groups. Correlation tests were used to assess the relationship between trust in ChatGPT and the various factors related to prior AI experience, including frequency of AI usage, proficiency of AI usage, perception of AI usage, and familiarity with AI knowledge.

4.1 Demographic Predictors

The results indicated that, on average, male participants had higher mean trust scores across all three sub-categories than female participants. In particular, the t-test revealed that the gender difference in relative trust, with males scoring higher than females, was statistically significant (p = 0.03).


<Figure 1> Boxplot of Gender vs. Trust in ChatGPT

4.2 Prior AI Experience Predictors

While the majority of participants reported a moderate frequency of AI usage, participants who indicated somewhat frequent AI usage had the highest mean trust scores across all dimensions, surpassing those who reported using AI almost every day. Moreover, the Spearman correlation analysis revealed that the variations in trust scores across the frequency groups were statistically significant, particularly for system trust and relative trust. Information trust was marginally significant (p < 0.1).

Higher proficiency in AI usage was positively associated with increased mean trust scores. The box plots consistently showed that higher levels of AI proficiency were linked to greater trust in ChatGPT across all trust categories. Furthermore, correlation analysis revealed that the relationship between AI proficiency and the level of trust in ChatGPT was significant (p < 0.01).


<Figure 2> Boxplot of AI experience vs. Trust in ChatGPT

Participants who reported more positive experiences with AI tended to have higher trust scores across all three dimensions. Box plots further supported this positive linear trend, showing that more positive AI experiences corresponded with higher levels of trust in ChatGPT. Moreover, the Spearman correlation analysis provided strong statistical evidence of a significant relationship between AI experiences and trust scores (p < 0.01).

Interestingly, a distinctive trend appeared when participants reported being "very familiar" with AI concepts. These participants showed slightly lower trust scores across all three dimensions than participants who reported one level lower familiarity. The box plots confirmed this pattern, showing visible associations between levels of familiarity with AI and the corresponding trust scores. In addition, the Spearman correlation analysis showed that these differences in trust scores were statistically significant (p < 0.01).

4.3 Regression of Trust on Demographic Predictors

Among the demographic independent variables in the models, we found that gender was significantly associated with the trust outcomes. To illustrate, males showed 0.89 higher expected system trust scores (p = 0.038) and 0.59 higher relative trust scores (p = 0.036) than females. This suggests that males tend to have higher levels of system trust and relative trust than females.

<Table 4> Multiple Linear Regression on Trust and AI Experience Variables


Significance codes: '*' p < 0.05

1) estimated regression coefficient (Beta) for a predictor variable

2) Standard error of the regression coefficient estimate

3) p-value for testing the null hypothesis that the regression coefficient is equal to zero

More specifically, being male and having a higher frequency of ChatGPT usage were linked with higher levels of system trust, while being male and exhibiting higher proficiency in using ChatGPT were associated with higher levels of relative trust.

4.4 Regression of Trust on AI Experience Predictors

Among the AI experience predictors, proficiency and perception of prior AI experience showed statistically significant associations with trust. A one-unit increase in proficiency was associated with 0.63 higher system trust (p = 0.009), 0.49 higher information trust (p = 0.027), and 0.38 higher relative trust (p = 0.020). A positive perception of past AI experience was also positively associated with system trust (p = 0.003) and information trust (p = 0.046).

In terms of control variables, using ChatGPT version 3.5 was observed to be significant in the models for information trust and relative trust. Furthermore, in the relative trust model, the 'less than 6 months of ChatGPT usage' variable was identified as a notable confounding factor.

In conclusion, the findings indicate that AI experience factors, such as higher proficiency and a more positive perception of prior AI services, are positively associated with both system and information trust. Furthermore, higher proficiency with previous AI systems also had a significant effect on relative trust.

V. Discussion and Conclusion

5.1 Discussion

The purpose of this study was to investigate the relationship between user variables, including demographic attributes and AI experience, and corresponding levels of trust. We employed descriptive and inferential statistics to test whether differences between groups were significant, and then applied multiple linear regression models to examine the extent to which these user factors contribute to changes in trust levels.

After a detailed examination of the demographic variables, it was found that the level of trust in ChatGPT's system is influenced by gender and the frequency of ChatGPT usage, while other demographic variables such as age, level of education, and major did not show substantial correlations with trust factors in this specific study.

Closer scrutiny of gender as a variable made it evident that men who used ChatGPT more often exhibited higher levels of system and relative trust than women. This suggests potential gender-specific disparities in trust in AI systems like ChatGPT, aligning with previous research. Initial qualitative studies into these gender gaps revealed a higher level of caution among female users towards technology, stemming from privacy-related concerns (Shao et al., 2020). These results underscore the need for AI practitioners and developers to devise strategies aimed at addressing these trust differentials across genders. It is also imperative that future research continues to delve deeper into this area to gain additional understanding of how to reduce these disparities in trust.

Furthermore, it is important to highlight that the other demographic variables included in the multiple linear regression analysis did not show significant results, and the predictive power of the model was limited when only demographic factors were taken into account. Although the age variable did not reach significance at the 0.05 level and could not be confirmed in this study, a trend was evident: younger users appeared more likely to express higher trust in ChatGPT. This observation could potentially be confirmed in future studies with larger sample sizes.

In terms of independent variables related to AI experience, the results suggest that factors including proficiency in previous AI usage, perception of that experience, and specific versions of ChatGPT are associated with trust levels across various dimensions. Higher proficiency in prior AI services, coupled with a positive perception of AI in general, tend to align with greater levels of trust in the ChatGPT system and the information it produces.

Notably, proficiency in using AI systems and individuals' perceptions of their AI experiences were significant contributing factors. This aligns with past research, which proposed that an individual's confidence in their AI proficiency, along with their perceived past AI experiences, help shape their trust in AI systems like ChatGPT. Previous studies about AI system proficiency suggested that understanding the system's strengths and limitations can cultivate higher levels of trust [7]. While one could assume that user proficiency might increase with prolonged use, the findings of this study urge AI practitioners to ensure sufficient user education about specific AI applications to boost their system's trustworthiness.

Moreover, the version of ChatGPT used was recognized as a confounding factor in the relationship between AI experience and trust in ChatGPT. Specifically, ChatGPT-3.5 showed a positive correlation with increased levels of information trust, despite ChatGPT-4 being the latest and most sophisticated version. This outcome might be attributed to the majority of users currently having access to ChatGPT-3.5, while version 4 is only accessible to monthly subscribers. Thus, these findings could shift as more advanced versions of ChatGPT become widely available to the general public.

Nonetheless, it is important to note that the duration and frequency of past AI usage did not substantially correlate with trust levels. The study suggests that variables related to self-assuredness and perception, rather than the frequency or length of AI usage, play a more decisive role in predicting trust levels. This may imply that users judge a system's trustworthiness regardless of the duration or frequency of their past AI usage; what truly matters is their confidence in their AI abilities and how their past AI experiences influence their trust in the system. Based on this insight, AI practitioners should focus on strategies that improve user confidence. This can be achieved through the provision of AI-specific education, where users gain a comprehensive understanding of the capabilities and limitations of the AI service they are using. This recommendation aligns with prior research findings that emphasize the need for artificial intelligence education in schools to cultivate AI competency [34]. Our study extends this notion and suggests that providers of AI services should offer service-specific education to maximize usability and utility.

Overall, this discovery implies that relying solely on demographic factors does not effectively predict user trust in this specific AI language model, ChatGPT. Even though earlier research suggested demographic variables as influencing user trust, the increased familiarity with AI services due to their widespread use in recent years might have led to less significant disparities among these demographic variables. Since factors related to prior AI experience held greater explanatory power in predicting trust in the system compared to demographic factors in this study, it appears more rational for AI practitioners to focus on understanding a user's previous AI experiences as predictors of their trust levels, rather than relying solely on demographic information.

5.2 Limitations and Conclusion

This study offers valuable insights, but several inherent limitations warrant consideration. Firstly, reliance on self-reported surveys may introduce biases, especially regarding AI experience. Despite providing general guidance, variations in participants' standards for reporting AI proficiency and familiarity could lead to potential misreporting.

Furthermore, the presence of multicollinearity within the model may have compromised the reliability of coefficient estimates. While Principal Component Analysis (PCA) aimed to alleviate multicollinearity effects, the study's small sample size and numerous independent variables made it challenging to ascertain which variables had stronger associations with trust levels.

Additionally, the study's focus on Korean users may introduce cultural and language biases. Korea's high rates of college entrance and prevalence of higher education may limit the diversity of participants' educational backgrounds, potentially biasing findings and limiting applicability in societies with differing educational systems.

Trust in AI systems can significantly vary across cultures and languages. It is important to note that AI effectiveness is language-dependent, and AI systems tailored for Korean may not perform as effectively in other languages. This study's findings may not generalize to diverse cultural and linguistic contexts, necessitating caution in interpretation. Further investigation is required to assess the impact of these variables on user trust in AI systems.

Another critical consideration when generalizing study results pertains to the specific AI model examined - the chat-based GPT model. This text-focused AI model fundamentally differs from other AI forms like image recognition, decision-making, or robotics AI, with distinct user interaction paradigms and trust dynamics. While trust factors identified in this study apply to chat-based GPT AI, their relevance and impact may differ for other AI forms.

Therefore, while this study provides valuable insights, it is prudent to exercise caution when applying these findings to other AI types. Additional research is needed to explore trust factors across various AI models and contexts, contributing to a comprehensive understanding of trust in AI.

Nonetheless, these study findings offer significant value to AI and human-computer interaction researchers and practitioners.

The findings revealed that gender and the frequency of ChatGPT usage significantly influenced trust levels in the system, whereas other demographic variables like age, education level, and major showed no substantial correlations with trust. Specifically, men who used ChatGPT more frequently exhibited higher levels of trust, highlighting potential gender-specific disparities in trust in AI systems.

Proficiency in prior AI usage, positive perceptions of AI, and the specific version of ChatGPT used were associated with trust levels. Greater proficiency and positive perceptions of AI experiences correlated with higher trust in ChatGPT. The version of ChatGPT used also played a role, with ChatGPT-3.5 showing a positive correlation with information trust, despite ChatGPT-4 being the latest version. This was attributed to the wider availability of ChatGPT-3.5.

Surprisingly, the duration and frequency of past AI usage did not significantly correlate with trust levels. Instead, self-assuredness and perception played a more decisive role in predicting trust. Users tended to judge a system's trustworthiness based on their confidence in their AI abilities and past AI experiences. Therefore, AI practitioners should focus on strategies to enhance user confidence, such as providing education on how to efficiently use their AI services.

Understanding user factors shaping AI trust can inform the development of user-centric AI technologies, promote adoption, and guide trust-building strategies. Customizing AI systems for diverse demographics and launching educational initiatives addressing specific user concerns can enhance trust and enrich experiences. Moreover, these findings can inform the development of user personas in the user experience design process.

In conclusion, this study underscores the limitations of relying solely on demographic factors for predicting trust in AI language models like ChatGPT. As AI services have become more widespread and familiar, the significance of demographic differences in trust seems to have diminished. Instead, prior AI experience emerges as a more influential predictor of trust, though potential gender-specific disparities in trust in AI systems may still exist.

As a result, our research recommends that AI practitioners prioritize gaining insights into users' past AI experiences when assessing trust levels. Implementing AI-specific educational strategies to enhance user confidence in proficient AI use can significantly enhance the overall trustworthiness of AI systems. These strategies can also help address potential gender-specific disparities in trust in AI systems.

References

  1. Altman, S., Twitter, https://twitter.com/sama/status/1599668808285028353, 2022.12.15.
  2. Cramer, H., Evers, V., Ramlal, S., Van Someren, M., Rutledge, L., Stash, N., & Wielinga, B., "The effects of transparency on trust in and acceptance of a content-based art recommender," User Modeling and User-Adapted Interaction, Vol.18, No.5, 2008, pp.455-496. https://doi.org/10.1007/s11257-008-9051-3
  3. Yin, M., Wortman Vaughan, J., & Wallach, H., "Understanding the effect of accuracy on trust in machine learning models," In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 2019, pp.1-12.
  4. Yuan, S., & Lou, N. M. "Trust in artificial intelligence in customer service: Factors influencing its formation and development," International Journal of Information Management, Vol.49, 2019, pp.233-242.
  5. Rousseau, D. M., Sitkin, S. B., Burt, R. S., & Camerer, C., "Not so different after all: A cross-discipline view of trust," Academy of Management Review, Vol.23, No.3, 1998, pp.393-404.
  6. Lee, J. D., & See, K. A., "Trust in automation: Designing for appropriate reliance," Human Factors, Vol.46, No.1, 2004, pp.50-80.
  7. Hancock, P. A., Billings, D. R., Schaefer, K. E., Chen, J. Y., de Visser, E. J., & Parasuraman, R., "A meta-analysis of factors affecting trust in human-robot interaction," Human Factors, Vol.53, No.5, 2011, pp.517-527. https://doi.org/10.1177/0018720811417254
  8. Hoff, K. A., & Bashir, M., "Trust in automation: Integrating empirical evidence on factors that influence trust," Human Factors, Vol.57, No.3, 2015, pp.407-434. https://doi.org/10.1177/0018720814547570
  9. Dzindolet, M. T., Peterson, S. A., Pomranky, R. A., Pierce, L. G., & Beck, H. P., "The role of trust in automation reliance," International Journal of Human-Computer Studies, Vol.58, No.6, 2003, pp.697-718. https://doi.org/10.1016/S1071-5819(03)00038-7
  10. Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., & Agarwal, S., "Language models are few-shot learners," In Advances in Neural Information Processing Systems, Vol.33, 2020, pp.1877-1901.
  11. Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I., "Language models are unsupervised multitask learners," OpenAI Blog, Vol.1, No.8, 2019, p.9
  12. Luger, E., & Sellen, A., "Like having a really bad PA: The Gulf between user expectation and experience of conversational agents," In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 2016, pp. 5286-5297.
  13. Siau, K., & Wang, W., "Building trust in artificial intelligence, machine learning, and robotics," Cutter Business Technology Journal, Vol.31, No.2, 2018, pp.47-53.
  14. Folstad, A., & Brandtzaeg, P. B., "Chatbots and the new world of HCI," Interactions, Vol.24, No.4, 2017, pp.38-42. https://doi.org/10.1145/3085558
  15. Cave, S., Dignum, V., & Meyer, J., "The ethics of AI: Issues, developments, and recommendations," In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, 2018, pp. 1-7.
  16. Kim, Hyo-Jung, "A Study on the Effect of ChatGPT Characteristics on Intention to Use: Focusing on the Moderating Effect of Teachers' Digital Technology," Journal of the Korea Society of Digital Industry and Information Management, Vol.19, No.2, 2023, pp.135-145.
  17. Devadas Menon, K Shilpa, "Chatting with ChatGPT: Analyzing the factors influencing users' intention to Use the Open AI's ChatGPT using the UTAUT model," Heliyon, Vol.9, No.11, 2023, pp.127-143. https://doi.org/10.1016/j.heliyon.2023.e20962
  18. Kreps S, George J, Lushenko P, Rao A., "Exploring the artificial intelligence Trust paradox: Evidence from a survey experiment in the United States," PLOS ONE, Vol.18, No.7, 2023
  19. Gillath, O., Ai, T., Branicky, M. S., Keshmiri, S., Davison, R. B., & Spaulding, R., "Attachment and trust in artificial intelligence," Computers in Human Behavior, Vol.115, 2021, p.106607.
  20. Czaja, S. J., Charness, N., Fisk, A. D., Hertzog, C., Nair, S. N., Rogers, W. A., & Sharit, J., "Factors predicting the use of technology: Findings from the Center for Research and Education on Aging and Technology Enhancement (CREATE)," Psychology and Aging, Vol.21, No.2, 2006, pp.333-352. https://doi.org/10.1037/0882-7974.21.2.333
  21. Shandilya, E., & Fan, M., "Understanding Older Adults' Perceptions and Challenges in Using AI-enabled Everyday Technologies," arXiv, 2022.
  22. Gillespie, N., Lockey, S., Curtis, C., Pool, J., & Akbari, A. "Trust in Artificial Intelligence: A Global Study," The University of Queensland and KPMG Australia, 2023
  23. Hillesheim, A. J., Rusnock, C. F., Bindewald, J. M., & Miller, M. E., "Relationships between User Demographics and User Trust in an Autonomous Agent," Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Vol.61, No.1, 2017, pp.314-318.
  24. Kizilcec, R. F. "How much information?: Effects of transparency on trust in an algorithmic interface," In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 2016, pp. 2390-2395.
  25. Jackson, K. M., Sharp, W. H., & Shaw, T. H., "Modeling the Impacts of Positive Interaction Frequency on Subjective Trust in an Autonomous Agent: A Linear Mixed Model Approach," Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Vol.66, No.1, 2022, pp.798-801.
  26. Bach, T. A., Khan, A., Hallock, H., Beltrao, G., & Sousa, S. "A Systematic Literature Review of User Trust in AI-Enabled Systems: An HCI Perspective," International Journal of Human-Computer Interaction, 2022, pp.1-16.
  27. Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D., "User acceptance of information technology: Toward a unified view," MIS Quarterly, Vol.27, No.3, 2003, pp.425-478.
  28. Sheridan, T. B., "Individual differences in attributes of trust in automation: Measurement and application to system design," Frontiers in Psychology, Vol.10, 2019, p.1117.
  29. Muir, B. M., "Trust in automation: Part I. Theoretical issues in the study of trust and human intervention in automated systems," Ergonomics, Vol.37, No.11, 1994, pp.1905-1922. https://doi.org/10.1080/00140139408964957
  30. Parasuraman, R., Sheridan, T. B., & Wickens, C. D., "Situation awareness, mental workload, and trust in automation: Viable, empirically supported cognitive engineering constructs," Journal of Cognitive Engineering and Decision Making, Vol.2, No.2, 2008, pp.140-160. https://doi.org/10.1518/155534308X284417
  31. Castelfranchi, C., & Falcone, R., "Trust theory: A socio-cognitive and computational model," John Wiley & Sons, 2010, pp.147-190.
  32. Jian, J. Y., Bisantz, A. M., and Drury, C. G., "Foundations for an empirically determined scale of trust in automated systems," Int. J. Cogn. Ergon, Vol.4, 2000, pp.53-71. https://doi.org/10.1207/S15327566IJCE0401_04
  33. Thielsch, M. T., & Hirschfeld, G., "Facets of Website Content," Human-Computer Interaction, Vol.34, No.4, 2019, pp.279-327.
  34. Park, Sang-Woo, "A Study on Experiential Learning-Based Education for Cultivating Artificial Intelligence Competency," Journal of the Korea Society of Digital Industry and Information Management, Vol.19, No.1, 2023, pp.153-172.