References
- 김상우, 김광호, 장영혜 (2009). 의사의 전문성과 커뮤니케이션 능력이 환자가 지각하는 신뢰, 가치 및 만족에 미치는 영향 [The effects of physicians' expertise and communication skills on patient-perceived trust, value, and satisfaction]. 마케팅논집, 17(1), 115-140.
- 김시내, 손영우 (2020). 인공지능 기술의 수용성에 미치는 공정세상믿음의 효과 [The effect of belief in a just world on the acceptance of artificial intelligence technology]. 한국심리학회지: 일반, 39(4), 517-542. https://doi.org/10.22257/KJP.2020.12.39.4.517
- 류정원, 손권상, 윤혜선 (2023). EU GDPR 위반사례 토픽 분석 및 시사점 연구: 금융, 의료, 산업 및 상거래 부문을 중심으로 [A topic analysis of EU GDPR violation cases and its implications: Focusing on the finance, healthcare, industry, and commerce sectors]. 한국전자거래학회지, 28(3), 1-25.
- 문건두, 김경재 (2023). 소셜 네트워크 분석과 토픽 모델링을 활용한 설명 가능 인공지능 연구 동향 분석 [An analysis of explainable artificial intelligence research trends using social network analysis and topic modeling]. Journal of Information Technology Applications & Management, 30(1), 53-70.
- 유우새, 정창원 (2024). AI 리터러시가 인공지능 의존에 미치는 영향: 주관적 규범과 회복탄력성의 병렬다중매개효과를 중심으로 [The effect of AI literacy on AI dependence: Focusing on the parallel multiple mediation effects of subjective norms and resilience]. 커뮤니케이션디자인학연구, 86, 362-377.
- 이은지, 이종민, 성용준 (2019). 사용자 특성과 기기 장치에 따른 가상개인비서 만족도: 기능적, 정서적 만족도 중심으로 [Satisfaction with virtual personal assistants by user characteristics and device type: Focusing on functional and emotional satisfaction]. 한국심리학회지: 소비자-광고, 20(1), 31-54.
- Arkes, H. R., Shaffer, V. A., & Medow, M. A. (2007). Patients derogate physicians who use a computer-assisted diagnostic aid. Medical Decision Making, 27(2), 189-202. https://doi.org/10.1177/0272989X06297391
- Baird, A., & Maruping, L. M. (2021). The next generation of research on IS use: A theoretical framework of delegation to and from agentic IS artifacts. MIS Quarterly, 45(1), 315-341. https://doi.org/10.25300/MISQ/2021/15882
- Belanche, D., Casaló, L. V., & Flavián, C. (2019). Artificial intelligence in fintech: Understanding robo-advisors adoption among customers. Industrial Management & Data Systems, 119(7), 1411-1430. https://doi.org/10.1108/IMDS-08-2018-0368
- Berger, B., Adam, M., Ruhr, A., & Benlian, A. (2020). Watch me improve-algorithm aversion and demonstrating the ability to learn. Business & Information Systems Engineering, 63(1), 55-68.
- Bigman, Y. E., & Gray, K. (2018). People are averse to machines making moral decisions. Cognition, 181, 21-34. https://doi.org/10.1016/j.cognition.2018.08.003
- Bogert, E., Schecter, A., & Watson, R. T. (2021). Humans rely more on algorithms than social influence as a task becomes more difficult. Scientific Reports, 11(1), 8028.
- Bonnefon, J. F., & Rahwan, I. (2020). Machine thinking, fast and slow. Trends in Cognitive Sciences, 24(12), 1019-1027. https://doi.org/10.1016/j.tics.2020.09.007
- Bove, L. L. (2019). Empathy for service: Benefits, unintended consequences, and future research agenda. Journal of Services Marketing, 33(1), 31-43. https://doi.org/10.1108/JSM-10-2018-0289
- Burleigh, T. J., Schoenherr, J. R., & Lacroix, G. L. (2013). Does the uncanny valley exist? An empirical test of the relationship between eeriness and the human likeness of digitally created faces. Computers in Human Behavior, 29(3), 759-771. https://doi.org/10.1016/j.chb.2012.11.021
- Burton, J. W., Stein, M., & Jensen, T. B. (2019). A systematic review of algorithm aversion in augmented decision making. Journal of Behavioral Decision Making, 33(2), 220-239. https://doi.org/10.1002/bdm.2155
- Castelo, N. (2019). Blurring the line between human and machine: Marketing artificial intelligence [Doctoral dissertation, Columbia University]. ProQuest Dissertations & Theses.
- Castelo, N., Bos, M. W., & Lehmann, D. R. (2019). Task-dependent algorithm aversion. Journal of Marketing Research, 56(5), 809-825.
- Clerwall, C. (2014). Enter the robot journalist: Users' perceptions of automated content. Journalism Practice, 8(5), 519-531. https://doi.org/10.1080/17512786.2014.883116
- Diab, D. L., Pui, S. Y., Yankelevich, M., & Highhouse, S. (2011). Lay perceptions of selection decision aids in US and non-US samples. International Journal of Selection and Assessment, 19(2), 209-216. https://doi.org/10.1111/j.1468-2389.2011.00548.x
- Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1), 114-126. https://doi.org/10.1037/xge0000033
- Dietvorst, B. J., Simmons, J. P., & Massey, C. (2018). Overcoming algorithm aversion: People will use imperfect algorithms if they can (even slightly) modify them. Management Science, 64(3), 1155-1170. https://doi.org/10.1287/mnsc.2016.2643
- Dzindolet, M. T., Peterson, S. A., Pomranky, R. A., Pierce, L. G., & Beck, H. P. (2003). The role of trust in automation reliance. International Journal of Human-Computer Studies, 58(6), 697-718.
- Eastwood, J., Snook, B., & Luther, K. (2011). What people want from their professionals: Attitudes toward decision-making strategies. Journal of Behavioral Decision Making, 25(5), 458-468. https://doi.org/10.1002/bdm.741
- Edwards, C., Edwards, A., Spence, P. R., & Shelton, A. K. (2014). Is that a bot running the social media feed? Testing the differences in perceptions of communication quality for a human agent and a bot agent on Twitter. Computers in Human Behavior, 33, 372-376. https://doi.org/10.1016/j.chb.2013.08.013
- Gaube, S., Suresh, H., Raue, M., Merritt, A., Berkowitz, S. J., Lermer, E., Coughlin, J. F., Guttag, J. V., Colak, E., & Ghassemi, M. (2021). Do as AI say: Susceptibility in deployment of clinical decision-aids. Npj Digital Medicine, 4(1), 31.
- Garvey, A. M., Kim, T., & Duhachek, A. (2021). Bad news? Send an AI. Good news? Send a human. Journal of Marketing, 87(1), 10-25. https://doi.org/10.1177/00222429211066972
- Glejser, H., & Heyndels, B. (2001). Efficiency and inefficiency in the ranking in competitions: The case of the Queen Elisabeth Music Contest. Journal of Cultural Economics, 25(2), 109-129.
- Gray, H. M., Gray, K., & Wegner, D. M. (2007). Dimensions of mind perception. Science, 315(5812), 619-619. https://doi.org/10.1126/science.1134475
- Gray, K., & Wegner, D. M. (2012). Feeling robots and human zombies: Mind perception and the uncanny valley. Cognition, 125(1), 125-130. https://doi.org/10.1016/j.cognition.2012.06.007
- Hankins, E., Nettel, P. F., Martinescu, L., Grau, G., & Rahim, S. (2023). Government AI readiness index 2023 (S. Rahim, Ed.; pp. 1-53). Oxford Insights.
- Harris-Watson, A. M., Larson, L. E., Lauharatanahirun, N., DeChurch, L. A., & Contractor, N. S. (2023). Social perception in human-AI teams: Warmth and competence predict receptivity to AI teammates. Computers in Human Behavior, 145, 107765.
- Haslam, N. (2006). Dehumanization: An integrative review. Personality and Social Psychology Review, 10(3), 252-264. https://doi.org/10.1207/s15327957pspr1003_4
- Haslam, N. (2007). Humanising medical practice: The role of empathy. Medical Journal of Australia, 187(7), 381-382. https://doi.org/10.5694/j.1326-5377.2007.tb01305.x
- Haslam, N., Bain, P., Douge, L., Lee, M., & Bastian, B. (2005). More human than you: Attributing humanness to self and others. Journal of Personality and Social Psychology, 89(6), 937-950. https://doi.org/10.1037/0022-3514.89.6.937
- Hayes, A. F. (2017). Introduction to mediation, moderation, and conditional process analysis: A regression-based approach (2nd ed.). New York, NY: Guilford Press.
- Hong, J.-W., Cruz, I., & Williams, D. (2021). AI, you can drive my car: How we evaluate human drivers vs. self-driving cars. Computers in Human Behavior, 125, 106944.
- Hu, B., Mao, Y., & Kim, K. J. (2023). How social anxiety leads to problematic use of conversational AI: The roles of loneliness, rumination, and mind perception. Computers in Human Behavior, 145, 107760.
- Huang, L., Lu, Z., & Rajagopal, P. (2022). Numbers not lives: AI dehumanization undermines COVID-19 preventive intentions. Journal of the Association for Consumer Research, 7(1), 63-71. https://doi.org/10.1086/711839
- Jago, A. S. (2019). Algorithms and authenticity. Academy of Management Discoveries, 5(1), 38-56. https://doi.org/10.5465/amd.2017.0002
- Jo, M.-S. (2000). Controlling social-desirability bias via method factors of direct and indirect questioning in structural equation models. Psychology and Marketing, 17(2), 137-148. https://doi.org/10.1002/(SICI)1520-6793(200002)17:2<137::AID-MAR5>3.0.CO;2-V
- Jones, N. (2017). How machine learning could help to improve climate forecasts. Nature, 548(7668), 379-379. https://doi.org/10.1038/548379a
- Jung, J., Song, H., Kim, Y., Im, H., & Oh, S. (2017). Intrusion of software robots into journalism: The public's and journalists' perceptions of news written by algorithms and human journalists. Computers in Human Behavior, 71, 291-298. https://doi.org/10.1016/j.chb.2017.02.022
- Jussupow, E., Benbasat, I., & Heinzl, A. (2020). Why are we averse towards algorithms? A comprehensive literature review on algorithm aversion. In Proceedings of the 28th European Conference on Information Systems (ECIS), 168, 1-16.
- Jussupow, E., Spohrer, K., Heinzl, A., & Gawlitza, J. (2021). Augmenting medical diagnosis decisions? An investigation into physicians' decision-making process with artificial intelligence. Information Systems Research, 32(3), 713-735. https://doi.org/10.1287/isre.2020.0980
- Kerasidou, A., & Horn, R. (2016). Making space for empathy: Supporting doctors in the emotional labour of clinical care. BMC Medical Ethics, 17(1), 1-5. https://doi.org/10.1186/s12910-015-0083-z
- Khan, H., Sararueangpong, P., Mathmann, F., & Wang, D. (2023). Consumers' promotion focus mitigates the negative effects of chatbots on purchase likelihood. Journal of Consumer Behaviour, 23(3), 1528-1539.
- Kim, D., Kim, M., Baek, K., Lee, J., & Cho, H. (2021). A study on the effective operation of the qualification system for financial professionals in Korea: Focusing on the connection with qualification for financial planning. Financial Planning Review, 14(2), 89-114. https://doi.org/10.36029/FPR.2021.05.14.2.89
- Kim, J., Giroux, M., & Lee, J. C. (2021). When do you trust AI? The effect of number presentation detail on consumer trust and acceptance of AI recommendations. Psychology & Marketing, 38(7), 1140-1155. https://doi.org/10.1002/mar.21498
- Lee, M. K. (2018). Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management. Big Data & Society, 5(1), 205395171875668.
- Lee, M. K., & Baykal, S. (2017). Algorithmic mediation in group decisions. In Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing, 1035-1048.
- Leyer, M., & Schneider, S. (2019). Me, you or AI? How do we feel about delegation. In Proceedings of the 27th European Conference on Information Systems (ECIS).
- Liang, H., Tsui, B. Y., Ni, H., Valentim, C. C. S., Baxter, S. L., Liu, G., Cai, W., Kermany, D. S., Sun, X., Chen, J., He, L., Zhu, J., Tian, P., Shao, H., Zheng, L., Hou, R., Hewett, S., Li, G., Liang, P., & Zang, X. (2019). Evaluation and accurate diagnoses of pediatric diseases using artificial intelligence. Nature Medicine, 25(3), 433-438. https://doi.org/10.1038/s41591-018-0335-9
- Lieberman, H. (1997). Autonomous interface agents. In CHI '97: Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, 67-74.
- Logg, J. M., Minson, J. A., & Moore, D. A. (2019). Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes, 151, 90-103. https://doi.org/10.1016/j.obhdp.2018.12.005
- Longoni, C., Bonezzi, A., & Morewedge, C. K. (2019). Resistance to medical artificial intelligence. Journal of Consumer Research, 46(4), 629-650. https://doi.org/10.1093/jcr/ucz013
- Luo, X., Tong, S., Fang, Z., & Qu, Z. (2019). Frontiers: Machines vs. humans: The impact of artificial intelligence chatbot disclosure on customer purchases. Marketing Science, 38(6), 937-947.
- Malle, B. F., & Magar, S. T. (2017). What kind of mind do I want in my robot? Developing a measure of desired mental capacities in social robots. In Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, 195-196.
- Malle, B. F., & Zhao, X. (2023). The now and future of social robots as depictions. The Behavioral and Brain Sciences, 46, e39.
- McLean, G., & Osei-Frimpong, K. (2019). Hey Alexa ... examine the variables influencing the use of artificial intelligent in-home voice assistants. Computers in Human Behavior, 99, 28-37. https://doi.org/10.1016/j.chb.2019.05.009
- Mori, M., MacDorman, K., & Kageki, N. (2012). The uncanny valley [from the field]. IEEE Robotics & Automation Magazine, 19(2), 98-100.
- Nederhof, A. J. (1985). Methods of coping with social desirability bias: A review. European Journal of Social Psychology, 15(3), 263-280. https://doi.org/10.1002/ejsp.2420150303
- Niszczota, P., & Kaszas, D. (2020). Robo-investment aversion. PLoS ONE, 15(9), e0239277.
- Page, L., & Page, K. (2010). Last shall be first: A field study of biases in sequential performance evaluation on the idol series. Journal of Economic Behavior & Organization, 73(2), 186-198. https://doi.org/10.1016/j.jebo.2009.08.012
- Palmeira, M., & Spassova, G. (2015). Consumer reactions to professionals who use decision aids. European Journal of Marketing, 49(3/4), 302-326. https://doi.org/10.1108/EJM-07-2013-0390
- Pezzo, M. V., & Beckstead, J. W. (2020). Algorithm aversion is too often presented as though it were non-compensatory: A reply to Longoni et al. (2020). Judgment and Decision Making, 15(3), 449-451. https://doi.org/10.1017/S1930297500007245
- Promberger, M., & Baron, J. (2006). Do patients trust computers? Journal of Behavioral Decision Making, 19(5), 455-468. https://doi.org/10.1002/bdm.542
- Schulte Steinberg, A. L., & Hohenberger, C. (2023). Can AI close the gender gap in the job market? Individuals' preferences for AI evaluations. Computers in Human Behavior Reports, 10, 100287.
- Seyama, J., & Nagayama, R. S. (2007). The uncanny valley: Effect of realism on the impression of artificial human faces. Presence: Teleoperators and Virtual Environments, 16(4), 337-351. https://doi.org/10.1162/pres.16.4.337
- Shallow, C., Iliev, R., & Medin, D. (2011). Trolley problems in context. Judgment and Decision Making, 6(7), 593-601. https://doi.org/10.1017/S1930297500002631
- Sundar, S. S., & Kim, J. (2019). Machine heuristic: When we trust computers more than humans with our personal information. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 538, 1-9.
- van Esch, P., Black, J. S., & Ferolie, J. (2019). Marketing AI recruitment: The next phase in job application and selection. Computers in Human Behavior, 90(1), 215-222. https://doi.org/10.1016/j.chb.2018.09.009
- Wiafe, I., Koranteng, F. N., Obeng, E. N., Assyne, N., Wiafe, A., & Gulliver, S. R. (2020). Artificial intelligence for cybersecurity: A systematic mapping of literature. IEEE Access, 8, 146598-146612. https://doi.org/10.1109/ACCESS.2020.3013145
- Wieseke, J., Geigenmüller, A., & Kraus, F. (2012). On the role of empathy in customer-employee interactions. Journal of Service Research, 15(3), 316-331. https://doi.org/10.1177/1094670512439743
- Yam, K. C., Bigman, Y. E., Tang, P. M., Ilies, R., De Cremer, D., Soh, H., & Gray, K. (2020). Robots at work: People prefer-and forgive-service robots with perceived feelings. Journal of Applied Psychology, 106(10), 1557-1572.
- Yeomans, M., Shah, A., Mullainathan, S., & Kleinberg, J. (2019). Making sense of recommendations. Journal of Behavioral Decision Making, 32(4), 403-414. https://doi.org/10.1002/bdm.2118
- Youyou, W., Kosinski, M., & Stillwell, D. (2015). Computer-based personality judgments are more accurate than those made by humans. Proceedings of the National Academy of Sciences, 112(4), 1036-1040. https://doi.org/10.1073/pnas.1418680112
- Zhang, L., Pentina, I., & Fan, Y. (2021). Who do you choose? Comparing perceptions of human vs robo-advisor in the context of financial services. Journal of Services Marketing, 35(5), 634-646.
- Chui, M., Hazan, E., Roberts, R., Singla, A., Smaje, K., Sukharevsky, A., Yee, L., & Zemmel, R. (2023, June). The economic potential of generative AI: The next productivity frontier. McKinsey & Company. https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier#industry-impacts
- Durth, S., Hancock, B., Maor, D., & Sukharevsky, A. (2023, September). The organization of the future: Enabled by gen AI, driven by people. McKinsey & Company. https://www.mckinsey.com/capabilities/people-and-organizational-performance/our-insights/the-organization-of-the-future-enabled-by-gen-ai-driven-by-people#/
- Grady, D. (2019, May 20). A.I. took a test to detect lung cancer. It got an A. The New York Times. https://www.nytimes.com/2019/05/20/health/cancer-artificial-intelligence-ct-scans.html
- IFR. (2023, September). World Robotics 2023. IFR. https://ifr.org/img/worldrobotics/2023_WR_extended_version.pdf
- Lee, G. (2022, March 18). AI in healthcare: What now after Watson? ERP Today. https://erp.today/ai-in-healthcare-what-now-after-watson/
- Oxford Insights. (2023). Government AI Readiness Index 2023. Oxford Insights. https://oxfordinsights.com/ai-readiness/ai-readiness-index/
- Statista. (2023). Artificial Intelligence: In-depth market analysis. In Statista (pp. 90-104). https://www.statista.com/study/50485/in-depth-report-artificial-intelligence/
- Statista. (2024). Artificial intelligence (AI): Statistics report on artificial intelligence (AI) worldwide. In Statista (pp. 5-27). https://www.statista.com/study/38609/artificial-intelligence-ai-statista-dossier/
- Thomson Reuters. (2023). Stepping into the future: How generative AI will help lawyers improve legal service delivery (pp. 1-15). https://legal.thomsonreuters.com/content/dam/ewp-m/documents/legal/en/pdf/reports/lawyers-harnessing-power-of-ai.pdf