• Title/Summary/Keyword: Ethical AI

90 search results

A Study on Policy Instrument for the Development of Ethical AI-based Services for Enterprises: An Exploratory Analysis Using AHP (기업의 윤리적 인공지능 기반 서비스 개발을 위한 정책수단 연구: AHP를 활용한 탐색적 분석)

  • Changki Jang;MinSang Yi;WookJoon Sung
    • Journal of Information Technology Services / v.22 no.2 / pp.23-40 / 2023
  • Despite growing interest in and normative discussion of AI ethics, there has been little discussion of the policy instruments companies need in order to develop AI-based services in compliance with ethical principles. The purpose of this study is therefore to explore policy instruments that can encourage companies to voluntarily comply with and adopt AI ethical standards and self-checklists. The study reviews previous research and similar cases on AI ethics, conducts interviews with AI-related companies, and analyzes the data using AHP to derive action plans. In terms of desirability and feasibility, the research findings show that policy instruments that induce companies to develop AI-based services ethically should be prioritized, while regulatory instruments require a cautious approach. The findings also indicate that, as incentive policies, a consulting support policy staffed by experts from various fields who can support the application of AI ethics, as well as support for developing solutions that adhere to AI ethical standards, are necessary. Additionally, the participation and agreement of various stakeholders in establishing AI ethical standards are crucial, and policy instruments need to be continuously supplemented through implementation and feedback. This study is significant in that it presents, through an analytical methodology, the policy instruments companies need in order to develop ethical AI-based services, moving beyond discursive discussions of AI ethical principles. Further analysis of the effectiveness of policy instruments linked to AI ethical principles is needed to establish ethical AI-based service development.
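The abstract names AHP as the analytic method without spelling out the computation. A minimal sketch follows, showing how AHP priorities and a consistency ratio are typically derived from a pairwise comparison matrix; the criteria and judgment values are illustrative assumptions, not the paper's actual hierarchy or expert judgments.

```python
import numpy as np

# Hypothetical pairwise comparison matrix over three policy-instrument criteria
# (e.g., inducement, consulting support, regulation); values are illustrative only.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

# Priority vector: principal eigenvector of A, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency ratio (CR) = CI / RI, where RI is the random index for n criteria.
n = A.shape[0]
lambda_max = eigvals.real[k]
ci = (lambda_max - n) / (n - 1)
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]
cr = ci / ri

print("priorities:", np.round(weights, 3))   # relative importance of each criterion
print("consistency ratio:", round(cr, 3))    # CR < 0.1 is conventionally acceptable
```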

Navigating Ethical AI: A Comprehensive Analysis of Literature and Future Directions in Information Systems (AI와 윤리: 문헌의 종합적 분석과 정보시스템 분야의 향후 연구 방향)

  • Jinyoung Min
    • Knowledge Management Research / v.25 no.3 / pp.1-22 / 2024
  • As the use of AI becomes a reality in many aspects of daily life, the opportunities and benefits it brings are being highlighted, while concerns about the ethical issues it may cause are also increasing. The field of information systems, which studies the impact of technology on business and society, must contribute to ensuring that AI has a positive influence on human society. To achieve this, it is necessary to explore the direction of research in the information systems field by examining various studies related to AI and ethics. For this purpose, this study collected literature from 2020 to the present and analyzed their research topics through researcher coding and topic modeling methods. The analysis results categorized research topics into AI ethics principles, ethical AI design and development, ethical AI deployment and application, and ethical AI use. After reviewing the literature in each category to grasp the current state of research, this study suggested future research directions for AI ethics in the field of information systems.
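The review describes its topic-modeling step only at a high level. The sketch below shows the kind of LDA pipeline such a study might use; the toy abstracts and the choice of four topics (mirroring the four reported categories) are assumptions for illustration, not the paper's actual corpus or settings.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy corpus standing in for collected abstracts (illustrative only).
docs = [
    "principles and guidelines for trustworthy AI governance",
    "designing and developing ethical AI systems with fairness constraints",
    "deploying AI responsibly in organizations and public services",
    "user perceptions and ethical use of AI assistants",
]

# Bag-of-words representation, then LDA with four topics to echo the four
# categories reported in the paper (an assumption, not its actual setup).
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=4, random_state=0).fit(X)

# Print the top terms per topic for a quick qualitative reading.
terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[-3:][::-1]]
    print(f"topic {i}: {top}")
```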

Engineering Students' Ethical Sensitivity on Artificial Intelligence Robots (공학전공 대학생의 AI 로봇에 대한 윤리적 민감성)

  • Lee, Hyunok;Ko, Yeonjoo
    • Journal of Engineering Education Research / v.25 no.6 / pp.23-37 / 2022
  • This study evaluated engineering students' ethical sensitivity to an AI emotion-recognition robot scenario and explored its characteristics. For data collection, 54 students (27 majoring in Convergence Electronic Engineering and 27 in Computer Software) were asked to list five factors regarding the AI robot scenario. For the analysis of ethical sensitivity, we checked whether the students acknowledged AI ethical principles in the scenario, such as safety, controllability, fairness, accountability, and transparency. We also categorized students' levels as either informed or naive, based on whether they inferred specific situations and diverse outcomes and felt a responsibility to take action as engineers. As a result, 40.0% of students' responses contained AI ethical principles: safety 57.1%, controllability 10.7%, fairness 20.5%, accountability 11.6%, and transparency 0.0%. More students demonstrated ethical sensitivity at the naive level (76.8%) than at the informed level (23.2%). This study has implications for presenting an ethical sensitivity evaluation tool that can be used in educational settings and for applying it to engineering students to illustrate specific cases with varying levels of ethical sensitivity.

Issues and Trends Related to Artificial Intelligence in Research Ethics (연구윤리에서 인공지능 관련 이슈와 동향)

  • Sun-Hee Lee
    • Health Policy and Management / v.34 no.2 / pp.103-105 / 2024
  • Artificial intelligence (AI) technology is rapidly spreading across various industries. Accordingly, interest in ethical issues arising from the use of AI is also increasing. This is particularly true in the healthcare sector, where AI-related ethical issues are a significant topic due to its focus on health and life. Hence, this issue aims to examine the ethical concerns when using AI tools during research and publication processes. One of the key concerns is the potential for unintended plagiarism when researchers use AI tools for tasks such as translation, citation, and editing. Currently, as AI is not given authorship, the researcher is held accountable for any ethical problems arising from using AI tools. Researchers are advised to specify which AI tools were used and how they were employed in their research papers. As more cases of ethical issues related to AI tools accumulate, it is expected that various guidelines will be developed. Therefore, researchers should stay informed about global consensus and guidelines regarding the use of AI tools in the research and publication process.

ETRI AI Strategy #7: Preventing Technological and Social Dysfunction Caused by AI (ETRI AI 실행전략 7: AI로 인한 기술·사회적 역기능 방지)

  • Kim, T.W.;Choi, S.S.;Yeon, S.J.
    • Electronics and Telecommunications Trends / v.35 no.7 / pp.67-76 / 2020
  • With the development and spread of artificial intelligence (AI) technology, new security threats and adverse effects of AI have emerged as real problems as areas of use diversify and AI-based products and services are introduced to users. In response, new AI-based technologies need to be developed in the field of information protection and security. This paper reviews domestic and international trends in false-information detection technology, cybersecurity technology, and trust distribution platform technology, and establishes a direction for promoting their development. In addition, international trends in ethical AI guidelines, which aim to ensure the human-centered ethical validity of AI development processes and final systems in parallel with technology development, are analyzed and discussed. ETRI has developed AI policing, information protection, and security technologies, and has derived tasks and implementation strategies for preparing ethical AI development guidelines to ensure the reliability of AI based on its capabilities.

A Study on Information Bias Perceived by Users of AI-driven News Recommendation Services: Focusing on the Establishment of Ethical Principles for AI Services (AI 자동 뉴스 추천 서비스 사용자가 인지하는 정보 편향성에 대한 연구: AI 서비스의 윤리 원칙 수립을 중심으로)

  • Minjung Park;Sangmi Chai
    • Knowledge Management Research / v.25 no.3 / pp.47-71 / 2024
  • AI-driven news recommendation systems are widely used today, providing personalized news consumption experiences. However, there are significant concerns that these systems may increase users' information bias by mainly exposing them to information from limited perspectives. This lack of access to diverse information can prevent users from forming well-rounded viewpoints on specific issues, leading to social problems such as filter bubbles and echo chambers, which can deepen social divides and information inequality. This study aims to explore how AI-based news recommendation services affect users' perceived information bias and to create a foundation for ethical principles in AI services. Specifically, the study examines the impact of ethical principles such as accountability, the right to explanation, the right to choose, and privacy protection on users' perceptions of information bias in AI news systems. The findings emphasize the need for AI service providers to strengthen ethical standards to improve service quality and build the user trust required for long-term use. By identifying which ethical principles should be prioritized in the design and implementation of AI services, this study aims to help develop corporate ethical frameworks, internal policies, and national AI ethics guidelines.

A Conceptual Architecture for Ethic-Friendly AI

  • Oktian, Yustus-Eko;Brian, Stanley;Lee, Sang-Gon
    • Journal of the Korea Society of Computer and Information / v.27 no.4 / pp.9-17 / 2022
  • State-of-the-art AI systems pose many ethical issues, ranging from massive data collection to bias in algorithms. In response, this paper proposes a more ethic-friendly AI architecture that combines Federated Learning (FL) and Blockchain. We discuss the importance of each issue and provide requirements for an ethical AI system, showing how our solutions can achieve more ethical paradigms. By committing to our design, adopters can deliver AI services more ethically.
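The abstract pairs Federated Learning with Blockchain but gives no implementation detail. The fragment below sketches only the federated-averaging step, in which raw data stay on each client and only model weights are aggregated; the simple linear model, the synthetic client data, and the hyperparameters are assumptions, and the blockchain side of the proposal is not represented.

```python
import numpy as np

# Minimal FedAvg-style sketch: each client fits a local update on its own data,
# and only model weights (never raw data) are shared for aggregation.
def local_update(weights, X, y, lr=0.1, epochs=5):
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
# Four clients, each with its own private dataset (synthetic, illustrative only).
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(4)]
global_w = np.zeros(3)

for _ in range(10):
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients])
    # Aggregate: weighted average of client updates, proportional to local data size.
    global_w = np.average(local_ws, axis=0, weights=sizes)

print("aggregated weights:", np.round(global_w, 3))
```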

The impact of nursing students' biomedical and artificial intelligence ethical awareness, ethical values, and professional self-concept on their ethical decision-making confidence (간호대학생의 생명의료 및 인공지능 윤리의식, 윤리적 가치관, 전문직 자아개념이 윤리적 의사결정 자신감에 미치는 영향)

  • Park, Seungmi;Jang, Insun
    • The Journal of Korean Academic Society of Nursing Education / v.29 no.4 / pp.371-380 / 2023
  • Purpose: The purpose of this study is to determine the relationships among nursing students' biomedical and artificial intelligence (AI) ethical awareness, ethical values, professional self-concept, and ethical decision-making confidence, and then to identify factors that can influence their ethical decision-making confidence. Methods: This study employed a descriptive research design and was conducted from June 8 to 12, 2023, with 204 students from three nursing colleges in Korea. The collected data were analyzed by frequency and percentage, independent t-test, Pearson's correlation coefficient, and multiple regression using IBM SPSS 23.0. Results: The multiple regression analysis showed that the regression model was significant (F=18.88, p<.001) and that professional self-concept (β=.46, p<.001), ethics education (β=.23, p<.001), AI ethical awareness (β=.16, p=.020), and relativistic ethical values (β=.14, p=.035) explained 34.6% of the variance in the nursing students' ethical decision-making confidence. Conclusion: Professional self-concept, AI ethical awareness, and ethical values content should be included when constructing the curricula of educational programs in order to improve nursing students' ethical decision-making confidence.
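The reported β values are standardized coefficients from a multiple regression model. A minimal sketch of how such a model is typically fit (standardize the variables, then run OLS) is shown below; the data are simulated for illustration and do not reproduce the study's dataset or results.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 204  # sample size matching the abstract; the data themselves are simulated

# Simulated predictors named after the study's variables (values are not real data).
prof_self_concept = rng.normal(size=n)
ai_ethical_awareness = rng.normal(size=n)
relativistic_values = rng.normal(size=n)
ethics_education = rng.integers(0, 2, size=n).astype(float)
confidence = (0.5 * prof_self_concept + 0.2 * ai_ethical_awareness
              + 0.15 * relativistic_values + 0.25 * ethics_education
              + rng.normal(scale=1.0, size=n))

# Standardize everything so the fitted coefficients read as β (beta) values.
def z(v):
    return (v - v.mean()) / v.std()

X = np.column_stack([z(prof_self_concept), z(ai_ethical_awareness),
                     z(relativistic_values), z(ethics_education)])
model = sm.OLS(z(confidence), sm.add_constant(X)).fit()
print(model.summary())  # reports the F statistic, p-values, and R-squared
```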

Ethical and Legal Implications of AI-based Human Resources Management (인공지능(AI) 기반 인사관리의 윤리적·법적 영향)

  • Jungwoo Lee;Jungsoo Lee;Ji Hun Kwon;Minyi Cha;Kyu Tae Kim
    • Journal of the Institute of Convergence Signal Processing / v.25 no.2 / pp.100-112 / 2024
  • This study investigates the ethical and legal implications of utilizing artificial intelligence (AI) in human resource management, with a particular focus on AI interviews in the recruitment process. AI, defined as the capability of computer programs to perform tasks associated with human intelligence such as reasoning, learning, and adapting, is increasingly being integrated into HR practices. The deployment of AI in recruitment, specifically through AI-driven interviews, promises efficiency and objectivity but also raises significant ethical and legal concerns. These concerns include potential biases in AI algorithms, transparency in AI decision-making processes, data privacy issues, and compliance with existing labor laws and regulations. By analyzing case studies and reviewing relevant literature, this paper aims to provide a comprehensive understanding of these challenges and propose recommendations for ensuring ethical and legal compliance in AI-based HR practices. The findings suggest that while AI can enhance recruitment efficiency, it is imperative to establish robust ethical guidelines and legal frameworks to mitigate risks and ensure fair and transparent hiring practices.

The Ethics of AI in Online Marketing: Examining the Impacts on Consumer Privacy and Decision-making

  • Preeti Bharti;Byungjoo Park
    • International Journal of Internet, Broadcasting and Communication / v.15 no.2 / pp.227-239 / 2023
  • Online marketing is a rapidly growing industry that depends heavily on digital technologies and data analysis to reach and engage consumers effectively. Artificial intelligence (AI) has emerged as a crucial tool for online marketers, enabling them to analyze extensive consumer data and automate decision-making processes. The purpose of this study was to investigate the ethical implications of using AI in online marketing, focusing on its impact on consumer privacy and decision-making. AI has created new possibilities for personalized marketing, but it raises concerns about the collection and use of consumer data, the transparency and accountability of decision-making, and the impact on consumer autonomy and privacy. We reviewed the relevant literature and case studies to assess the potential risks and make recommendations for improving consumer protection. The findings provide insights into these ethical considerations and offer a roadmap for balancing the advantages of AI in online marketing with the protection of consumer rights; companies should weigh these issues when implementing AI in their marketing strategies.