• Title/Summary/Keyword: Responsible AI


Evaluating Conversational AI Systems for Responsible Integration in Education: A Comprehensive Framework

  • Utkarch Mittal;Namjae Cho;Giseob Yu
    • Journal of Information Technology Applications and Management / v.31 no.3 / pp.149-163 / 2024
  • As conversational AI systems such as ChatGPT have become more advanced, researchers are exploring ways to use them in education. However, effective methods are needed to evaluate these systems before allowing them to help teach students. This study proposes a detailed framework for testing conversational AI against three important criteria. First, specialized benchmarks measure skills such as giving clear explanations, adapting to context during long dialogues, and maintaining a consistent teaching personality. Second, adaptive standards check whether the systems meet the ethical requirements of privacy, fairness, and transparency; these standards are regularly updated to match societal expectations. Lastly, evaluations are conducted from three perspectives: technical accuracy on test datasets, performance during simulations with groups of virtual students, and feedback from real students and teachers using the system. This framework provides a robust methodology for identifying the strengths and weaknesses of conversational AI before its deployment in schools. It emphasizes assessments tailored to the critical qualities of dialogic intelligence, user-centric metrics capturing real-world impact, and ethical alignment through participatory design. Responsible innovation with AI assistants requires evidence that they can enhance accessible, engaging, and personalized education without disrupting teaching effectiveness or student agency.

A Study on Improving Data Poisoning Attack Detection against Network Data Analytics Function in 5G Mobile Edge Computing (5G 모바일 에지 컴퓨팅에서 빅데이터 분석 기능에 대한 데이터 오염 공격 탐지 성능 향상을 위한 연구)

  • Ji-won Ock;Hyeon No;Yeon-sup Lim;Seong-min Kim
    • Journal of the Korea Institute of Information Security & Cryptology / v.33 no.3 / pp.549-559 / 2023
  • As mobile edge computing (MEC) gains attention as a core technology of 5G networks, edge AI based on mobile user data is increasingly being used in various fields within the 5G network environment. However, as in traditional AI security, there is a possibility of adversarial interference with the standard 5G network functions in the core network that are responsible for core edge AI functions. In addition, research on data poisoning attacks that can occur in the standalone-mode MEC environment defined in the 3GPP 5G standards is currently insufficient compared to research on existing LTE networks. In this study, we explore a threat model for an MEC environment using NWDAF, the network function responsible for the core edge AI functionality in 5G, and, as a proof of concept, propose a feature selection method that improves the detection of data poisoning attacks against Leaf NWDAF. With the proposed methodology, we achieved a maximum detection rate of 94.9% for Slowloris-based data poisoning attacks on NWDAF.
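To make the proposed detection step more concrete, the following is a minimal sketch of a feature-selection-plus-classifier pipeline of the kind the abstract describes. It is not the authors' implementation: the input file, column names, the SelectKBest/RandomForest choices, and the 0/1 poisoning label are all illustrative assumptions.

```python
# Hypothetical sketch: select the most informative NWDAF traffic features, then train a
# classifier to flag poisoned records. All names below are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("nwdaf_traffic_features.csv")      # assumed export of Leaf NWDAF analytics data
X = df.drop(columns=["poisoned"])                   # numeric traffic features
y = df["poisoned"]                                  # assumed label: 1 = poisoned (e.g., Slowloris-based)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

# Keep only the k features that carry the most information about the poisoning label.
selector = SelectKBest(mutual_info_classif, k=min(10, X.shape[1])).fit(X_tr, y_tr)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(selector.transform(X_tr), y_tr)

# Recall on the poisoned class corresponds to the "detection rate" reported in the abstract.
print("detection rate:", recall_score(y_te, clf.predict(selector.transform(X_te))))
```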

The Regulation of AI: Striking the Balance Between Innovation and Fairness

  • Kwang-min Lee
    • Journal of the Korea Society of Computer and Information / v.28 no.12 / pp.9-22 / 2023
  • In this paper, we propose a balanced approach to AI regulation, focused on harnessing the potential benefits of artificial intelligence while upholding fairness and ethical responsibility. With the increasing integration of AI systems into daily life, it is essential to develop regulations that prevent harmful biases and the unfair disadvantaging of certain demographic groups. Our approach involves analyzing regulatory frameworks and case studies of AI applications to ensure responsible development and application. We aim to contribute to ongoing discussions around AI regulation, helping to establish policies that balance innovation with fairness and thereby drive economic progress and societal advancement in the age of artificial intelligence.

Exploratory Analysis of AI-based Policy Decision-making Implementation

  • SunYoung SHIN
    • International Journal of Internet, Broadcasting and Communication / v.16 no.1 / pp.203-214 / 2024
  • This study seeks to provide implications for related domestic policies through an exploratory analysis of research supporting AI-based policy decision-making. The following should be considered when establishing an AI-based decision-making model in Korea. First, we need to understand the impact that the use of AI will have on policy and the service sector. The positive and negative impacts of AI use need to be better understood from a public-value perspective, taking into account the different levels of governance and interests across public policy and service sectors. Second, reliability is essential for implementing innovative AI systems. In most organizations today, comprehensive AI model frameworks for enabling and operationalizing trust, accountability, and transparency are insufficient or absent, and access to effective guidance, key practices, or government regulations is limited. Third, AI systems must be accountable. The OECD AI Principles set out five value-based principles for the responsible management of trustworthy AI: inclusive growth, sustainable development and well-being; human-centered values and fairness; transparency and explainability; robustness, security and safety; and accountability. Korea should build its AI-based decision-making system on this basis, and efforts should be made to construct a system that can support policy by reflecting these principles. A limitation of this study is that it is an exploratory analysis of existing research data; we therefore suggest that future research collect opinions from experts in related fields. The expected contribution of this study is analytical research on AI-based decision-making systems that supports policy establishment and research in related fields.

Applying NIST AI Risk Management Framework: Case Study on NTIS Database Analysis Using MAP, MEASURE, MANAGE Approaches (NIST AI 위험 관리 프레임워크 적용: NTIS 데이터베이스 분석의 MAP, MEASURE, MANAGE 접근 사례 연구)

  • Jung Sun Lim;Seoung Hun Bae;Taehoon Kwon
    • Journal of Korean Society of Industrial and Systems Engineering / v.47 no.2 / pp.21-29 / 2024
  • Fueled by international efforts towards AI standardization, including those of the European Commission, the United States, and international organizations, this study introduces an AI-driven framework for analyzing advancements in drone technology. Utilizing project data retrieved from the NTIS DB via the "drone" keyword, the framework employs a diverse toolkit of supervised learning methods (Keras MLP, XGBoost, LightGBM, and CatBoost) enhanced by BERTopic, a natural language analysis tool. This multifaceted approach supports both comprehensive data quality evaluation and in-depth structural analysis of the documents. Furthermore, a 6T-based classification method filters out non-applicable data for year-on-year AI analysis, measurably improving classification accuracy. Utilizing the power of AI, including GPT-4, this research unveils year-on-year trends in emerging keywords and employs them to generate detailed summaries, enabling efficient processing of large text datasets and offering an AI analysis system applicable to policy domains. Notably, this study not only advances methodologies aligned with AI Act standards but also lays the groundwork for responsible AI implementation through the analysis of government research and development investments.
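The toolkit named in the abstract can be pictured with a short sketch: BERTopic for the document-structure side and one of the gradient-boosting learners (XGBoost here) for a supervised data-quality check. This is a plausible reconstruction under stated assumptions, not the paper's actual pipeline; the file, column names, and the 0/1 "applicable" label are hypothetical.

```python
# Illustrative sketch only: topic modeling of NTIS project abstracts plus a supervised
# applicability check. File and column names are assumptions, not the paper's data schema.
import pandas as pd
from bertopic import BERTopic
from xgboost import XGBClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("ntis_drone_projects.csv")         # assumed export for the "drone" keyword
docs = df["abstract"].tolist()

# Unsupervised structure: cluster abstracts into topics and inspect emerging keywords.
topic_model = BERTopic(language="multilingual")
topics, _ = topic_model.fit_transform(docs)
print(topic_model.get_topic_info().head())

# Supervised check: predict an assumed 0/1 "applicable" flag (e.g., from 6T-based screening)
# using the topic assignments as simple categorical features.
X = pd.get_dummies(pd.Series(topics, name="topic"))
y = df["applicable"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = XGBClassifier(n_estimators=300, max_depth=4).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```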

Applications and Concerns of Generative AI: ChatGPT in the Field of Occupational Health (산업보건분야에서의 생성형 AI: ChatGPT 활용과 우려)

  • Ju Hong Park;Seunghon Ham
    • Journal of Korean Society of Occupational and Environmental Hygiene / v.33 no.4 / pp.412-418 / 2023
  • As advances in artificial intelligence (AI) increasingly approach areas once relegated to the realm of science fiction, there is growing public interest in using these technologies for practical everyday tasks in both the home and the workplace. This paper explores the applications and implications of using ChatGPT, a conversational AI model based on GPT-3.5 and GPT-4.0, in the field of occupational health and safety. After gaining over one million users within five days of its launch, ChatGPT has shown promise in addressing issues ranging from emergency response to chemical exposure to recommendations for personal protective equipment. However, despite its potential usefulness, the integration of AI into scientific work and professional settings raises several concerns. These include the ethical dimensions of recognizing AI as a co-author in academic publications, the limitations and biases inherent in the data used to train these models, legal responsibilities in professional contexts, and potential shifts in employment following technological advances. This paper aims to provide a comprehensive overview of these issues and to contribute to the ongoing dialogue on the responsible use of AI in occupational health and safety.

Analysis of Environmentally Responsible Behaviors based on a Typology of Activity Involvement and Place Attachment - Focuses on Visitors to Namhansanseong Provincial Park - (활동관여-장소애착 유형에 따른 환경책임행동분석 - 남한산성 도립공원 방문객을 대상으로 -)

  • Kim, Hyun;Song, Hwasung;Kim, Yeeun
    • Journal of the Korean Institute of Landscape Architecture / v.43 no.3 / pp.114-124 / 2015
  • The concepts of activity involvement (AI) and place attachment (PA) are useful for explaining the sustainable use of natural resources by humans. Although several studies have investigated the effects of AI and PA on environmental behaviors and discussed their implications, they have not examined the simultaneous effects of both AI and PA. The purpose of this study was therefore to develop a typology based on both AI and PA and to use it to explain visitors' environmentally responsible behaviors. The study surveyed 587 users of the main trail in Namhansanseong Provincial Park. The results were analyzed using frequency, reliability, and factor analyses, cross-tabulation, t-tests, correlation, and ANOVA. The typology identified four subgroups of hikers based on involvement in hiking and attachment to the setting. The results also indicate that environmentally responsible behaviors vary significantly across the typology; in detail, both general and specific environmental behaviors differed significantly between the four groups. These findings suggest that PA plays a more powerful role than AI in relation to environmental behavior. While more involved and more attached hikers were more active in environmental behaviors, less involved and less attached hikers showed a more passive attitude. In this respect, the study emphasizes that future resource management for tourism and outdoor recreation may be established based on visitors' activity experience in a particular place.
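As a small illustration of the group comparison the abstract reports, the sketch below runs a one-way ANOVA of an environmentally responsible behavior score across the four involvement-attachment groups. The survey file and column names are hypothetical; this is not the authors' analysis script.

```python
# Illustrative only: one-way ANOVA of an ERB score across four AI-PA typology groups.
import pandas as pd
from scipy import stats

df = pd.read_csv("namhansanseong_survey.csv")             # assumed: one row per respondent
groups = [g["erb_score"].values for _, g in df.groupby("ai_pa_group")]  # four typology groups

f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")             # small p: ERB differs across groups
```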

A Study on the Possibility of Utilizing Artificial Intelligence for National Crisis Management: Focusing on the Management of Artificial Intelligence and R&D Cases (국가위기관리를 위한 인공지능 활용 가능성에 관한 고찰: 인공지능 운용과 연구개발 사례를 중심으로)

  • Choi, Won-sang
    • Journal of Digital Convergence / v.19 no.3 / pp.81-88 / 2021
  • Modern society is exposed to various types of crises. In particular, since the September 11 attacks, each country has become increasingly responsible for managing non-military crises. The purpose of this study is therefore to consider ways of utilizing artificial intelligence (AI) for national crisis management in the era of the Fourth Industrial Revolution. To this end, we analyzed the effectiveness of AI systems in operation and under research and development (R&D) for supporting human decision-making, and examined the possibility of applying AI to national crisis management. The study found that AI provides policymakers with objective, data-based situational judgments and optimal countermeasures, enabling them to make decisions in urgent crisis situations and indicating that using AI for national crisis management is efficient. These findings suggest the possibility of using AI to respond quickly and efficiently to national crises.

Utilizing the Orange Platform for Enhancing Artificial Intelligence Education in the Department of Radiological Science at Universities (대학 방사선학과 인공지능 교육 활성화를 위한 Orange 플랫폼 이용 사례)

  • Kyoungho Choi
    • Journal of radiological science and technology / v.47 no.4 / pp.255-262 / 2024
  • Although a universally accepted definition of artificial intelligence (AI) remains elusive, the term has gained widespread familiarity owing to its pervasive integration across diverse domains of daily life. The application of AI in healthcare, notably in radiographic imaging, is no longer a matter of science fiction but a reality. Consequently, AI education has emerged as an indispensable requirement for radiological technologists working in the field of radiology. This paper underscores this imperative and advocates incorporating AI education using the Orange platform in university radiological science departments as part of the solution. Furthermore, the paper presents a case study featuring machine learning analysis of structured data on exposure doses for radiation-related workers and of unstructured X-ray data from 69 COVID-19-infected cases and 25 individuals with normal findings. The importance of AI education for radiology professionals emphasized in this research is expected to contribute to the future job stability of radiologic practitioners.
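For readers unfamiliar with Orange, the snippet below is a minimal sketch of its Python scripting interface; the classroom case described in the paper would more typically be built on Orange's visual workflow canvas, and the bundled "iris" dataset merely stands in for the exposure-dose and X-ray data, which are not available here.

```python
# Minimal Orange3 scripting sketch (illustrative stand-in for the paper's case study).
import Orange

data = Orange.data.Table("iris")                  # bundled demo dataset as a placeholder
learner = Orange.classification.TreeLearner()     # decision-tree learner
model = learner(data)                             # fit: Orange learners are callable on a table

predictions = model(data[:5])                     # predicted class indices for a few rows
print([data.domain.class_var.values[int(p)] for p in predictions])
```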

The transformative impact of large language models on medical writing and publishing: current applications, challenges and future directions

  • Sangzin Ahn
    • The Korean Journal of Physiology and Pharmacology / v.28 no.5 / pp.393-401 / 2024
  • Large language models (LLMs) are rapidly transforming medical writing and publishing. This review article focuses on experimental evidence to provide a comprehensive overview of the current applications, challenges, and future implications of LLMs across the stages of the academic research and publishing process. Global surveys reveal a high prevalence of LLM usage in scientific writing, with both potential benefits and challenges associated with their adoption. LLMs have been successfully applied in literature search, research design, writing assistance, quality assessment, citation generation, and data analysis. LLMs have also been used in peer review and publication processes, including manuscript screening, generating review comments, and identifying potential biases. To ensure the integrity and quality of scholarly work in the era of LLM-assisted research, responsible artificial intelligence (AI) use is crucial. Researchers should prioritize verifying the accuracy and reliability of AI-generated content, maintain transparency in the use of LLMs, and develop collaborative human-AI workflows. Reviewers should focus on higher-order reviewing skills and be aware of the potential use of LLMs in manuscripts. Editorial offices should develop clear policies and guidelines on AI use and foster open dialogue within the academic community. Future directions include addressing the limitations and biases of current LLMs, exploring innovative applications, and continuously updating policies and practices in response to technological advancements. Collaborative efforts among stakeholders are necessary to harness the transformative potential of LLMs while maintaining the integrity of medical writing and publishing.