• Title/Abstract/Keyword: AI Reliability

Search results: 207 (processing time: 0.023 s)

Roadmap Toward Certificate Program for Trustworthy Artificial Intelligence

  • Han, Min-gyu;Kang, Dae-Ki
    • International Journal of Advanced Smart Convergence / Vol. 10, No. 3 / pp. 59-65 / 2021
  • In this paper, we propose AI certification standardization activities for the systematic research and planning needed to standardize trustworthy artificial intelligence (AI). The activities are twofold. In Stage 1, we investigate the scope and possibility of standardization through research on AI reliability technology targeting international standards organizations, and we establish an AI reliability technology standard and an AI reliability verification scheme to demonstrate the feasibility of AI reliability technology/certification standards. In Stage 2, based on the technical specifications established in the previous stage, we establish an AI reliability certification program for the verification of products, systems, and services. Alongside the establishment of the certification system, global InterOp (interoperability test) events and international standards meetings and seminars on AI reliability certification are to be held to spread AI reliability certification. Finally, TAIPP (Trustworthy AI Partnership Project) will be established, with the participation of relevant standards organizations and industries, to maintain and develop the standards and certification programs and thereby ensure the governance of AI reliability certification standards.

채용 전형에서 인공지능 기술 도입이 입사 지원의도에 미치는 영향 (The Impact of Artificial Intelligence Adoption in Candidates Screening and Job Interview on Intentions to Apply)

  • 이환우;이새롬;정경철
    • 한국정보시스템학회지:정보시스템연구 / Vol. 28, No. 2 / pp. 25-52 / 2019
  • Purpose: Despite the recent increase in the use of selection tools based on artificial intelligence (AI), far less is known about their effectiveness in recruitment and selection research. Design/methodology/approach: This paper tests the impact of AI-based initial screening and AI-based interviews on intentions to apply. We also examine the moderating role of an individual difference (reliability on technology) in this relationship. Findings: Using policy capturing with undergraduate students at a large university in South Korea, this study showed that AI-based interviews have a negative effect on intentions to apply, whereas AI-based initial screening has no effect. These results suggest that applicants may feel negatively about AI-based interviews but not about AI-based initial screening; in other words, AI-based interviews can reduce application rates, but AI-based screening does not. Results also indicated that the relationship between AI-based initial screening and intentions to apply is moderated by the applicant's level of reliability on technology: respondents with high reliability on technology are more likely than those with low reliability to apply to firms using AI-based initial screening. However, the moderating role of reliability was not significant in the relationship between AI-based interviews and intentions to apply. Employing uncertainty reduction theory, this study indicates that the relationship between AI-based selection tools and intentions to apply is dynamic, suggesting that organizations should carefully manage their AI-based selection techniques throughout the recruitment and selection process.
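The moderation test this abstract describes can be sketched as an ordinary-least-squares regression with an interaction term. Everything below is an assumption for illustration: the variable names, effect sizes, and data are synthetic, and the paper's policy-capturing design is not reproduced.

```python
# Hedged sketch of a moderation (interaction) test: regress intention to apply
# on an AI-screening indicator, trust in technology, and their product term.
import numpy as np

rng = np.random.default_rng(0)
n = 200
ai_screening = rng.integers(0, 2, n)   # 1 = firm uses AI-based initial screening
trust = rng.normal(0, 1, n)            # applicant's reliability on technology
# Simulated outcome: screening hurts intentions unless trust is high
intent = (3.0 - 0.4 * ai_screening + 0.2 * trust
          + 0.5 * ai_screening * trust + rng.normal(0, 0.5, n))

# Design matrix: intercept, main effects, interaction
X = np.column_stack([np.ones(n), ai_screening, trust, ai_screening * trust])
beta, *_ = np.linalg.lstsq(X, intent, rcond=None)
print("interaction coefficient:", round(beta[3], 2))  # positive => trust moderates the effect
```

A significantly nonzero interaction coefficient is what "moderated by the level of applicant's reliability on technology" means operationally.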

융복합 시대에 일부 보건계열 전공 학생들의 의료용 인공지능에 대한 기대도 (The Expectation of Medical Artificial Intelligence of Students Majoring in Health in Convergence Era)

  • 문자영;심선주
    • 한국융합학회논문지 / Vol. 9, No. 9 / pp. 97-104 / 2018
  • This study surveyed 500 undergraduates majoring in health-related fields at a university in Cheonan, Chungcheongnam-do, on their awareness of artificial intelligence, their trust in medical AI, and their expectations for its use, in order to provide baseline data for the broader application of medical AI in healthcare. 18.6% of respondents reported high awareness of medical AI, 24.8% reported high trust in it, and 38% were in favor of its use. Higher awareness of and trust in AI were associated with higher expectations for its use in healthcare. These results suggest that education on medical AI within health-related majors can raise awareness, trust, and expectations, and thus serve as a cornerstone for building an efficient healthcare environment that makes use of medical AI.

A reliable intelligent diagnostic assistant for nuclear power plants using explainable artificial intelligence of GRU-AE, LightGBM and SHAP

  • Park, Ji Hun;Jo, Hye Seon;Lee, Sang Hyun;Oh, Sang Won;Na, Man Gyun
    • Nuclear Engineering and Technology / Vol. 54, No. 4 / pp. 1271-1287 / 2022
  • When abnormal operating conditions occur in nuclear power plants, operators must identify the cause and implement the necessary mitigation measures. Accordingly, the operator must rapidly and accurately analyze the symptom requirements of more than 200 abnormal scenarios from the trends of many variables in order to perform diagnostic tasks and implement mitigation actions quickly. However, the characteristics of these diagnostic tasks increase the probability of human error. Research on AI-based diagnostic tasks has been conducted recently to reduce the likelihood of human error; however, reliability issues arising from the black-box character of AI have been pointed out. Hence, the application of eXplainable Artificial Intelligence (XAI), which can provide operators with the evidence behind an AI diagnosis, is considered. Accordingly, XAI that addresses the reliability problem of AI is incorporated into the AI-based diagnostic algorithm. A reliable intelligent diagnostic assistant based on the merged diagnostic algorithm is developed in the form of an operator support system, including an interface that informs operators efficiently.

설명 가능한 인공지능(XAI)을 활용한 침입탐지 신뢰성 강화 방안 (The Enhancement of intrusion detection reliability using Explainable Artificial Intelligence(XAI))

  • 정일옥;최우빈;김수철
    • 융합보안논문지 / Vol. 22, No. 3 / pp. 101-110 / 2022
  • As the use of artificial intelligence grows across many fields, attempts to solve issues in intrusion detection with AI are also increasing. However, most approaches are black-box models whose predictions cannot be explained or traced, which poses difficulties for the security experts who must act on them. To address this, research on explainable AI (XAI), which helps interpret and understand machine-learning decisions, has been growing in various fields. This paper proposes an explainable-AI approach to strengthen trust in machine-learning-based intrusion-detection predictions. We first implement an intrusion-detection model with XGBoost and then implement an explanation of the model using SHAP. By comparing conventional feature importance with the SHAP-based results, we give security experts grounds for trusting the model's decisions. The experiments used the PKDD2007 dataset; analyzing the relationship between conventional feature importance and SHAP values verified that SHAP-based explainable AI is a valid way to give security experts confidence in the predictions of an intrusion-detection model.
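The comparison this abstract describes — a boosted-tree detector's built-in feature importances set against a model-agnostic explanation — can be sketched as follows. This is an assumption-laden stand-in, not the paper's pipeline: sklearn's GradientBoostingClassifier replaces XGBoost, permutation importance plays the role SHAP plays in the paper, and the data is synthetic rather than PKDD2007.

```python
# Sketch: train a boosted-tree "detector", then compare two feature-importance
# views, analogous to the paper's feature-importance-vs-SHAP comparison.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=8, n_informative=3,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Impurity-based importances (the "conventional feature importance")
builtin_rank = np.argsort(model.feature_importances_)[::-1]

# Model-agnostic importances measured on held-out data (the role SHAP plays)
perm = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
perm_rank = np.argsort(perm.importances_mean)[::-1]

# Agreement between the two rankings is the trust check: explanations that
# converge on the same top features give analysts grounds to believe the model.
print("built-in top-3:    ", builtin_rank[:3])
print("permutation top-3: ", perm_rank[:3])
```

In the paper's setting one would instead call SHAP's tree explainer on the trained XGBoost model and compare mean absolute SHAP values per feature against the model's gain-based importances.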

A Study on Factors Influencing AI Learning Continuity : Focused on Business Major Students

  • 박소현
    • 한국정보시스템학회지:정보시스템연구 / Vol. 32, No. 4 / pp. 189-210 / 2023
  • Purpose: This study aims to investigate factors that positively influence business major students' continuity of Artificial Intelligence (AI) learning. Design/methodology/approach: To evaluate the impact of AI education, a survey was conducted among 119 business-related majors who had completed a software/AI course. Frequency analysis was employed to examine the general characteristics of the sample. Furthermore, factor analysis with Varimax rotation was conducted to validate the variables derived from the survey items, and Cronbach's α coefficient was used to measure the reliability of the variables. Findings: Positive correlations were observed between business major students' AI Learning Continuity and their AI Interest, AI Awareness, and major-related Data Analysis Capability. The study also identified AI Project Awareness and AI Literacy Capability as pivotal mediators in fostering AI Learning Continuity: students who acquired problem-solving skills and related technologies through AI Project Awareness showed increased motivation for AI Learning Continuity. Lastly, AI Self-Efficacy significantly influences students' AI Learning Continuity.
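The Cronbach's α reliability check named in this abstract has a compact closed form: α = k/(k−1) · (1 − Σ item variances / variance of the summed scale). A minimal sketch, under the assumption that the 5-point responses below are invented rather than the study's survey data:

```python
# Cronbach's alpha for a multi-item scale: internal-consistency reliability.
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Four hypothetical Likert items answered by six respondents
responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 4, 3, 3],
])
print("alpha =", round(cronbach_alpha(responses), 3))
```

Values above roughly 0.7 are conventionally read as acceptable reliability for a derived variable, which is the standard the study's variables would have been held to.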

ETRI AI 실행전략 7: AI로 인한 기술·사회적 역기능 방지 (ETRI AI Strategy #7: Preventing Technological and Social Dysfunction Caused by AI)

  • 김태완;최새솔;연승준
    • 전자통신동향분석 / Vol. 35, No. 7 / pp. 67-76 / 2020
  • With the development and spread of artificial intelligence (AI) technology, new security threats and adverse effects of AI have emerged as real problems as areas of use diversify and AI-based products and services are introduced to users. In response, new AI-based technologies must be developed in the field of information protection and security. This paper reviews domestic and international trends in false-information detection technology, cyber-security technology, and trust-distribution platform technology, and establishes a direction for promoting technology development. In addition, international trends in ethical AI guidelines, which ensure the human-centered ethical validity of AI development processes and final systems in parallel with technology development, are analyzed and discussed. ETRI has developed AI policing, information protection, and security technologies, and has derived tasks and implementation strategies for preparing ethical AI development guidelines to ensure the reliability of AI based on its capabilities.

Clinical Validation of a Deep Learning-Based Hybrid (Greulich-Pyle and Modified Tanner-Whitehouse) Method for Bone Age Assessment

  • Kyu-Chong Lee;Kee-Hyoung Lee;Chang Ho Kang;Kyung-Sik Ahn;Lindsey Yoojin Chung;Jae-Joon Lee;Suk Joo Hong;Baek Hyun Kim;Euddeum Shim
    • Korean Journal of Radiology / Vol. 22, No. 12 / pp. 2017-2025 / 2021
  • Objective: To evaluate the accuracy and clinical efficacy of a hybrid Greulich-Pyle (GP) and modified Tanner-Whitehouse (TW) artificial intelligence (AI) model for bone age assessment. Materials and Methods: A deep learning-based model was trained on an open dataset of multiple ethnicities. A total of 102 hand radiographs (51 male and 51 female; mean age ± standard deviation = 10.95 ± 2.37 years) from a single institution were selected for external validation. Three human experts performed bone age assessments based on the GP atlas to develop a reference standard. Two study radiologists performed bone age assessments with and without AI model assistance in two separate sessions, for which the reading time was recorded. The performance of the AI software was assessed by comparing the mean absolute difference between the AI-calculated bone age and the reference standard. The reading time was compared between reading with and without AI using a paired t test. Furthermore, the reliability between the two study radiologists' bone age assessments was assessed using intraclass correlation coefficients (ICCs), and the results were compared between reading with and without AI. Results: The bone ages assessed by the experts and the AI model were not significantly different (11.39 ± 2.74 years and 11.35 ± 2.76 years, respectively, p = 0.31). The mean absolute difference was 0.39 years (95% confidence interval, 0.33-0.45 years) between the automated AI assessment and the reference standard. The mean reading time of the two study radiologists was reduced from 54.29 to 35.37 seconds with AI model assistance (p < 0.001). The ICC of the two study radiologists slightly increased with AI model assistance (from 0.945 to 0.990). Conclusion: The proposed AI model was accurate for assessing bone age. Furthermore, this model appeared to enhance the clinical efficacy by reducing the reading time and improving the inter-observer reliability.
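The two statistics this abstract leans on — a paired t-test on reading times and an intraclass correlation coefficient (ICC) for inter-observer agreement — can be sketched directly. All numbers below are made up for illustration, not the study's data, and the ICC form chosen here (two-way random effects, absolute agreement, single rater, i.e. ICC(2,1)) is an assumption, since the abstract does not state which ICC variant was used.

```python
# Paired t-test on reading times, and ICC(2,1) for two raters' agreement.
import numpy as np
from scipy import stats

# Hypothetical paired reading times (seconds) for the same cases,
# without and with AI assistance
t_without = np.array([58.1, 52.4, 60.3, 49.8, 55.0, 50.2])
t_with    = np.array([36.0, 34.2, 38.9, 31.5, 35.8, 33.7])
t_stat, p_value = stats.ttest_rel(t_without, t_with)

def icc2_1(ratings):
    """ICC(2,1) from an (n_subjects, n_raters) matrix, via two-way ANOVA."""
    n, k = ratings.shape
    grand = ratings.mean()
    ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()  # subjects
    ss_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum()  # raters
    ss_tot = ((ratings - grand) ** 2).sum()
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = (ss_tot - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical bone-age estimates (years) by two raters for five radiographs
ratings = np.array([[10.5, 10.8], [12.0, 12.1], [9.3, 9.6],
                    [13.2, 13.0], [11.1, 11.4]])
print("paired t-test p =", p_value)
print("ICC(2,1) =", round(icc2_1(ratings), 3))
```

A small p-value supports "reading time was reduced", and an ICC near 1 corresponds to the near-perfect inter-observer agreement (0.990) reported with AI assistance.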

How Trust in Human-like AI-based Service on Social Media Will Influence Customer Engagement: Exploratory Research to Develop the Scale of Trust in Human-like AI-based Service

  • Jin Jingchuan;Shali Wu
    • Asia Marketing Journal / Vol. 26, No. 2 / pp. 129-144 / 2024
  • This research examines how people's trust in human-like AI-based services influences customer engagement (CE). It discusses the relationship between trust and CE and explores how people's trust in AI affects CE when they lack knowledge of the company/brand. Items from the philosophical study of trust were extracted to build a scale suitable for trust in AI. The scale's reliability was ensured, and six components of trust in AI were merged into three dimensions: trust based on quality assurance, risk taking, and corporate social responsibility. Trust based on quality assurance and on risk taking was found to positively impact customer engagement, and feelings about the AI-based service fully mediate between all three dimensions of trust in AI and CE. The new trust scale for human-like AI-based services on social media sheds light on further research, and the relationship between trust in AI and CE provides a theoretical basis for subsequent studies.

AI 기반 보안관제의 문제점 고찰 (A Study on the Problems of AI-based Security Control)

  • 안중현;최영렬;백남균
    • 한국정보통신학회:학술대회논문집 / 한국정보통신학회 2021 Fall Conference / pp. 452-454 / 2021
  • The security-monitoring market currently operates on AI technology. AI is used to detect threats within the large volumes of logs and big data generated across security devices, and to relieve time and manpower constraints. Even with AI applied, however, problems continue to arise. The security-monitoring market is contending with many problems beyond those introduced here; this paper addresses five of them: AI model selection, AI standardization, the accuracy and reliability of big data, attribution of responsibility, and the lack of AI validity. We examine these problems, which arise even when AI technology is applied to the security-monitoring environment.
