• Title/Summary/Keyword: Explainable AI

Explainable & Safe Artificial Intelligence in Radiology (의료 영상 분석을 위한 설명 가능하고 안전한 인공지능)

  • Synho Do
    • Journal of the Korean Society of Radiology / v.85 no.5 / pp.834-847 / 2024
  • Artificial intelligence (AI) is transforming radiology with improved diagnostic accuracy and efficiency, but prediction uncertainty remains a critical challenge. This review examines key sources of uncertainty (out-of-distribution, aleatoric, and model uncertainties) and highlights the importance of independent confidence metrics and explainable AI for safe integration. Independent confidence metrics assess the reliability of AI predictions, while explainable AI provides transparency, enhancing collaboration between AI and radiologists. The development of zero-error tolerance models, designed to minimize errors, sets new standards for safety. Addressing these challenges will enable AI to become a trusted partner in radiology, advancing care standards and patient outcomes.

Explainable AI Application for Machine Predictive Maintenance (설명 가능한 AI를 적용한 기계 예지 정비 방법)

  • Cheon, Kang Min;Yang, Jaekyung
    • Journal of Korean Society of Industrial and Systems Engineering / v.44 no.4 / pp.227-233 / 2021
  • Predictive maintenance is an important application of data science that builds a predictive model from the large volumes of data collected from managed equipment. Rather than predicting equipment failure from just one or two signs, it quantifies and models numerous symptoms together with historical failure records. Statistical methods were long used for this purpose, but many machine learning-based methods have recently been proposed and are preferred for their more accurate predictions. However, apart from some models such as decision trees, it is very difficult to inspect the structure of a learned model (a black-box model) or to explain to what extent particular attributes (features or variables) affected its predictions. Explainable artificial intelligence (AI) has recently been proposed to overcome this problem: a methodology that makes it easier for users to understand and trust the results of machine learning models. In this paper, we propose an explainable AI method that further enhances the explanatory power of a previously proposed predictive model [5], which learned from data of a core facility (a hyper compressor) at a domestic chemical plant producing polyethylene. The ensemble prediction model, a black-box model, was converted to a white-box model using explainable AI. The proposed methodology explains the direction of control for the major features in the failure prediction results (see the sketch below). It thereby allows the timing of machine maintenance and the supply of parts to be scheduled flexibly, and improves the efficiency of facility operation through appropriate pre-control.
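
The abstract describes the approach but not the implementation; below is a minimal sketch of the general pattern, assuming a scikit-learn gradient-boosting ensemble and the shap package, with synthetic stand-ins for the compressor's sensor features (all feature names are hypothetical):

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in for compressor sensor logs: each row is a snapshot of
# process features; the label marks whether a failure followed (toy data).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))  # hypothetical: pressure, temperature, vibration, flow
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 1).astype(int)

model = GradientBoostingClassifier().fit(X, y)  # the "black-box" ensemble

# SHAP decomposes each prediction into per-feature contributions.
shap_values = shap.TreeExplainer(model).shap_values(X)

# The sign of a contribution suggests a direction of control: a positive
# value pushes this prediction toward "failure", so lowering that feature
# (where physically possible) is a candidate pre-control action.
feature_names = ["pressure", "temperature", "vibration", "flow"]
for name, contrib in zip(feature_names, shap_values[0]):
    action = "reduce" if contrib > 0 else "keep or raise"
    print(f"{name}: SHAP = {contrib:+.3f} -> {action}")
```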

Text Based Explainable AI for Monitoring National Innovations (텍스트 기반 Explainable AI를 적용한 국가연구개발혁신 모니터링)

  • Jung Sun Lim;Seoung Hun Bae
    • Journal of Korean Society of Industrial and Systems Engineering / v.45 no.4 / pp.1-7 / 2022
  • Explainable AI (XAI) is an approach that leverages artificial intelligence to support human decision-making. Recently, governments of several countries, including Korea, have attempted objective, evidence-based analyses of the returns on R&D investments by analyzing quantitative data. Over the past decade, governments have invested in related research, allowing officials to gain insights that help them evaluate past performance and discuss future policy directions. However, the text information accumulated in national databases remains largely underutilized relative to its volume. The current study applies a text mining strategy for monitoring innovations, with a case study of smart farms in the Honam region.

Explainable Software Employment Model Development of University Graduates using Boosting Machine Learning and SHAP (부스팅 기계 학습과 SHAP를 이용한 설명 가능한 소프트웨어 분야 대졸자 취업 모델 개발)

  • Kwon Joonhee;Kim Sungrim
    • Journal of Korea Society of Digital Industry and Information Management / v.19 no.3 / pp.177-192 / 2023
  • The employment rate of university graduates has been decreasing significantly in recent years, while the advent of the Fourth Industrial Revolution has increased demand for software jobs, making it necessary to analyze the factors behind software employment of university graduates. This paper proposes an explainable software-employment model for university graduates using machine learning and explainable AI, based on the Graduates Occupational Mobility Survey (GOMS) provided by the Korea Employment Information Service. The employment model uses boosting machine learning, and performance is evaluated across four boosting algorithms. The factors affecting employment are then explained using SHAP (see the sketch below). The results indicate that the top three factors are major, the semester in which the employment goal was set, and vocational education and training.
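
The abstract does not name the four boosting algorithms; the four below (GBM, AdaBoost, XGBoost, LightGBM) are assumptions, as are the GOMS-style column names and toy data. A minimal sketch of the evaluation-then-SHAP workflow:

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier

# Hypothetical stand-ins for GOMS survey fields (toy data).
rng = np.random.default_rng(42)
df = pd.DataFrame({
    "major_is_sw": rng.integers(0, 2, 2000),
    "goal_setting_semester": rng.integers(1, 9, 2000),
    "vocational_training": rng.integers(0, 2, 2000),
    "gpa": rng.normal(3.2, 0.4, 2000),
})
y = ((df["major_is_sw"] + df["vocational_training"]
      + rng.normal(size=2000)) > 1).astype(int)  # toy employment label

X_tr, X_te, y_tr, y_te = train_test_split(df, y, random_state=0)

# Four boosting learners, mirroring the paper's four-way evaluation.
models = {
    "GBM": GradientBoostingClassifier(),
    "AdaBoost": AdaBoostClassifier(),
    "XGBoost": XGBClassifier(),
    "LightGBM": LGBMClassifier(),
}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, m.predict(X_te)))

# Rank factors by mean absolute SHAP value for one of the tree models.
sv = shap.TreeExplainer(models["XGBoost"]).shap_values(X_te)
ranking = pd.Series(np.abs(sv).mean(axis=0), index=df.columns)
print(ranking.sort_values(ascending=False).head(3))  # top-3 factors
```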

A Study on Human-AI Collaboration Process to Support Evidence-Based National Innovation Monitoring: Case Study on Ministry of Oceans and Fisheries (Human-AI 협력 프로세스 기반의 증거기반 국가혁신 모니터링 연구: 해양수산부 사례)

  • Jung Sun Lim;Seoung Hun Bae;Kil-Ho Ryu;Sang-Gook Kim
    • Journal of Korean Society of Industrial and Systems Engineering / v.46 no.2 / pp.22-31 / 2023
  • Governments around the world are enacting laws mandating explainable traceability when using AI (artificial intelligence) to solve real-world problems. Human-centric artificial intelligence (HAI) is an approach that guides human decision-making through Human-AI collaboration. This research presents a case study that implements Human-AI collaboration to achieve explainable traceability in governmental data analysis. The collaboration explored in this study performs AI inference to generate labels, followed by AI interpretation to make the results more explainable and traceable. The study used an example dataset from the Ministry of Oceans and Fisheries to reproduce the Human-AI collaboration process used in actual policy-making, in which the Ministry of Science and ICT used R&D PIE (R&D Platform for Investment and Evaluation) to build a government investment portfolio.

Transforming Patient Health Management: Insights from Explainable AI and Network Science Integration

  • Mi-Hwa Song
    • International Journal of Internet, Broadcasting and Communication / v.16 no.1 / pp.307-313 / 2024
  • This study explores the integration of explainable artificial intelligence (XAI) and network science in healthcare, focusing on enhancing healthcare data interpretation and improving diagnostic and treatment methods. Key methodologies such as graph neural networks, community detection, overlapping network models, and time-series network analysis are examined in depth for their potential in patient health management (one is sketched below). The research highlights the transformative role of XAI in making complex AI models transparent and interpretable, which is essential for accurate, data-driven decision-making in healthcare. Case studies demonstrate the practical application of these methodologies in predicting diseases, understanding drug interactions, and tracking patient health over time. The study concludes that these advancements hold immense promise for healthcare despite existing challenges, and underscores the need for ongoing research to fully realize the potential of AI in this field.
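
Among the methodologies surveyed, community detection is the simplest to illustrate. A toy sketch on a hypothetical patient-similarity network using networkx follows; the survey itself prescribes no specific library or dataset:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy patient-similarity network: nodes are patients, edges connect
# patients with overlapping diagnoses (illustrative data only).
G = nx.Graph()
G.add_edges_from([
    ("P1", "P2"), ("P2", "P3"), ("P1", "P3"),  # one dense cluster
    ("P4", "P5"), ("P5", "P6"), ("P4", "P6"),  # another dense cluster
    ("P3", "P4"),                              # weak bridge between them
])

# Greedy modularity maximization, a standard community-detection method:
# the two triangles should emerge as separate patient communities.
for i, community in enumerate(greedy_modularity_communities(G)):
    print(f"community {i}: {sorted(community)}")
```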

Speed Prediction and Analysis of Nearby Road Causality Using Explainable Deep Graph Neural Network (설명 가능 그래프 심층 인공신경망 기반 속도 예측 및 인근 도로 영향력 분석 기법)

  • Kim, Yoo Jin;Yoon, Young
    • Journal of the Korea Convergence Society / v.13 no.1 / pp.51-62 / 2022
  • AI-based speed prediction has been studied quite actively. However, although the importance of explainable AI is growing, little work has interpreted and reasoned about AI-based speed predictions. Therefore, this paper devises an 'explainable deep graph neural network (GNN)' that analyzes speed predictions and assesses the influence of nearby roads, identifying the critical contributors to a given road situation. The model's output is explained by comparing the difference in output before and after masking the input values of the GNN model (see the sketch below). Using TOPIS traffic speed data, we applied our GNN models to major congested roads in Seoul. We verified the approach through a traffic flow simulation: adjusting the speed of the most influential nearby roads and observing the corresponding relief of congestion on the road of interest. The approach is meaningful in that it can be applied to transportation networks, where traffic flow can be improved by controlling specific nearby roads based on the inference results.
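
The before/after-masking comparison the paper describes is model-agnostic and can be sketched without the GNN itself; the toy predictor below stands in for a trained model, and all numbers are illustrative:

```python
import numpy as np

def influence_by_masking(predict, speeds, target_road, mask_value=0.0):
    """Rank nearby roads by how much masking each one's input speed
    changes the prediction for the target road (the before/after-masking
    comparison described in the abstract, written model-agnostically)."""
    base = predict(speeds)[target_road]
    scores = {}
    for road in range(len(speeds)):
        if road == target_road:
            continue
        masked = speeds.copy()
        masked[road] = mask_value            # hide this road's speed input
        scores[road] = abs(base - predict(masked)[target_road])
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Toy stand-in for a trained GNN over 4 roads: road 3's predicted speed
# depends mostly on roads 0 and 2, not at all on road 1.
def toy_predict(x):
    out = x.copy()
    out[3] = 0.5 * x[0] + 0.3 * x[2] + 0.2 * x[3]
    return out

snapshot = np.array([20.0, 60.0, 25.0, 40.0])   # km/h per road
for road, score in influence_by_masking(toy_predict, snapshot, target_road=3):
    print(f"road {road}: influence {score:.1f}")  # expect road 0 > road 2 > road 1
```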

Development of a real-time prediction model for intraoperative hypotension using Explainable AI and Transformer (Explainable AI와 Transformer를 이용한 수술 중 저혈압 실시간 예측 모델 개발)

  • EunSeo Jung;Sang-Hyun Kim;Jiyoung Woo
    • Proceedings of the Korean Society of Computer Information Conference / 2024.01a / pp.35-36 / 2024
  • The occurrence of hypotension during surgery under general anesthesia causes various complications, so predicting it in advance and responding is extremely important. In this study, we therefore perform variable selection with SHAP and predict the occurrence of hypotension with a Transformer model to support clinical decision-making. Unlike previous studies, the approach is based on data collected in the operating room and thus has high generality. It achieved an RMSE of 9.46 and a MAPE of 4.4% for non-invasive blood pressure prediction, and an excellent F1-score of 0.75 (hypotension class) for predicting whether hypotension occurs (a minimal classifier sketch follows).
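
A minimal sketch of the classifier half of such a pipeline, using PyTorch. The feature count, window length, and layer sizes are assumptions (the abstract gives no architecture details), and the SHAP-based variable selection is presumed to have happened upstream:

```python
import torch
import torch.nn as nn

class HypotensionTransformer(nn.Module):
    """Sketch of a Transformer classifier over a window of operating-room
    vitals. Feature count, window length, and layer sizes are assumptions;
    SHAP-based variable selection is presumed to have happened upstream."""
    def __init__(self, n_features=8, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.proj = nn.Linear(n_features, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, 1)

    def forward(self, x):  # x: (batch, time steps, selected vital signs)
        h = self.encoder(self.proj(x))
        return torch.sigmoid(self.head(h.mean(dim=1))).squeeze(-1)

model = HypotensionTransformer()
vitals = torch.randn(2, 60, 8)  # 2 patients x 60 time steps x 8 signals
print(model(vitals))            # per-patient hypotension probabilities
```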

The Enhancement of intrusion detection reliability using Explainable Artificial Intelligence(XAI) (설명 가능한 인공지능(XAI)을 활용한 침입탐지 신뢰성 강화 방안)

  • Jung Il Ok;Choi Woo Bin;Kim Su Chul
    • Convergence Security Journal / v.22 no.3 / pp.101-110 / 2022
  • As the use of artificial intelligence in various fields increases, attempts to solve problems in the intrusion detection field through artificial intelligence are also increasing. However, the black-box nature of machine learning, which cannot explain or trace the reasons for predicted results, presents difficulties for the security professionals who must use it. To solve this problem, research on explainable AI (XAI), which helps interpret and understand decisions made by machine learning, is growing across many fields. In this paper, we therefore propose an explainable AI method to enhance the reliability of machine learning-based intrusion detection predictions. The intrusion detection model is implemented with XGBoost, and its explanations are implemented using SHAP. The method supports security experts' decisions by comparing the conventional feature importances with the SHAP results (see the sketch below). The experiment used the PKDD2007 dataset; the association between conventional feature importance and SHAP values was analyzed, verifying that SHAP-based explainable AI is valid for giving security experts confidence in the predictions of intrusion detection models.
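
A minimal sketch of the paper's comparison between XGBoost's built-in importance and SHAP values, on synthetic data standing in for the PKDD2007 features (the real HTTP-traffic preprocessing is out of scope here):

```python
import numpy as np
import pandas as pd
import shap
from sklearn.datasets import make_classification
from xgboost import XGBClassifier

# Synthetic stand-in for PKDD2007 request features (real preprocessing
# of the HTTP traffic is beyond this sketch).
X, y = make_classification(n_samples=2000, n_features=10, n_informative=4,
                           random_state=0)
X = pd.DataFrame(X, columns=[f"f{i}" for i in range(10)])

model = XGBClassifier(n_estimators=200, random_state=0).fit(X, y)

# The paper's comparison: gain-based importance from the model itself
# vs. mean absolute SHAP value per feature. Agreement between the two
# gives analysts more confidence in the model's reasoning.
gain = pd.Series(model.get_booster().get_score(importance_type="gain"))
sv = shap.TreeExplainer(model).shap_values(X)
mean_abs_shap = pd.Series(np.abs(sv).mean(axis=0), index=X.columns)

side_by_side = pd.DataFrame({"gain": gain, "mean_abs_shap": mean_abs_shap}).fillna(0)
print(side_by_side.sort_values("mean_abs_shap", ascending=False))
```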

Explainable Credit Default Prediction Using SHAP (SHAP을 이용한 설명 가능한 신용카드 연체 예측)

  • Minjoong Kim;Seungwoo Kim;Jihoon Moon
    • Proceedings of the Korean Society of Computer Information Conference / 2024.01a / pp.39-40 / 2024
  • This study proposes a method for strengthening the interpretability of machine learning models that predict the likelihood of credit card default, using SHAP (SHapley Additive exPlanations). By analyzing a large credit card dataset, it aims to clarify how customer attributes such as age, gender, marital status, and payment history affect default (a sketch of a per-customer explanation follows). Building on this work, financial institutions can perform more accurate risk management and lay the groundwork for offering customers tailored services.
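
A minimal sketch of a local, per-customer SHAP explanation in the spirit of the study; the columns mirror attributes named in the abstract, but the toy data and the model choice (LightGBM) are assumptions:

```python
import pandas as pd
import shap
from lightgbm import LGBMClassifier

# Hypothetical columns mirroring the attributes named in the abstract.
df = pd.DataFrame({
    "age": [25, 40, 33, 51] * 250,
    "married": [0, 1, 1, 0] * 250,
    "late_payments_6m": [3, 0, 1, 4] * 250,
    "credit_utilization": [0.9, 0.2, 0.5, 0.95] * 250,
})
y = (df["late_payments_6m"] >= 2).astype(int)  # toy default label

model = LGBMClassifier().fit(df, y)
sv = shap.TreeExplainer(model).shap_values(df.iloc[[0]])

# Older shap versions return a list (one array per class) for LightGBM
# binary models; newer versions return a single array.
contribs = sv[1][0] if isinstance(sv, list) else sv[0]

# Local explanation for one customer: which attributes push the default
# risk up (+) or down (-), in log-odds units.
for col, v in zip(df.columns, contribs):
    print(f"{col}: {v:+.3f}")
```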
