• Title/Summary/Keyword: Explainable

152 search results

An Explainable Deep Learning Algorithm based on Video Classification (비디오 분류에 기반 해석가능한 딥러닝 알고리즘)

  • Jin Zewei; Inwhee Joe
    • Proceedings of the Korea Information Processing Society Conference / 2023.11a / pp.449-452 / 2023
  • The rapid development of the Internet has led to a significant increase in multimedia content on social networks, so analyzing and improving video classification models has become an important task. Deep learning models have typical "black box" characteristics and therefore require explainable analysis. This article uses two classification models, ConvLSTM and VGG16+LSTM, and combines them with the explainable method LRP (Layer-wise Relevance Propagation) to generate visualized explanations. The experimental accuracy of the classification models is 75.94% for ConvLSTM and 92.50% for VGG16+LSTM. We conducted explainable analysis of the VGG16+LSTM model with the LRP method and found that the classifier tends to rely on frames from the latter half of the video, and especially the last frame, as the basis for classification.
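
As an illustration of the VGG16+LSTM architecture named in the abstract, the following is a minimal Keras sketch of such a video classifier. The frame count, input size, and number of classes are assumed placeholders; this is not the authors' implementation.

```python
# Minimal sketch of a VGG16+LSTM video classifier (assumed hyperparameters).
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_FRAMES, HEIGHT, WIDTH, NUM_CLASSES = 16, 224, 224, 5  # placeholder values

# Frozen VGG16 backbone extracts a 512-d feature vector per frame.
backbone = VGG16(weights="imagenet", include_top=False, pooling="avg",
                 input_shape=(HEIGHT, WIDTH, 3))
backbone.trainable = False

model = models.Sequential([
    layers.Input(shape=(NUM_FRAMES, HEIGHT, WIDTH, 3)),
    layers.TimeDistributed(backbone),         # per-frame features: (frames, 512)
    layers.LSTM(256),                         # temporal aggregation across frames
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```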

Text Based Explainable AI for Monitoring National Innovations (텍스트 기반 Explainable AI를 적용한 국가연구개발혁신 모니터링)

  • Jung Sun Lim; Seoung Hun Bae
    • Journal of Korean Society of Industrial and Systems Engineering / v.45 no.4 / pp.1-7 / 2022
  • Explainable AI (XAI) is an approach that leverages artificial intelligence to support human decision-making. Recently, the governments of several countries, including Korea, have been attempting objective, evidence-based analyses of R&D investments and their returns by analyzing quantitative data. Over the past decade, governments have invested in related research, allowing officials to gain insights that help them evaluate past performance and discuss future policy directions. However, the text information accumulated in national databases is still used at a low level compared with its volume. The current study applies a text mining strategy for monitoring innovations, together with a case study of smart farms in the Honam region.
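
As a rough illustration of the kind of text mining strategy mentioned, the sketch below extracts top-weighted TF-IDF terms from a few hypothetical project abstracts with scikit-learn. The documents and parameters are placeholders, not the national R&D database or the study's pipeline.

```python
# Minimal text-mining sketch: TF-IDF keyword extraction over project abstracts.
# The documents below are hypothetical placeholders, not data from a national R&D DB.
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "smart farm sensor network for greenhouse climate control",
    "IoT-based irrigation scheduling for smart farms in Honam",
    "deep learning yield prediction using greenhouse sensor data",
]

vectorizer = TfidfVectorizer(stop_words="english", ngram_range=(1, 2), max_features=50)
tfidf = vectorizer.fit_transform(docs)
terms = vectorizer.get_feature_names_out()

# Report the top-weighted terms per document as simple "innovation keywords".
for i, row in enumerate(tfidf.toarray()):
    top = sorted(zip(terms, row), key=lambda t: t[1], reverse=True)[:3]
    print(f"doc {i}:", [term for term, score in top if score > 0])
```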

Explainable Software Employment Model Development of University Graduates using Boosting Machine Learning and SHAP (부스팅 기계 학습과 SHAP를 이용한 설명 가능한 소프트웨어 분야 대졸자 취업 모델 개발)

  • Kwon Joonhee; Kim Sungrim
    • Journal of Korea Society of Digital Industry and Information Management / v.19 no.3 / pp.177-192 / 2023
  • The employment rate of university graduates has decreased significantly in recent years, while the advent of the Fourth Industrial Revolution has increased the demand for software jobs, so it is necessary to analyze the factors behind software employment among university graduates. This paper proposes an explainable software-employment model for university graduates using machine learning and explainable AI, built on the Graduates Occupational Mobility Survey (GOMS) provided by the Korea Employment Information Service. The employment model uses boosting machine learning, and performance is evaluated across four boosting algorithms. The factors affecting employment are then explained using SHAP. The results indicate that the top three factors are the graduate's major, the semester in which the employment goal was set, and vocational education and training.
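
A minimal sketch of the boosting-plus-SHAP workflow described above, shown with XGBoost (one of several boosting algorithms) and synthetic placeholder features rather than the GOMS variables.

```python
# Minimal boosting + SHAP sketch on synthetic tabular data (placeholder for GOMS features).
import numpy as np
import pandas as pd
import xgboost as xgb
import shap

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "major_is_software": rng.integers(0, 2, 500),            # hypothetical features
    "employment_goal_semester": rng.integers(1, 9, 500),
    "vocational_training_hours": rng.integers(0, 200, 500),
})
y = (0.8 * X["major_is_software"] + 0.01 * X["vocational_training_hours"]
     + rng.normal(0, 0.5, 500) > 0.7).astype(int)            # synthetic employment label

model = xgb.XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)
model.fit(X, y)

# SHAP quantifies each feature's contribution to the employment prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
mean_abs = np.abs(shap_values).mean(axis=0)
print(dict(zip(X.columns, mean_abs.round(3))))               # global importance ranking
```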

Explainable AI Application for Machine Predictive Maintenance (설명 가능한 AI를 적용한 기계 예지 정비 방법)

  • Cheon, Kang Min; Yang, Jaekyung
    • Journal of Korean Society of Industrial and Systems Engineering / v.44 no.4 / pp.227-233 / 2021
  • Predictive maintenance has been an important application of data science, creating a predictive model from the numerous data collected about the equipment being managed. It does not predict equipment failure from just one or two signs, but quantifies and models many symptoms together with historical records of actual failures. Statistical methods were widely used for predictive maintenance in the past, but recently many machine learning-based methods have been proposed, and they are preferable in that they show more accurate prediction performance. However, with the exception of some models such as decision tree-based models, it is very difficult to know the structure of a learning model explicitly (a black-box model) or to explain to what extent certain attributes (features or variables) affected the prediction results. Explainable artificial intelligence (AI) has recently been proposed to overcome this problem: a methodology that makes it easier for users to understand and trust the results of machine learning models. In this paper, we propose an explainable AI method to further enhance the explanatory power of a previously proposed predictive model [5], which learned data from a core facility (a hyper compressor) of a domestic chemical plant that produces polyethylene. The ensemble prediction model, a black-box model, was converted into a white-box model using explainable AI. The proposed methodology explains the direction in which the major features should be controlled given the failure prediction results. This makes it possible to flexibly adjust the timing of machine maintenance and the supply of parts, and to improve the efficiency of facility operation through appropriate pre-control.
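
One common way to turn a black-box ensemble into a white-box explanation, in the spirit described above, is a global surrogate: fit an interpretable tree to the ensemble's own predictions. The sketch below uses synthetic sensor-like features as placeholders for the hyper-compressor data and is not the paper's method.

```python
# Global-surrogate sketch: approximate a black-box ensemble with an interpretable tree.
# Features are synthetic placeholders, not the hyper-compressor sensor data from the paper.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 4))                     # e.g. pressure, temperature, vibration, flow
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 0.3, 1000) > 0.8).astype(int)  # synthetic "failure"

black_box = GradientBoostingClassifier().fit(X, y)

# Train a shallow tree to mimic the black box's predictions (the white-box surrogate).
surrogate = DecisionTreeClassifier(max_depth=3)
surrogate.fit(X, black_box.predict(X))

fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity to black box: {fidelity:.2%}")
print(export_text(surrogate, feature_names=["pressure", "temperature", "vibration", "flow"]))
```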

Explainable Fact Checking Model Based on Efficient Transformer (효율적인 트랜스포머에 기반한 설명 가능한 팩트체크 모델)

  • Yun, Heeseung; Jung, Jason J.; Lee, Gunju; Jung, Dahee; Kim, Kono
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.05a / pp.19-21 / 2021
  • In this paper, we introduce an explainable fact-checking model based on an attention mechanism that shows both the result of a fact check of a news item and the evidence for the verdict. Recently, large volumes of news have surged across the media, so fact checking is attracting much attention. At present, however, fact checking relies on searches made by journalists and members of fact-checking organizations, which has motivated research on automated fact checking. We therefore propose an explainable automated fact-checking model.
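
A minimal sketch of surfacing attention weights as evidence for a verdict, in the spirit of the attention-based model described above. The checkpoint is a publicly available sentiment classifier standing in for a fact-checking model, and the claim text is a placeholder.

```python
# Minimal sketch: surface attention weights from a transformer classifier as "evidence".
# The checkpoint and claim below are stand-ins; this is not the paper's model.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "distilbert-base-uncased-finetuned-sst-2-english"    # stand-in classifier checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, output_attentions=True)

claim = "The city reported a record-low unemployment rate last month."
inputs = tokenizer(claim, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

label = model.config.id2label[outputs.logits.argmax(-1).item()]

# Average attention paid by the [CLS] position to each token in the last layer.
last_layer = outputs.attentions[-1]             # (batch, heads, seq, seq)
cls_attention = last_layer[0, :, 0, :].mean(0)  # mean over heads, row for [CLS]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
top = sorted(zip(tokens, cls_attention.tolist()), key=lambda t: t[1], reverse=True)[:5]
print("verdict:", label)
print("highest-attention tokens (evidence candidates):", top)
```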


Transforming Patient Health Management: Insights from Explainable AI and Network Science Integration

  • Mi-Hwa Song
    • International Journal of Internet, Broadcasting and Communication / v.16 no.1 / pp.307-313 / 2024
  • This study explores the integration of Explainable Artificial Intelligence (XAI) and network science in healthcare, focusing on enhancing healthcare data interpretation and improving diagnostic and treatment methods. Key methodologies such as Graph Neural Networks, Community Detection, Overlapping Network Models, and Time-Series Network Analysis are examined in depth for their potential in patient health management. The research highlights the transformative role of XAI in making complex AI models transparent and interpretable, which is essential for accurate, data-driven decision-making in healthcare. Case studies demonstrate the practical application of these methodologies in predicting diseases, understanding drug interactions, and tracking patient health over time. The study concludes that, despite existing challenges, these advancements hold immense promise for healthcare, and underscores the need for ongoing research to fully realize the potential of AI in this field.

Classification of Breast Cancer using Explainable A.I. and Deep learning (딥러닝과 설명 가능한 인공지능을 이용한 유방암 판별)

  • Ha, Soo-Hee; Yoo, Jae-Chern
    • Proceedings of the Korean Society of Computer Information Conference / 2022.07a / pp.99-100 / 2022
  • In this paper, we propose an artificial intelligence that classifies breast cancer using a multi-modal architecture trained on breast ultrasound images. The trained model not only classifies breast cancer but also indicates the location of the tumor by combining an explainable AI technique with ROI information. Because it presents visual evidence for its decision, the reliability of the AI's judgment is increased.
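
The abstract does not name the explainable AI technique; a Grad-CAM-style heatmap is one common choice for localizing the image regions behind a CNN's decision. The helper below is a generic Keras sketch with the model and layer name as placeholders; overlaying the normalized heatmap (restricted to the ROI) on the ultrasound image would visualize the tumor location as described.

```python
# Generic Grad-CAM sketch for a Keras CNN classifier (assumed XAI technique, not named by the paper).
import tensorflow as tf

def grad_cam(model, image, last_conv_layer_name, class_index=None):
    """Return a [0, 1] heatmap highlighting regions that drove the model's prediction."""
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(last_conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[None, ...])      # add batch dimension
        if class_index is None:
            class_index = int(tf.argmax(preds[0]))
        class_score = preds[:, class_index]
    grads = tape.gradient(class_score, conv_out)            # d(score)/d(feature map)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))         # global-average-pool the gradients
    cam = tf.reduce_sum(conv_out[0] * weights, axis=-1)     # weighted sum of feature maps
    cam = tf.nn.relu(cam) / (tf.reduce_max(cam) + 1e-8)     # keep positive evidence, normalize
    return cam.numpy()

# Hypothetical usage: heatmap = grad_cam(cnn_model, ultrasound_image, "last_conv_layer")
```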


Speed Prediction and Analysis of Nearby Road Causality Using Explainable Deep Graph Neural Network (설명 가능 그래프 심층 인공신경망 기반 속도 예측 및 인근 도로 영향력 분석 기법)

  • Kim, Yoo Jin; Yoon, Young
    • Journal of the Korea Convergence Society / v.13 no.1 / pp.51-62 / 2022
  • AI-based speed prediction has been studied quite actively. However, while the importance of explainable AI is growing, little work has been done on interpreting and reasoning about AI-based speed predictions. Therefore, in this paper an 'Explainable Deep Graph Neural Network (GNN)' is devised to analyze speed predictions and assess the influence of nearby roads, in order to identify the critical contributors to a given road situation. The model's output is explained by comparing the difference in output before and after masking the input values of the GNN model. Using TOPIS traffic speed data, we applied our GNN models to the major congested roads in Seoul. We verified our approach through a traffic flow simulation, adjusting the speed of the most influential nearby roads and observing the corresponding relief of congestion on the road of interest. This is meaningful in that our approach can be applied to the transportation network, and traffic flow can be improved by controlling specific nearby roads based on the inference results.
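
The masking-based explanation described above can be sketched generically: compare the model's output before and after masking each nearby road's input. The `speed_model` below is a stand-in black box, not the paper's GNN, and the road names and speeds are placeholders.

```python
# Masking-based influence sketch: perturb each nearby road's speed input and
# measure the change in the predicted speed of the target road.
import numpy as np

def speed_model(speeds: np.ndarray) -> float:
    """Stand-in predictor: target-road speed as a weighted mix of nearby-road speeds."""
    weights = np.array([0.5, 0.1, 0.3, 0.1])
    return float(weights @ speeds)

nearby_roads = ["road_A", "road_B", "road_C", "road_D"]    # hypothetical nearby links
speeds = np.array([42.0, 55.0, 30.0, 60.0])                # current speeds in km/h

baseline = speed_model(speeds)
influence = {}
for i, name in enumerate(nearby_roads):
    masked = speeds.copy()
    masked[i] = 0.0                                        # mask this road's input
    influence[name] = abs(baseline - speed_model(masked))  # output change attributable to it

ranked = sorted(influence.items(), key=lambda kv: kv[1], reverse=True)
print("baseline prediction:", baseline)
print("most influential nearby roads:", ranked)
```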

The Enhancement of intrusion detection reliability using Explainable Artificial Intelligence(XAI) (설명 가능한 인공지능(XAI)을 활용한 침입탐지 신뢰성 강화 방안)

  • Jung Il Ok; Choi Woo Bin; Kim Su Chul
    • Convergence Security Journal / v.22 no.3 / pp.101-110 / 2022
  • As the use of artificial intelligence increases in various fields, attempts to solve issues in intrusion detection with artificial intelligence are also increasing. However, the black-box nature of machine learning, which cannot explain or trace the reasons behind predicted results, poses difficulties for the security professionals who must use it. To solve this problem, research on explainable AI (XAI), which helps interpret and understand decisions made by machine learning, is increasing in various fields. Therefore, in this paper we propose an explainable AI approach to enhance the reliability of machine learning-based intrusion detection results. First, the intrusion detection model is implemented with XGBoost, and its explanations are produced with SHAP. Security experts can then make decisions with greater confidence by comparing and analyzing the conventional feature importances against the SHAP results. For the experiment, the PKDD2007 dataset was used, the association between conventional feature importance and SHAP values was analyzed, and it was verified that SHAP-based explainable AI is valid for giving security experts confidence in the prediction results of intrusion detection models.
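
A minimal sketch of the comparison described above, contrasting XGBoost's built-in feature importance with mean absolute SHAP values; synthetic placeholder features are used instead of the PKDD2007 data.

```python
# Minimal sketch: compare XGBoost's built-in feature importance with SHAP values.
# Synthetic placeholder features stand in for the PKDD2007 intrusion-detection data.
import numpy as np
import pandas as pd
import xgboost as xgb
import shap

rng = np.random.default_rng(7)
X = pd.DataFrame({
    "request_length": rng.integers(10, 2000, 1000),          # hypothetical request features
    "num_special_chars": rng.integers(0, 50, 1000),
    "param_count": rng.integers(0, 20, 1000),
})
y = ((X["num_special_chars"] > 25) | (X["request_length"] > 1500)).astype(int)  # synthetic attack label

model = xgb.XGBClassifier(n_estimators=100, max_depth=4).fit(X, y)

builtin = pd.Series(model.feature_importances_, index=X.columns, name="xgb_importance")
shap_mean = pd.Series(
    np.abs(shap.TreeExplainer(model).shap_values(X)).mean(axis=0),
    index=X.columns, name="mean_abs_shap",
)
# Side-by-side comparison of the two importance rankings.
print(pd.concat([builtin, shap_mean], axis=1).sort_values("mean_abs_shap", ascending=False))
```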

Damage Detection and Damage Quantification of Temporary works Equipment based on Explainable Artificial Intelligence (XAI)

  • Cheolhee Lee; Taehoe Koo; Namwook Park; Nakhoon Lim
    • Journal of Internet Computing and Services / v.25 no.2 / pp.11-19 / 2024
  • This paper studies a technology for detecting damage to temporary works equipment used at construction sites with explainable artificial intelligence (XAI). Temporary works equipment is mostly made of steel or aluminum and, owing to the nature of these materials, is reused many times. However, because the regulations restricting its reuse are not strict, the use of low-quality or degraded temporary works equipment sometimes causes accidents at construction sites. Safety rules such as related government laws, standards, and regulations for quality control of temporary works equipment have not yet been established, and inspection results often differ depending on the inspector's level of training. To overcome these limitations, a method based on AI and image-processing technology was developed, and explainable AI (XAI) was applied so that inspectors can make more accurate decisions from the damage-detection results of the image analysis produced by the developed AI model. In the experiments, temporary works equipment was photographed with a 4K camera, the model was trained with 610 labeled images, and accuracy was tested by analyzing recorded images of temporary works equipment. As a result, the damage-detection accuracy of the XAI model was 95.0% on the training dataset, 92.0% on the validation dataset, and 90.0% on the test dataset, demonstrating the reliability of the developed artificial intelligence. The experiments verified the usability of explainable artificial intelligence for detecting damage to temporary works equipment. However, to reach the level of commercial software, the XAI model needs to be trained with more real data, and its damage-detection ability must be maintained or improved when real data are applied.