• Title/Abstract/Keyword: Explainable artificial intelligence (XAI)

Search results: 38 (processing time 0.019 s)

Damage Detection and Damage Quantification of Temporary works Equipment based on Explainable Artificial Intelligence (XAI)

  • Cheolhee Lee;Taehoe Koo;Namwook Park;Nakhoon Lim
• 인터넷정보학회논문지 / Vol. 25, No. 2 / pp. 11-19 / 2024
• This paper presents a technology for detecting damage to temporary works equipment used at construction sites, based on explainable artificial intelligence (XAI). Temporary works equipment is mostly made of steel or aluminum and, owing to the durability of these materials, is reused many times. However, because regulations restricting reuse are not strict, low-quality or degraded equipment is sometimes used, causing accidents at construction sites. Safety rules such as government laws, standards, and regulations for quality control of temporary works equipment have not yet been established, and inspection results often differ according to the inspector's level of training. To overcome these limitations, a method combining AI and image processing technology was developed. XAI was applied so that inspectors can make more accurate decisions from the image-analysis damage-detection results produced by the AI model developed for analyzing temporary works equipment. In the experiments, temporary works equipment was photographed with a 4K camera, the model was trained on 610 labeled images, and accuracy was tested by analyzing recorded image data of the equipment. The damage-detection accuracy was 95.0% on the training dataset, 92.0% on the validation dataset, and 90.0% on the test dataset, demonstrating the reliability of the developed model. The experiments thus verified the usability of explainable artificial intelligence for detecting damage in temporary works equipment. However, to reach the level of commercial software, the model needs to be trained on more real-world data, and its detection performance must be maintained or improved when real data are applied.

소셜 네트워크 분석과 토픽 모델링을 활용한 설명 가능 인공지능 연구 동향 분석 (XAI Research Trends Using Social Network Analysis and Topic Modeling)

  • 문건두;김경재
• Journal of Information Technology Applications and Management / Vol. 30, No. 1 / pp. 53-70 / 2023
• Artificial intelligence has become part of modern society rather than a distant future. As artificial intelligence and machine learning have grown more advanced and complex, it has become difficult for people to grasp their structure and the basis of their decisions, because machine learning shows only results, not the underlying process. As AI developed and became more widespread, people wanted explanations that could justify trusting it. This study recognizes the necessity and importance of explainable artificial intelligence (XAI) and examines XAI research trends by applying social network analysis and topic modeling to IEEE publications from 2004, when the concept of explainable AI was first articulated, to 2022. Social network analysis reveals the overall pattern of nodes across a large number of documents, with the connections between keywords showing the structure of their relationships, while topic modeling identifies topics more objectively by extracting keywords from unstructured data. Both methods are well suited to trend analysis. The analysis found that the application of XAI, alongside machine learning and deep learning, is gradually expanding into diverse fields.
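The keyword-network side of the trend analysis described above can be sketched with a minimal stdlib example: build an undirected co-occurrence network from per-paper keyword lists and rank nodes by degree. The tiny keyword lists here are hypothetical, for illustration only; real studies use bibliographic records and dedicated network tools.

```python
from collections import Counter
from itertools import combinations

# Toy corpus: each "document" is the keyword list of one paper (hypothetical).
docs = [
    ["xai", "shap", "deep learning"],
    ["xai", "lime", "interpretability"],
    ["xai", "shap", "interpretability"],
    ["deep learning", "medical imaging", "xai"],
]

# Undirected co-occurrence network: two keywords are linked whenever
# they appear together in the same document.
edges = Counter()
for doc in docs:
    for a, b in combinations(sorted(set(doc)), 2):
        edges[(a, b)] += 1

# Degree (number of distinct neighbors) highlights hub keywords --
# the "overall pattern of nodes" the abstract refers to.
degree = Counter()
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

print(degree.most_common(3))  # "xai" is the hub of this toy network
```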

설명 가능한 인공지능(XAI)을 활용한 침입탐지 신뢰성 강화 방안 (The Enhancement of intrusion detection reliability using Explainable Artificial Intelligence(XAI))

  • 정일옥;최우빈;김수철
• 융합보안논문지 / Vol. 22, No. 3 / pp. 101-110 / 2022
• As the use of artificial intelligence increases across many fields, attempts to solve various issues in intrusion detection with AI are also growing. However, most approaches are black-box based, unable to explain or trace the reasons behind predictions made by machine learning, which creates difficulties for the security experts who must use them. To address this problem, research on explainable AI (XAI), which helps interpret and understand machine-learning decisions, is increasing in many fields. Accordingly, this paper proposes an explainable AI approach to strengthen the reliability of machine-learning-based intrusion detection predictions. First, an intrusion detection model is implemented with XGBoost, and an explanation of the model is produced with SHAP. Then, the conventional feature importance and the SHAP-based results are compared and analyzed to give security experts confidence in their decisions. The PKDD2007 dataset was used for the experiments, and the relationship between conventional feature importance and SHAP values was analyzed, verifying that SHAP-based explainable AI is valid for giving security experts trust in the predictions of intrusion detection models.
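The paper pairs an XGBoost detector with SHAP. As a library-free illustration of what SHAP estimates, the sketch below computes exact Shapley values for a hypothetical linear "alert score" over three request features (all names and coefficients invented here); SHAP's TreeExplainer approximates this same quantity efficiently for tree ensembles.

```python
from itertools import permutations

FEATURES = ["url_len", "n_quotes", "n_keywords"]
BASELINE = {"url_len": 20, "n_quotes": 0, "n_keywords": 0}  # benign reference input

def score(x):
    # Toy detector: long URLs, quote characters, and SQL keywords raise the score.
    return 0.01 * x["url_len"] + 0.5 * x["n_quotes"] + 0.3 * x["n_keywords"]

def shapley(x):
    """Average marginal contribution of each feature over all feature orderings."""
    phi = {f: 0.0 for f in FEATURES}
    orders = list(permutations(FEATURES))
    for order in orders:
        current = dict(BASELINE)
        prev = score(current)
        for f in order:
            current[f] = x[f]  # switch feature f from baseline to actual value
            now = score(current)
            phi[f] += now - prev
            prev = now
    return {f: v / len(orders) for f, v in phi.items()}

suspicious = {"url_len": 120, "n_quotes": 2, "n_keywords": 3}
phi = shapley(suspicious)
# Efficiency property: attributions sum to score(x) - score(baseline).
print(phi)
```

For a linear model the attributions reduce to each coefficient times the feature's deviation from baseline; for trees they do not, which is why per-prediction SHAP explanations can disagree with a single global feature-importance ranking, as the paper's comparison examines.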

Review of medical imaging systems, medical imaging data problems, and XAI in the medical imaging field

  • Sun-Kuk Noh
• 인터넷정보학회논문지 / Vol. 25, No. 5 / pp. 53-65 / 2024
• Currently, artificial intelligence (AI) is being applied in the medical field to collect and analyze data such as personal genetic information, medical information, and lifestyle information. In medical imaging in particular, AI is used to analyze patients' medical image data and diagnose diseases. Deep learning (DL) with deep neural networks such as CNNs and GANs has been introduced into medical image analysis and medical data augmentation to facilitate lesion detection, quantification, and classification. In this paper, we examine AI as used in the medical imaging field and review the related medical image acquisition devices, the medical information systems that transmit medical image data, problems with medical image data, and the current status of explainable artificial intelligence (XAI), which has recently been actively applied. The continued development of AI and information and communication technology (ICT) is expected to make medical image data easier to analyze, enabling disease diagnosis, prognosis prediction, and improvement of patients' quality of life. AI medicine is expected to evolve from the existing treatment-centered medical system toward personalized healthcare through preemptive diagnosis and prevention.

의료영상 분야를 위한 설명가능한 인공지능 기술 리뷰 (A review of Explainable AI Techniques in Medical Imaging)

  • 이동언;박춘수;강정운;김민우
• 대한의용생체공학회:의공학회지 / Vol. 43, No. 4 / pp. 259-270 / 2022
• Artificial intelligence (AI) has been studied in various fields of medical imaging. Currently, top-notch deep learning (DL) techniques have led to high diagnostic accuracy and fast computation. However, they are rarely used in real clinical practice because of a lack of reliability concerning their results. Most DL models achieve high performance by extracting features from large volumes of data, but increasing model complexity and nonlinearity turn such models into black boxes that are seldom accessible, interpretable, and transparent. As a result, scientific interest in the field of explainable artificial intelligence (XAI) is gradually emerging. This study aims to review diverse XAI approaches currently exploited in medical imaging. We identify the concepts behind the methods, introduce studies applying them to imaging modalities such as computed tomography (CT), magnetic resonance imaging (MRI), and endoscopy, and lastly discuss limitations and challenges faced by XAI for future studies.

A reliable intelligent diagnostic assistant for nuclear power plants using explainable artificial intelligence of GRU-AE, LightGBM and SHAP

  • Park, Ji Hun;Jo, Hye Seon;Lee, Sang Hyun;Oh, Sang Won;Na, Man Gyun
• Nuclear Engineering and Technology / Vol. 54, No. 4 / pp. 1271-1287 / 2022
• When abnormal operating conditions occur in nuclear power plants, operators must identify the cause and implement the necessary mitigation measures. The operator must therefore rapidly and accurately analyze the symptom requirements of more than 200 abnormal scenarios from the trends of many variables in order to perform diagnostic tasks and implement mitigation actions quickly. However, the characteristics of these diagnostic tasks increase the probability of human error. Research on AI-based diagnosis has been conducted recently to reduce the likelihood of human error; however, reliability issues arising from the black-box characteristics of AI have been pointed out. Hence, the application of eXplainable Artificial Intelligence (XAI), which can provide operators with evidence for the AI's diagnosis, is considered. In conclusion, XAI addressing the reliability problem of AI is incorporated into the AI-based diagnostic algorithm. A reliable intelligent diagnostic assistant based on the merged diagnostic algorithm is developed in the form of an operator support system, including an interface to inform operators efficiently.

설명가능한 인공지능을 통한 마르텐사이트 변태 온도 예측 모델 및 거동 분석 연구 (Study on predictive model and mechanism analysis for martensite transformation temperatures through explainable artificial intelligence)

  • 전준협;손승배;정재길;이석재
• 열처리공학회지 / Vol. 37, No. 3 / pp. 103-113 / 2024
• Martensite volume fraction significantly affects the mechanical properties of alloy steels. The martensite start temperature (Ms) and the transformation temperatures for 50 vol.% (M50) and 90 vol.% (M90) martensite are the key temperatures for controlling the martensite phase fraction. Several researchers have proposed empirical equations and machine learning models to predict the Ms temperature; such numerical approaches predict Ms easily, without additional experiments or cost. However, to control the martensite phase fraction more precisely, the prediction error of the Ms model must be reduced, and predictive models for the other martensite transformation temperatures (M50, M90) are needed. In the present study, machine learning was applied to build predictive models for the Ms, M50, and M90 temperatures. To explain the prediction mechanisms and quantify feature importance for the martensite transformation temperatures, explainable artificial intelligence (XAI) was employed. Among the machine learning models compared, random forest regression (RFR) showed the best performance for predicting the Ms, M50, and M90 temperatures. The feature importance was derived and the prediction mechanisms were discussed using XAI.
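For the empirical equations the abstract mentions as a baseline to the machine-learning models, a widely cited example is Andrews' (1965) linear relation for Ms. The sketch below encodes it directly; the coefficients come from the general literature, not from this paper, and the composition range of validity (low-alloy steels) should be checked before use.

```python
def ms_andrews(C=0.0, Mn=0.0, Ni=0.0, Cr=0.0, Mo=0.0):
    """Andrews' (1965) linear equation for the martensite start
    temperature Ms in degrees C; element contents in wt.%.
    Coefficients are from the literature, not from this paper."""
    return 539 - 423 * C - 30.4 * Mn - 17.7 * Ni - 12.1 * Cr - 7.5 * Mo

# Example: a plain 0.4 wt.% C steel with 0.7 wt.% Mn
print(round(ms_andrews(C=0.4, Mn=0.7), 1))
```

A machine-learning model such as RFR replaces this fixed linear form with a data-driven one, which is why XAI feature importance is then needed to recover the kind of per-element insight the linear coefficients provide for free.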

Transforming Patient Health Management: Insights from Explainable AI and Network Science Integration

  • Mi-Hwa Song
• International Journal of Internet, Broadcasting and Communication / Vol. 16, No. 1 / pp. 307-313 / 2024
• This study explores the integration of Explainable Artificial Intelligence (XAI) and network science in healthcare, focusing on enhancing healthcare data interpretation and improving diagnostic and treatment methods. Key methodologies such as Graph Neural Networks, Community Detection, Overlapping Network Models, and Time-Series Network Analysis are examined in depth for their potential in patient health management. The research highlights the transformative role of XAI in making complex AI models transparent and interpretable, which is essential for accurate, data-driven decision-making in healthcare. Case studies demonstrate the practical application of these methodologies in predicting diseases, understanding drug interactions, and tracking patient health over time. The study concludes that these advancements hold immense promise for healthcare despite existing challenges, and underscores the need for ongoing research to fully realize the potential of AI in this field.

화학 공정 설계 및 분석을 위한 설명 가능한 인공지능 대안 모델 (Explainable Artificial Intelligence (XAI) Surrogate Models for Chemical Process Design and Analysis)

  • 고유나;나종걸
• Korean Chemical Engineering Research / Vol. 61, No. 4 / pp. 542-549 / 2023
• Since interest in surrogate modeling grew, research on emulating nonlinear chemical processes with data-driven machine learning has continued. However, the black-box nature of machine-learning models limits their interpretability, which is an obstacle to industrial adoption. We therefore attempt chemical process analysis using explainable artificial intelligence (XAI), a concept that provides interpretability while preserving model accuracy. Whereas conventional sensitivity analysis of chemical processes stops at computing and ranking sensitivity indices of variables, we propose a methodology that uses XAI to perform global and local sensitivity analysis and, in addition, to analyze interactions among variables, extracting physical insight from the data. For the ammonia synthesis process chosen as the case study, the preheater temperature of the stream entering the first reactor and the split ratios of the cold shots fed to the three reactors were set as process variables. By linking Matlab and Aspen Plus, data on ammonia production and the maximum temperatures of the three reactors were obtained while varying the process variables, and tree-based models were trained on these data. Sensitivity analysis was then performed on the best-performing model using SHAP, one of the XAI techniques. The global sensitivity analysis showed that the preheater temperature had the greatest influence, and the local sensitivity analysis identified ranges of the process variables for improving productivity and preventing overheating. This methodology of building a surrogate model of a chemical process and performing sensitivity analysis with explainable AI will help provide quantitative and qualitative feedback for process optimization.
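The abstract distinguishes a global ranking of process variables from local analysis around an operating point. A minimal finite-difference sketch of the local view, on a hypothetical two-variable surrogate (not the paper's Aspen Plus ammonia model; the response surface and numbers are invented):

```python
def surrogate_yield(T, r):
    # Toy response surface: yield as a function of preheater temperature T (C)
    # and cold-shot split ratio r, with an interior optimum in both variables.
    return -((T - 430.0) / 50.0) ** 2 - ((r - 0.4) / 0.2) ** 2 + 10.0

def local_sensitivity(f, T, r, h=1e-4):
    """Central finite differences: d(yield)/dT and d(yield)/dr at (T, r)."""
    dT = (f(T + h, r) - f(T - h, r)) / (2 * h)
    dr = (f(T, r + h) - f(T, r - h)) / (2 * h)
    return dT, dr

# At an off-optimum operating point both gradients are positive, so raising
# either T or r would increase the predicted yield there -- the kind of
# local, directional feedback the abstract derives from SHAP instead.
dT, dr = local_sensitivity(surrogate_yield, 400.0, 0.3)
print(dT, dr)
```

SHAP generalizes this idea: rather than a derivative at one point, it attributes the deviation of each prediction from a baseline to each variable, which also exposes interactions between variables that plain finite differences miss.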

텍스트 기반 Explainable AI를 적용한 국가연구개발혁신 모니터링 (Text Based Explainable AI for Monitoring National Innovations)

  • 임정선;배성훈
• 산업경영시스템학회지 / Vol. 45, No. 4 / pp. 1-7 / 2022
• Explainable AI (XAI) is an approach that leverages artificial intelligence to support human decision-making. Recently, the governments of several countries, including Korea, have been attempting objective, evidence-based analyses of R&D investments and their returns using quantitative data. Over the past decade, governments have invested in related research, allowing officials to gain insights that help them evaluate past performance and discuss future policy directions. Compared with the volume of text accumulated in national databases, however, its utilization so far remains low. The current study applies a text-mining strategy to monitoring innovations, with a case study of smart farms in the Honam region.