• Title/Abstract/Keyword: explainable artificial intelligence

Search results: 59 items (processing time: 0.026 s)

설명가능한 인공지능기반의 인공지능 교육 프로그램 개발 (A Study to Design the Instructional Program based on Explainable Artificial intelligence)

  • 박다빈;신승기
    • 한국정보교육학회:학술대회논문집
    • /
    • 한국정보교육학회 2021년도 학술논문집
    • /
    • pp.149-157
    • /
    • 2021
  • With the introduction of artificial intelligence education in the 2022 revised national curriculum approaching, this is the point at which a variety of lessons using AI as learning material must be developed. In this study, an AI education program based on explainable AI was developed using design-based research. Explainable AI was chosen as the core topic because it evenly covers the three areas of AI fundamentals, applications, and ethics, and is easily connected to real-life cases. Although typical design-based research (DBR) involves three or more iterative cycles, this study was conducted based on the results of the first cycle of design, implementation, and evaluation. In follow-up work, the program will be applied in schools and refined through a third round of revision and supplementation into a more complete program on explainable AI. We hope this study contributes to the development of the AI education being introduced into schools.


Damage Detection and Damage Quantification of Temporary works Equipment based on Explainable Artificial Intelligence (XAI)

  • Cheolhee Lee;Taehoe Koo;Namwook Park;Nakhoon Lim
    • 인터넷정보학회논문지
    • /
    • Vol. 25, No. 2
    • /
    • pp.11-19
    • /
    • 2024
  • This paper studied a technology for detecting damage to temporary works equipment used at construction sites with explainable artificial intelligence (XAI). Temporary works equipment is mostly composed of steel or aluminum, and because of the characteristics of these materials it is reused several times. However, because regulations and restrictions on reuse are not strict, the use of low-quality or degraded temporary works equipment sometimes causes accidents at construction sites. Currently, safety rules such as government laws, standards, and regulations for quality control of temporary works equipment have not been established. In addition, inspection results often differ according to the inspector's level of training. To overcome these limitations, a method based on AI and image processing technology was developed. Explainable artificial intelligence (XAI) technology was applied so that inspectors can make more accurate decisions from the damage-detection results produced by image analysis with the developed AI model for temporary works equipment. In the experiments, temporary works equipment was photographed with a 4K camera, the artificial intelligence model was trained with 610 labeled data, and accuracy was tested by analyzing recorded images of temporary works equipment. As a result, the damage-detection accuracy of the XAI model was 95.0% on the training dataset, 92.0% on the validation dataset, and 90.0% on the test dataset, demonstrating the reliability of the developed artificial intelligence. The experiments verified the usability of explainable artificial intelligence for detecting damage in temporary works equipment. However, to reach the level of commercial software, the XAI model needs to be trained further on real data, and its damage-detection ability must be maintained or improved when real data are applied.
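
The abstract does not name the XAI technique behind the visual evidence, so the following is only an illustrative sketch: a Grad-CAM-style heatmap over an image classifier, a common choice for inspection tasks like this. The model, image file, and layer choice are placeholders, not the paper's.

```python
# A minimal Grad-CAM sketch (hypothetical; the paper does not name its XAI method).
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["value"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    gradients["value"] = grad_out[0].detach()

layer = model.layer4[-1]                      # last convolutional block
layer.register_forward_hook(fwd_hook)
layer.register_full_backward_hook(bwd_hook)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# "scaffold_part.jpg" is a placeholder for an equipment photograph.
img = preprocess(Image.open("scaffold_part.jpg").convert("RGB")).unsqueeze(0)
logits = model(img)
logits[0, logits.argmax()].backward()

# Weight each activation map by its average gradient, then keep positive evidence.
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["value"]).sum(dim=1))
cam = F.interpolate(cam.unsqueeze(0), size=(224, 224), mode="bilinear")
print(cam.shape)  # (1, 1, 224, 224) heatmap highlighting suspected damage regions
```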

포인트 클라우드를 이용한 블록체인 기반 설명 가능한 인공지능 연구 (Explainable Artificial Intelligence Study based on Blockchain Using Point Cloud)

  • 홍성혁
    • 융합정보논문지
    • /
    • Vol. 11, No. 8
    • /
    • pp.36-41
    • /
    • 2021
  • Technologies that use artificial intelligence for prediction and analysis continue to advance, but they suffer from the black-box problem: the decision-making process cannot be clearly interpreted. Consequently, users cannot interpret the decision-making process of an AI model and therefore cannot trust its results. This study examines this problem of artificial intelligence and investigates explainable artificial intelligence using blockchain as a way to solve it. Data from the decision-making process of an explainable AI model are stored on a blockchain in parts, together with timestamps and other metadata. The blockchain prevents forgery and tampering of the stored data, and by the nature of blockchain, users can freely access the data stored in the blocks, such as the decision-making process. Because the difficulty of building explainable AI models is largely due to the complexity of existing models, point clouds are used to make 3D data processing more efficient, shortening the decision-making process and facilitating the construction of explainable AI models. To solve the oracle problem, in which data can be forged or tampered with while being stored on the blockchain, this study proposes a blockchain-based explainable AI model that routes storage through an intermediary, thereby addressing the black-box problem of artificial intelligence.
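
As a rough illustration of the storage idea described above (timestamped decision-process records chained so that tampering is detectable), here is a minimal hash-chain sketch in plain Python; the paper's actual blockchain platform, data layout, and intermediary mechanism are not reproduced.

```python
# A minimal hash-chain sketch of timestamped decision records (illustrative only).
import hashlib, json, time

def make_block(decision_record, prev_hash):
    block = {
        "timestamp": time.time(),        # timestamp for each stored part
        "record": decision_record,       # e.g., one step of the model's decision process
        "prev_hash": prev_hash,
    }
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

chain = [make_block({"step": "genesis"}, "0" * 64)]
for step in [{"step": "point cloud preprocessing", "points": 120000},
             {"step": "classification", "label": "chair", "score": 0.93}]:
    chain.append(make_block(step, chain[-1]["hash"]))

# Any tampering with a stored record breaks the hash links that follow it.
for prev, curr in zip(chain, chain[1:]):
    assert curr["prev_hash"] == prev["hash"]
```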

소셜 네트워크 분석과 토픽 모델링을 활용한 설명 가능 인공지능 연구 동향 분석 (XAI Research Trends Using Social Network Analysis and Topic Modeling)

  • 문건두;김경재
    • Journal of Information Technology Applications and Management
    • /
    • Vol. 30, No. 1
    • /
    • pp.53-70
    • /
    • 2023
  • Artificial intelligence is now part of familiar, modern society rather than the distant future. As artificial intelligence and machine learning have become more advanced and more complex, it has become difficult for people to grasp their structure and the basis of their decisions, because machine learning shows only results, not the whole process. As artificial intelligence developed and became more common, people wanted explanations that could give them trust in it. Recognizing the necessity and importance of explainable artificial intelligence (XAI), this study examined trends in XAI research by applying social network analysis and topic modeling to IEEE publications from 2004, when the concept of explainable artificial intelligence was defined, to 2022. Social network analysis reveals the overall pattern of nodes across a large number of documents, and the connections between keywords show the structure of their relationships; topic modeling identifies more objective topics by extracting keywords from unstructured data. Both methods are well suited to trend analysis. The analysis found that the application of XAI is gradually expanding into various fields beyond machine learning and deep learning.
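
For readers unfamiliar with the topic-modeling half of the method, a minimal LDA example is sketched below with scikit-learn; the four placeholder abstracts and the two-topic setting are illustrative only and do not reflect the paper's IEEE corpus or parameters.

```python
# A small topic-modeling sketch with scikit-learn's LDA (placeholder corpus).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

abstracts = [
    "explainable artificial intelligence for medical image diagnosis",
    "SHAP values interpret gradient boosting predictions",
    "graph neural network explanation via gradients",
    "trust and transparency in deep learning models",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(abstracts)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vec.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = comp.argsort()[-5:][::-1]          # five highest-weight terms per topic
    print(f"topic {k}:", ", ".join(terms[i] for i in top))
```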

A reliable intelligent diagnostic assistant for nuclear power plants using explainable artificial intelligence of GRU-AE, LightGBM and SHAP

  • Park, Ji Hun;Jo, Hye Seon;Lee, Sang Hyun;Oh, Sang Won;Na, Man Gyun
    • Nuclear Engineering and Technology
    • /
    • Vol. 54, No. 4
    • /
    • pp.1271-1287
    • /
    • 2022
  • When abnormal operating conditions occur in nuclear power plants, operators must identify the cause and implement the necessary mitigation measures. Accordingly, the operator must rapidly and accurately analyze the symptom requirements of more than 200 abnormal scenarios from the trends of many variables in order to perform diagnostic tasks and implement mitigation actions quickly. However, the probability of human error increases owing to the characteristics of these diagnostic tasks. Research on diagnostic tasks based on artificial intelligence (AI) has recently been conducted to reduce the likelihood of human error; however, reliability issues due to the black-box characteristics of AI have been pointed out. Hence, the application of eXplainable Artificial Intelligence (XAI), which can provide operators with evidence for AI diagnoses, is considered. In conclusion, XAI to solve the reliability problem of AI is incorporated into the AI-based diagnostic algorithm. A reliable intelligent diagnostic assistant based on a merged diagnostic algorithm is developed in the form of an operator support system and includes an interface to inform operators efficiently.
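
The following sketch illustrates only the LightGBM-plus-SHAP portion of the described pipeline, on synthetic stand-in data; the GRU-AE stage, the real plant variables, and the operator interface are omitted.

```python
# A hedged sketch of the LightGBM + SHAP stage (synthetic stand-in data).
import numpy as np
import lightgbm as lgb
import shap

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                  # e.g., 10 monitored plant variables
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)   # stand-in abnormal-scenario label

model = lgb.LGBMClassifier(n_estimators=200).fit(X, y)

# TreeExplainer gives per-variable contributions that could serve as
# the diagnostic evidence presented to the operator.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])
print(np.asarray(shap_values).shape)
```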

설명가능한 인공지능을 통한 마르텐사이트 변태 온도 예측 모델 및 거동 분석 연구 (Study on predictive model and mechanism analysis for martensite transformation temperatures through explainable artificial intelligence)

  • 전준협;손승배;정재길;이석재
    • 열처리공학회지
    • /
    • Vol. 37, No. 3
    • /
    • pp.103-113
    • /
    • 2024
  • The martensite volume fraction significantly affects the mechanical properties of alloy steels. The martensite start temperature (Ms), the transformation temperature for 50 vol.% martensite (M50), and the transformation temperature for 90 vol.% martensite (M90) are important transformation temperatures for controlling the martensite phase fraction. Several researchers have proposed empirical equations and machine learning models to predict the Ms temperature. These numerical approaches can easily predict the Ms temperature without additional experiments or cost. However, to control the martensite phase fraction more precisely, the prediction error of the Ms model must be reduced, and prediction models for the other martensite transformation temperatures (M50, M90) are needed. In the present study, machine learning models were applied to develop predictive models for the Ms, M50, and M90 temperatures. To explain the prediction mechanisms and assess feature importance for the martensite transformation temperatures, explainable artificial intelligence (XAI) is employed. Among the machine learning models compared, random forest regression (RFR) showed the best performance in predicting the Ms, M50, and M90 temperatures. Feature importances were derived and the prediction mechanisms discussed using XAI.
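
As a hedged illustration of the RFR-plus-importance workflow, the sketch below fits a random forest to synthetic composition data (the target loosely follows the well-known Andrews Ms equation) and ranks features by permutation importance; the paper's dataset and its exact XAI procedure are not reproduced.

```python
# A minimal RFR + permutation-importance sketch (synthetic composition data).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
features = ["C", "Mn", "Si", "Ni", "Cr"]             # alloying elements, wt.%
X = rng.uniform(0, 2, size=(300, len(features)))
# Target loosely follows the Andrews Ms equation, plus noise.
Ms = 539 - 423 * X[:, 0] - 30.4 * X[:, 1] - 17.7 * X[:, 3] + rng.normal(0, 5, 300)

rfr = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, Ms)
imp = permutation_importance(rfr, X, Ms, n_repeats=10, random_state=0)
for name, score in sorted(zip(features, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")                    # carbon should dominate
```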

A Study on Explainable Artificial Intelligence-based Sentimental Analysis System Model

  • Song, Mi-Hwa
    • International Journal of Internet, Broadcasting and Communication
    • /
    • Vol. 14, No. 1
    • /
    • pp.142-151
    • /
    • 2022
  • In this paper, a model combined with explainable artificial intelligence (XAI) techniques is presented to secure the reliability of machine-learning-based sentiment analysis and prediction. The applicability of the proposed model is tested and described using the IMDB dataset. This approach has the advantage of being able to explain, from various perspectives, how the data affect the model's prediction results. In the many applications of sentiment analysis, such as recommendation systems, emotion analysis through facial expression recognition, and opinion analysis, presenting users with more specific and evidence-based analysis results makes it possible to gain their trust in the system.
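
The paper does not detail its XAI models, so the following toy sketch only shows the general idea with a transparent TF-IDF plus logistic-regression classifier whose per-word contributions explain one prediction; the reviews are placeholders, not IMDB data.

```python
# A toy sketch of explainable sentiment analysis (placeholder reviews).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

reviews = ["a wonderful, moving film", "boring plot and terrible acting",
           "great performances throughout", "awful, a complete waste of time"]
labels = [1, 0, 1, 0]                      # 1 = positive, 0 = negative

vec = TfidfVectorizer()
X = vec.fit_transform(reviews)
clf = LogisticRegression().fit(X, labels)

# Explain one prediction by listing each word's contribution (weight * tf-idf).
test = "a terrible but moving film"
x = vec.transform([test])
contrib = x.multiply(clf.coef_[0]).toarray()[0]
terms = vec.get_feature_names_out()
for i in x.nonzero()[1]:
    print(f"{terms[i]:>10s}: {contrib[i]:+.3f}")
```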

Bankruptcy Prediction with Explainable Artificial Intelligence for Early-Stage Business Models

  • Tuguldur Enkhtuya;Dae-Ki Kang
    • International Journal of Internet, Broadcasting and Communication
    • /
    • Vol. 15, No. 3
    • /
    • pp.58-65
    • /
    • 2023
  • Bankruptcy is a significant risk for start-up companies, but with the help of cutting-edge artificial intelligence technology, bankruptcy can now be predicted with detailed explanations. In this paper, we implemented the Category Boosting (CatBoost) algorithm after data cleaning and editing with OpenRefine. We then explained our model using the Shapash library, incorporating domain knowledge. By leveraging the 5 C's of credit as domain knowledge, financial analysts in banks and investors can use the detailed results provided by our model to enhance their decision-making, even without extensive knowledge of AI. This empowers investors to identify potential bankruptcy risks in their business models, enabling them to make the necessary improvements or reconsider their ventures before proceeding. As a result, our model serves as a "glass-box" model, allowing end users to understand which specific financial indicators contribute to the prediction of bankruptcy. This transparency enhances trust and provides valuable insights for decision-makers in mitigating bankruptcy risks.
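
A hedged sketch of the CatBoost-plus-Shapash combination follows; the financial indicators are synthetic stand-ins for 5 C's-style features, and the Shapash calls shown assume shapash >= 2, which may differ from the version the authors used.

```python
# A hedged CatBoost + Shapash sketch (synthetic stand-in indicators; shapash>=2 API).
import numpy as np
import pandas as pd
from catboost import CatBoostClassifier
from shapash import SmartExplainer

rng = np.random.default_rng(0)
X = pd.DataFrame({                         # toy stand-ins for 5 C's-style indicators
    "debt_ratio": rng.uniform(0, 1, 400),
    "cash_flow": rng.normal(0, 1, 400),
    "credit_history": rng.integers(0, 10, 400),
})
y = pd.Series((X["debt_ratio"] - 0.3 * X["cash_flow"] > 0.6).astype(int),
              name="bankrupt")

model = CatBoostClassifier(iterations=200, verbose=False).fit(X, y)

xpl = SmartExplainer(model=model)          # wraps the model for "glass-box" reporting
xpl.compile(x=X)
xpl.plot.features_importance()             # which indicators drive the prediction
```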

설명 가능한 인공지능(XAI)을 활용한 침입탐지 신뢰성 강화 방안 (The Enhancement of intrusion detection reliability using Explainable Artificial Intelligence(XAI))

  • 정일옥;최우빈;김수철
    • 융합보안논문지
    • /
    • Vol. 22, No. 3
    • /
    • pp.101-110
    • /
    • 2022
  • As the use of artificial intelligence increases across many fields, attempts to solve various issues in intrusion detection with artificial intelligence are also increasing. However, most such systems are black-box based, unable to explain or trace the reasons behind results predicted by machine learning, which creates difficulties for the security experts who must act on them. To solve this problem, research on explainable AI (XAI), which helps interpret and understand machine-learning decisions, is increasing in various fields. Accordingly, this paper proposes explainable AI to strengthen the reliability of machine-learning-based intrusion detection predictions. First, an intrusion detection model is implemented with XGBoost, and an explanation of the model is implemented with SHAP. The conventional feature importance and the SHAP results are then compared and analyzed to give security experts confidence in their decisions. For the experiment, the PKDD2007 dataset was used, and the relationship between conventional feature importance and SHAP values was analyzed; this verified that SHAP-based explainable AI is a valid way to give security experts confidence in the prediction results of intrusion detection models.
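
The sketch below illustrates the comparison the paper describes, XGBoost's built-in feature importance versus SHAP attributions, on synthetic stand-in features; the PKDD2007 preprocessing and the actual intrusion features are not reproduced.

```python
# A minimal XGBoost + SHAP comparison sketch (synthetic stand-in features).
import numpy as np
import xgboost as xgb
import shap

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))                       # stand-in request features
y = (X[:, 2] - X[:, 5] > 0.5).astype(int)            # stand-in intrusion label

model = xgb.XGBClassifier(n_estimators=200, eval_metric="logloss").fit(X, y)

# Built-in (global) feature importance vs. SHAP (per-prediction) attributions.
print("built-in importance:", model.feature_importances_)
explainer = shap.TreeExplainer(model)
print("SHAP for one alert:", explainer.shap_values(X[:1]))
```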

A Gradient-Based Explanation Method for Node Classification Using Graph Convolutional Networks

  • Chaehyeon Kim;Hyewon Ryu;Ki Yong Lee
    • Journal of Information Processing Systems
    • /
    • Vol. 19, No. 6
    • /
    • pp.803-816
    • /
    • 2023
  • Explainable artificial intelligence refers to methods that explain how a complex model (e.g., a deep neural network) yields its output for a given input. Recently, graph-type data have been widely used in various fields, and diverse graph neural networks (GNNs) have been developed for them. However, methods to explain the behavior of GNNs have not been studied much, and only a limited understanding of GNNs is currently available. Therefore, in this paper, we propose an explanation method for node classification using graph convolutional networks (GCNs), a representative type of GNN. The proposed method finds out which features of each node have the greatest influence on the GCN's classification of that node. It identifies influential features by backtracking the layers of the GCN from the output layer to the input layer using gradients. Experimental results on both synthetic and real datasets demonstrate that the proposed method accurately identifies the features of each node that most influence its classification.
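
As a rough illustration of gradient-based node attribution (plain input gradients on a two-layer GCN, not the paper's exact layer-backtracking procedure), consider the following self-contained PyTorch sketch with a synthetic graph.

```python
# A compact gradient-attribution sketch for a GCN node classifier (synthetic graph).
import torch
import torch.nn as nn

N, F_in, C = 6, 4, 2                       # nodes, input features, classes
A = torch.eye(N)                           # adjacency with self-loops
A[0, 1] = A[1, 0] = A[1, 2] = A[2, 1] = 1
deg = A.sum(1)
A_hat = A / torch.sqrt(deg[:, None] * deg[None, :])   # symmetric normalization

class GCN(nn.Module):
    def __init__(self):
        super().__init__()
        self.w1 = nn.Linear(F_in, 8)
        self.w2 = nn.Linear(8, C)
    def forward(self, x):
        h = torch.relu(A_hat @ self.w1(x))
        return A_hat @ self.w2(h)

X = torch.randn(N, F_in, requires_grad=True)
model = GCN()
out = model(X)

# Backpropagate the predicted class score of node 0 down to the input features;
# large-magnitude gradients mark the features most influential for that node.
out[0, out[0].argmax()].backward()
print(X.grad[0])                           # attribution for node 0's own features
```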