• Title/Summary/Keyword: explainable artificial intelligence

A Study to Design the Instructional Program based on Explainable Artificial Intelligence (설명가능한 인공지능기반의 인공지능 교육 프로그램 개발)

  • Park, Dabin; Shin, Seungki
    • 한국정보교육학회:학술대회논문집 / 2021.08a / pp.149-157 / 2021
  • Ahead of the introduction of artificial intelligence education into the curriculum revised in 2022, diverse class cases based on artificial intelligence need to be developed. In this study, we designed an artificial intelligence education program based on explainable artificial intelligence using design-based research. Artificial intelligence, which covers the three areas of AI fundamentals, application, and ethics and connects readily to real-life cases, was set as the key topic. Design-based research typically runs three or more iterative cycles, but this study reports the results of the first cycle of design, application, and evaluation. We plan to complete the program through a third round of revision and supplementation after applying it in schools. This research should support the development of the artificial intelligence education being introduced in schools.

Explainable Artificial Intelligence Study based on Blockchain Using Point Cloud (포인트 클라우드를 이용한 블록체인 기반 설명 가능한 인공지능 연구)

  • Hong, Sunghyuck
    • Journal of Convergence for Information Technology / v.11 no.8 / pp.36-41 / 2021
  • Although prediction and analysis technology based on artificial intelligence continues to advance, the black-box problem leaves the decision-making process uninterpreted. Because the decision process of an AI model cannot be interpreted from the user's point of view, its results are difficult to trust. We investigated these problems and studied explainable artificial intelligence based on Blockchain to solve them. Data from the decision-making process of the artificial intelligence model are stored in the Blockchain with timestamps, among other metadata. The Blockchain provides anti-counterfeiting for the stored data and, by its nature, allows free access to the data stored in its blocks, such as the decision processes. Much of the difficulty in creating explainable artificial intelligence models stems from the complexity of existing models. Therefore, using point clouds to make 3D data processing more efficient shortens the decision-making process and facilitates an explainable artificial intelligence model. Finally, to address the oracle problem, in which data may be falsified or corrupted on their way into the Blockchain, we propose a Blockchain-based explainable artificial intelligence model that passes the data through an intermediary during the storage process.
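
The storage mechanism this abstract describes, decision records timestamped and made tamper-evident in a Blockchain, can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation; the hash-chained DecisionChain class and its sample record are invented for this sketch.

```python
# Minimal sketch (not the paper's system): a hash-chained log that stores
# model decision records with timestamps, so editing a stored record later
# breaks the chain, mimicking Blockchain's anti-counterfeiting property.
import hashlib
import json
import time

class DecisionChain:
    def __init__(self):
        self.blocks = []

    def _hash(self, block):
        payload = json.dumps(
            {k: block[k] for k in ("index", "timestamp", "record", "prev_hash")},
            sort_keys=True,
        ).encode()
        return hashlib.sha256(payload).hexdigest()

    def add_record(self, record):
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        block = {
            "index": len(self.blocks),
            "timestamp": time.time(),
            "record": record,  # e.g., inputs, prediction, feature attributions
            "prev_hash": prev_hash,
        }
        block["hash"] = self._hash(block)
        self.blocks.append(block)

    def verify(self):
        # Recompute every hash; any edited record invalidates the chain.
        for i, block in enumerate(self.blocks):
            if block["hash"] != self._hash(block):
                return False
            if i > 0 and block["prev_hash"] != self.blocks[i - 1]["hash"]:
                return False
        return True

chain = DecisionChain()
chain.add_record({"input_id": "cloud_001", "prediction": "class_A"})
print(chain.verify())  # True until any stored record is altered
```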

XAI Research Trends Using Social Network Analysis and Topic Modeling (소셜 네트워크 분석과 토픽 모델링을 활용한 설명 가능 인공지능 연구 동향 분석)

  • Gun-doo Moon; Kyoung-jae Kim
    • Journal of Information Technology Applications and Management / v.30 no.1 / pp.53-70 / 2023
  • Artificial intelligence has become part of modern society rather than the distant future. As artificial intelligence and machine learning have grown more advanced and complicated, it has become difficult for people to grasp their structure and the basis of their decisions, because machine learning shows only results, not the whole process. As artificial intelligence has developed and become more common, people have wanted explanations that could give them trust in it. Recognizing the necessity and importance of explainable artificial intelligence (XAI), this study examined the trends of XAI research by applying social network analysis and topic modeling to IEEE publications from 2004, when the concept of explainable artificial intelligence was first defined, to 2022. Social network analysis reveals the overall pattern of nodes across a large number of documents, and the connections between keywords show the meaning of the relationship structure; topic modeling identifies more objective topics by extracting keywords from unstructured data and grouping them into topics. Both methods are suitable for trend analysis. The analysis found that the application of XAI is gradually expanding into various fields beyond machine learning and deep learning.
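
Of the two methods the abstract names, topic modeling is the more mechanical to demonstrate; a minimal sketch using scikit-learn's LDA is shown below. The toy abstracts and two-topic setting are illustrative stand-ins for the paper's IEEE corpus, not its data.

```python
# Hedged topic-modeling sketch: LDA over a handful of made-up abstracts,
# printing the top keywords that characterize each discovered topic.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

abstracts = [
    "explainable artificial intelligence for deep learning models",
    "shap values interpret machine learning predictions",
    "graph neural network explanation methods",
    "medical imaging diagnosis with interpretable models",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(abstracts)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"topic {k}: {top}")  # top keywords per topic
```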

A reliable intelligent diagnostic assistant for nuclear power plants using explainable artificial intelligence of GRU-AE, LightGBM and SHAP

  • Park, Ji Hun; Jo, Hye Seon; Lee, Sang Hyun; Oh, Sang Won; Na, Man Gyun
    • Nuclear Engineering and Technology / v.54 no.4 / pp.1271-1287 / 2022
  • When abnormal operating conditions occur in nuclear power plants, operators must identify the cause and implement the necessary mitigation measures. To perform these diagnostic tasks, the operator must rapidly and accurately check the symptom requirements of more than 200 abnormal scenarios against the trends of many variables and implement mitigation actions quickly. However, the characteristics of these diagnostic tasks increase the probability of human error. Research on diagnostic tasks based on Artificial Intelligence (AI) has recently been conducted to reduce the likelihood of human error; however, reliability issues caused by the black-box characteristics of AI have been pointed out. Hence, we consider the application of eXplainable Artificial Intelligence (XAI), which can provide operators with the evidence behind an AI diagnosis. In conclusion, XAI is incorporated into the AI-based diagnostic algorithm to solve its reliability problem, and a reliable intelligent diagnostic assistant based on the merged diagnostic algorithm is developed in the form of an operator support system, including an interface that informs operators efficiently.
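
The SHAP stage of such a pipeline can be sketched as follows; this is a hedged illustration of LightGBM with SHAP's tree explainer on synthetic data, not the paper's GRU-AE stage or its plant variables.

```python
# Minimal sketch of tree-model explanation with SHAP: train a LightGBM
# classifier on synthetic data, then compute per-feature attributions that
# could serve as diagnostic evidence for an operator.
import lightgbm as lgb
import shap
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = lgb.LGBMClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])  # per-feature evidence per sample
print(shap_values)
```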

A Study on Explainable Artificial Intelligence-based Sentimental Analysis System Model

  • Song, Mi-Hwa
    • International Journal of Internet, Broadcasting and Communication / v.14 no.1 / pp.142-151 / 2022
  • In this paper, a model combined with explainable artificial intelligence (XAI) techniques is presented to secure the reliability of machine learning-based sentiment analysis and prediction. The applicability of the proposed model is tested and described using the IMDB dataset. This approach has the advantage of explaining, from various perspectives, how the data affect the model's prediction results. In applications of sentiment analysis such as recommendation systems, emotion analysis through facial expression recognition, and opinion analysis, presenting users with more specific, evidence-based analysis results in this way can earn their trust in the system.
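
As a rough illustration of evidence-based sentiment explanation, the sketch below trains a linear classifier and reads off per-token contributions; the tiny corpus stands in for IMDB, and the model is an assumption for this sketch, not the paper's.

```python
# Hedged sketch: a linear sentiment model whose per-token contributions
# (tf-idf weight * learned coefficient) show how each word pushes a review
# toward positive or negative sentiment.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

reviews = ["great movie, loved it", "terrible plot and bad acting",
           "wonderful performance", "boring and bad"]
labels = [1, 0, 1, 0]  # 1 = positive sentiment

vec = TfidfVectorizer()
X = vec.fit_transform(reviews)
clf = LogisticRegression().fit(X, labels)

test = "great acting but boring plot"
x = vec.transform([test]).toarray()[0]
contrib = x * clf.coef_[0]          # contribution of each present token
tokens = vec.get_feature_names_out()
for i in np.nonzero(x)[0]:
    print(f"{tokens[i]:>12}: {contrib[i]:+.3f}")
```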

Bankruptcy Prediction with Explainable Artificial Intelligence for Early-Stage Business Models

  • Tuguldur Enkhtuya; Dae-Ki Kang
    • International Journal of Internet, Broadcasting and Communication / v.15 no.3 / pp.58-65 / 2023
  • Bankruptcy is a significant risk for start-up companies, but with the help of cutting-edge artificial intelligence technology, it can now be predicted with detailed explanations. In this paper, we implemented the Category Boosting (CatBoost) algorithm following data cleaning and editing with OpenRefine, and we explained the model using the Shapash library, incorporating domain knowledge. By leveraging the 5 C's of credit as domain knowledge, financial analysts in banks and investors can use the detailed results of our model to enhance their decision-making, even without extensive knowledge of AI. This empowers investors to identify potential bankruptcy risks in their business models and to make the necessary improvements, or reconsider their ventures, before proceeding. Our model thus serves as a "glass-box" model, allowing end users to understand which specific financial indicators contribute to the prediction of bankruptcy. This transparency enhances trust and provides valuable insight for decision-makers in mitigating bankruptcy risks.
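
A minimal sketch of the modeling step is shown below, assuming CatBoost with its built-in SHAP values in place of the paper's Shapash visualizations; the financial ratios and bankruptcy labels are synthetic placeholders.

```python
# Hedged sketch: CatBoost ("Category Boosting") on made-up financial ratios,
# with per-sample SHAP values exposing which indicators drive each prediction.
import numpy as np
from catboost import CatBoostClassifier, Pool

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))            # hypothetical ratios (e.g., liquidity)
y = (X[:, 1] - X[:, 0] > 0).astype(int)  # synthetic bankruptcy label

model = CatBoostClassifier(iterations=100, verbose=False).fit(X, y)

# Per-sample SHAP values; the last column is the expected (base) value.
shap_vals = model.get_feature_importance(Pool(X, y), type="ShapValues")
print(shap_vals[0])  # which indicators pushed sample 0 toward bankruptcy
```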

The Enhancement of Intrusion Detection Reliability Using Explainable Artificial Intelligence (XAI) (설명 가능한 인공지능(XAI)을 활용한 침입탐지 신뢰성 강화 방안)

  • Jung Il Ok; Choi Woo Bin; Kim Su Chul
    • Convergence Security Journal / v.22 no.3 / pp.101-110 / 2022
  • As cases of applying artificial intelligence in various fields increase, attempts to solve issues in intrusion detection with artificial intelligence are also increasing. However, the black-box nature of machine learning, which can neither explain nor trace the reasons behind predicted results, creates difficulties for the security professionals who must rely on them. To solve this problem, research on explainable AI (XAI), which helps interpret and understand decisions made by machine learning, is growing in various fields. In this paper, we therefore propose an explainable AI approach to enhance the reliability of machine learning-based intrusion detection predictions. First, an intrusion detection model is implemented with XGBoost, and an explanation of the model is produced with SHAP. The approach then supports security experts' decision-making by comparing and analyzing the conventional feature importance against the SHAP results. In our experiment on the PKDD2007 dataset, we analyzed the association between conventional feature importance and SHAP values and verified that SHAP-based explainable AI validly gives security experts confidence in the predictions of intrusion detection models.
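
The comparison the abstract describes, built-in feature importance set against SHAP attributions for an XGBoost model, can be sketched as follows; the synthetic features here merely stand in for the PKDD2007 intrusion-detection data.

```python
# Minimal sketch of the comparison idea: XGBoost's gain-based feature
# importance alongside mean |SHAP| per feature on synthetic data.
import numpy as np
import shap
import xgboost as xgb
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=400, n_features=8, random_state=0)
model = xgb.XGBClassifier(n_estimators=100).fit(X, y)

builtin = model.feature_importances_                    # built-in importance
shap_values = shap.TreeExplainer(model).shap_values(X)  # per-sample attributions
mean_abs_shap = np.abs(shap_values).mean(axis=0)

for i, (b, s) in enumerate(zip(builtin, mean_abs_shap)):
    print(f"feature {i}: builtin={b:.3f}  mean|SHAP|={s:.3f}")
```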

A Gradient-Based Explanation Method for Node Classification Using Graph Convolutional Networks

  • Chaehyeon Kim; Hyewon Ryu; Ki Yong Lee
    • Journal of Information Processing Systems / v.19 no.6 / pp.803-816 / 2023
  • Explainable artificial intelligence refers to methods that explain how a complex model (e.g., a deep neural network) yields its output for a given input. Recently, graph-type data have been widely used in various fields, and diverse graph neural networks (GNNs) have been developed for them. However, methods for explaining the behavior of GNNs have received little study, and only a limited understanding of GNNs is currently available. Therefore, in this paper, we propose an explanation method for node classification using graph convolutional networks (GCNs), a representative type of GNN. The proposed method identifies which features of each node have the greatest influence on the GCN's classification of that node, by backtracking through the layers of the GCN from the output layer to the input layer using the gradients. Experimental results on both synthetic and real datasets demonstrate that the proposed method accurately identifies the features of each node that most influence its classification.
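
The gradient-backtracking idea can be illustrated with a toy two-layer GCN; the sketch below is a simplified rendering of that idea under stated assumptions (an untrained network and a hand-made graph), not the authors' exact procedure.

```python
# Hedged sketch: gradient of a node's class score w.r.t. the input features
# of a tiny GCN, used to rank which features influenced that node's class.
import torch
import torch.nn.functional as F

# Toy graph: 4 nodes, 3 features; symmetric normalized adjacency w/ self-loops.
A = torch.tensor([[1., 1., 0., 0.],
                  [1., 1., 1., 0.],
                  [0., 1., 1., 1.],
                  [0., 0., 1., 1.]])
deg_inv_sqrt = A.sum(1).rsqrt()
A_hat = deg_inv_sqrt[:, None] * A * deg_inv_sqrt[None, :]

X = torch.randn(4, 3, requires_grad=True)      # node features
W1, W2 = torch.randn(3, 8), torch.randn(8, 2)  # (untrained) layer weights

H = F.relu(A_hat @ X @ W1)   # GCN layer 1
logits = A_hat @ H @ W2      # GCN layer 2: per-node class scores

node, cls = 2, logits[2].argmax()
logits[node, cls].backward()  # backtrack gradients from output to input
influence = X.grad.abs()      # |d score / d x|: feature influence per node
print(influence[node])        # which features of node 2 drove its class
```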

A Review of Explainable AI Techniques in Medical Imaging (의료영상 분야를 위한 설명가능한 인공지능 기술 리뷰)

  • Lee, DongEon; Park, ChunSu; Kang, Jeong-Woon; Kim, MinWoo
    • Journal of Biomedical Engineering Research / v.43 no.4 / pp.259-270 / 2022
  • Artificial intelligence (AI) has been studied across many fields of medical imaging. Currently, state-of-the-art deep learning (DL) techniques deliver high diagnostic accuracy and fast computation, yet they are rarely used in real clinical practice because of a lack of reliability in their results. Most DL models achieve high performance by extracting features from large volumes of data, but increasing model complexity and nonlinearity turn such models into black boxes that are seldom accessible, interpretable, or transparent. As a result, scientific interest in the field of explainable artificial intelligence (XAI) is gradually growing. This study reviews the diverse XAI approaches currently exploited in medical imaging. We describe the concepts behind the methods, introduce studies that apply them to imaging modalities such as computed tomography (CT), magnetic resonance imaging (MRI), and endoscopy, and finally discuss the limitations and challenges XAI faces in future studies.
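
As one example of the techniques such reviews typically cover, the sketch below computes a Grad-CAM heat map for a ResNet-18; the backbone and random input are placeholders for illustration, not a clinical model or dataset.

```python
# Hedged Grad-CAM sketch: gradient-weighted activations of the last conv
# block, upsampled to image size to highlight regions driving a prediction.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
feats, grads = {}, {}
layer = model.layer4  # last convolutional block

layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

x = torch.randn(1, 3, 224, 224)  # placeholder image
score = model(x)[0].max()        # top class score
score.backward()

weights = grads["a"].mean(dim=(2, 3), keepdim=True)  # global-average gradients
cam = F.relu((weights * feats["a"]).sum(dim=1))      # weighted activation map
cam = F.interpolate(cam[None], size=(224, 224), mode="bilinear")[0]
print(cam.shape)  # (1, 224, 224): saliency over the input image
```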

A Study on Human-AI Collaboration Process to Support Evidence-Based National Innovation Monitoring: Case Study on Ministry of Oceans and Fisheries (Human-AI 협력 프로세스 기반의 증거기반 국가혁신 모니터링 연구: 해양수산부 사례)

  • Jung Sun Lim; Seoung Hun Bae; Kil-Ho Ryu; Sang-Gook Kim
    • Journal of Korean Society of Industrial and Systems Engineering / v.46 no.2 / pp.22-31 / 2023
  • Governments around the world are enacting laws that mandate explainable traceability when AI (Artificial Intelligence) is used to solve real-world problems. HAI (Human-Centric Artificial Intelligence) is an approach that guides human decision-making through Human-AI collaboration. This research presents a case study that implements Human-AI collaboration to achieve explainable traceability in governmental data analysis. The collaboration explored here performs AI inference to generate labels, followed by AI interpretation to make the results more explainable and traceable. The study used an example dataset from the Ministry of Oceans and Fisheries to reproduce the Human-AI collaboration process employed in actual policy-making, in which the Ministry of Science and ICT used R&D PIE (R&D Platform for Investment and Evaluation) to build a government investment portfolio.