• Title/Summary/Keyword: explainable AI


Multi-dimensional Contextual Conditions-driven Mutually Exclusive Learning for Explainable AI in Decision-Making

  • Hyun Jung Lee
    • Journal of Internet Computing and Services
    • /
    • v.25 no.4
    • /
    • pp.7-21
    • /
    • 2024
  • There are various machine learning techniques, such as Reinforcement Learning, Deep Learning, and Neural Network Learning. Recently, Large Language Models (LLMs) have been widely used for Generative AI based on Reinforcement Learning: they make decisions with the most optimal rewards through a fine-tuning process in a particular situation. Unfortunately, LLMs cannot explain how they reach a goal, because their training is black-box AI. Reinforcement Learning, as black-box AI, relies on a graph-evolving structure that derives an enhanced solution through adjustment by human feedback or reinforced data. In this research, Mutually Exclusive Learning (MEL) is proposed for mutually exclusive decision-making: it provides explanations of the chosen goals, which are reached by deciding between two ends under specified conditions. In MEL, the decision-making process is based on a tree-based structure whose branch-pruning steps serve as explanations of how the goals are achieved. A goal is reached through trade-offs among mutually exclusive alternatives according to specific contextual conditions; the tree-based structure therefore provides feasible solutions together with explanations based on the pruned branches. The sequence of pruning steps explains the inferences and the way the goal is reached, as Explainable AI (XAI). The learning process prunes branches according to multi-dimensional contextual conditions; to deepen the search, these comprise a time window that determines the temporal perspective, a depth of phases for lookahead, and decision criteria for pruning branches. The goal depends on the pruning policy, which can change dynamically with the situation configured by the specific multi-dimensional contextual conditions at a particular moment. The explanation is represented by the episode chosen among the decision alternatives according to the configured situation. As an example of a mutually exclusive problem, an employment process is used to demonstrate how the goal is reached and explained through the pruned branches. Finally, further work to verify the effectiveness of MEL through experiments is discussed.
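The abstract describes condition-driven branch pruning only at a high level. A minimal stdlib Python sketch of the idea, in which pruned branches are logged so that the surviving path doubles as the explanation, could look like the following (all node names, conditions, and values are hypothetical, not from the paper):

```python
# Illustrative sketch of MEL-style pruning: each branch carries a contextual
# condition; pruned branches are logged so the chosen path can be explained.

def prune(node, context, log, path=()):
    """Depth-first search that prunes branches whose contextual condition
    fails and records why each branch was cut."""
    name, condition, children, value = node
    if not condition(context):
        log.append(f"pruned '{name}': condition failed")
        return None
    if not children:                      # leaf: a feasible goal
        return (path + (name,), value)
    best = None
    for child in children:
        result = prune(child, context, log, path + (name,))
        if result and (best is None or result[1] > best[1]):
            best = result
    return best

# Tiny mutually exclusive example: hire now vs. defer hiring.
always = lambda ctx: True
tree = ("root", always, [
    ("hire_now", lambda c: c["budget"] >= 50, [], 8),
    ("defer",    lambda c: c["deadline"] > 30, [], 5),
], None)

log = []
goal = prune(tree, {"budget": 40, "deadline": 60}, log)
# goal is the chosen episode; log holds the pruning-based explanation
```

Here the chosen episode is `("root", "defer")`, and the log entry explaining why `hire_now` was pruned is exactly the kind of explanation the paper attributes to the pruning sequence.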

A reliable intelligent diagnostic assistant for nuclear power plants using explainable artificial intelligence of GRU-AE, LightGBM and SHAP

  • Park, Ji Hun;Jo, Hye Seon;Lee, Sang Hyun;Oh, Sang Won;Na, Man Gyun
    • Nuclear Engineering and Technology
    • /
    • v.54 no.4
    • /
    • pp.1271-1287
    • /
    • 2022
  • When abnormal operating conditions occur in nuclear power plants, operators must identify the cause and implement the necessary mitigation measures. The operator must therefore rapidly and accurately analyze the symptom requirements of more than 200 abnormal scenarios from the trends of many variables in order to perform diagnostic tasks and implement mitigation actions quickly. However, the characteristics of these diagnostic tasks increase the probability of human error. Research on diagnostic tasks based on Artificial Intelligence (AI) has recently been conducted to reduce the likelihood of human error; however, reliability issues arising from the black-box nature of AI have been pointed out. Hence, the application of eXplainable Artificial Intelligence (XAI), which can provide operators with the evidence behind an AI diagnosis, is considered, and XAI is incorporated into the AI-based diagnostic algorithm to address its reliability problem. A reliable intelligent diagnostic assistant based on the merged diagnostic algorithm is developed in the form of an operator support system, including an interface that informs operators efficiently.
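The SHAP component mentioned above assigns each input variable an additive contribution to a prediction. As a self-contained illustration of the underlying idea (not the paper's GRU-AE/LightGBM pipeline), the sketch below computes exact Shapley attributions for a toy "diagnostic" model by enumerating feature coalitions; the model and values are invented for illustration:

```python
from itertools import combinations
from math import factorial

def shapley_values(model, x, baseline):
    """Exact Shapley attributions for model(x) by enumerating all feature
    coalitions; features absent from a coalition take their baseline value."""
    n = len(x)
    def f(subset):
        z = [x[i] if i in subset else baseline[i] for i in range(n)]
        return model(z)
    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for k in range(n):
            for s in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (f(set(s) | {i}) - f(set(s)))
        phi.append(total)
    return phi

# Toy model: a weighted sum of three sensor deviations.
model = lambda z: 2.0 * z[0] + 1.0 * z[1] - 0.5 * z[2]
phi = shapley_values(model, x=[1.0, 2.0, 4.0], baseline=[0.0, 0.0, 0.0])
# For a linear model each attribution is w_i * (x_i - baseline_i),
# so phi is approximately [2.0, 2.0, -2.0] and sums to the prediction gap.
```

The exact enumeration is exponential in the number of features; practical SHAP implementations rely on model-specific approximations, which is why libraries pair it with tree models such as LightGBM.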

Development of an AI-based remaining trip time prediction system for nuclear power plants

  • Sang Won Oh;Ji Hun Park;Hye Seon Jo;Man Gyun Na
    • Nuclear Engineering and Technology
    • /
    • v.56 no.8
    • /
    • pp.3167-3179
    • /
    • 2024
  • In abnormal states of nuclear power plants (NPPs), operators undertake mitigation actions to restore a normal state and prevent reactor trips. However, in abnormal states, the NPP condition fluctuates rapidly, which can lead to human error. If human error occurs, the condition of an NPP can deteriorate, leading to reactor trips. Sudden shutdowns, such as reactor trips, can result in the failure of numerous NPP facilities and economic losses. This study develops a remaining trip time (RTT) prediction system as part of an operator support system to reduce possible human errors and improve the safety of NPPs. The RTT prediction system consists of an algorithm that utilizes artificial intelligence (AI) and explainable AI (XAI) methods, such as autoencoders, light gradient-boosting machines, and Shapley additive explanations. AI methods provide diagnostic information about the abnormal states that occur and predict the remaining time until a reactor trip occurs. The XAI method improves the reliability of AI by providing a rationale for RTT prediction results and information on the main variables of the status of NPPs. The RTT prediction system includes an interface that can effectively provide the results of the system.

Damage Detection and Damage Quantification of Temporary works Equipment based on Explainable Artificial Intelligence (XAI)

  • Cheolhee Lee;Taehoe Koo;Namwook Park;Nakhoon Lim
    • Journal of Internet Computing and Services
    • /
    • v.25 no.2
    • /
    • pp.11-19
    • /
    • 2024
  • This paper presents a technology for detecting damage to temporary works equipment used at construction sites with explainable artificial intelligence (XAI). Temporary works equipment is mostly made of steel or aluminum and, owing to the characteristics of these materials, is reused several times. However, because regulations restricting such reuse are not strict, the use of low-quality or degraded temporary works equipment sometimes causes accidents at construction sites. Safety rules such as government laws, standards, and regulations for the quality control of temporary works equipment have not yet been established, and inspection results often differ according to the inspector's level of training. To overcome these limitations, a method based on AI and image-processing technology was developed. XAI technology was applied so that inspectors can make more accurate decisions from the image-analysis damage-detection results produced by the AI model developed for analyzing temporary works equipment. In the experiments, temporary works equipment was photographed with a 4K camera, the artificial intelligence model was trained with 610 labeled data, and accuracy was tested by analyzing recorded images of temporary works equipment. The damage-detection accuracy of the XAI model was 95.0% on the training dataset, 92.0% on the validation dataset, and 90.0% on the test dataset, demonstrating the reliability of the developed artificial intelligence. The experiments verified the usability of explainable artificial intelligence for detecting damage in temporary works equipment. However, to reach the level of commercial software, the XAI model needs further training on real datasets, and its damage-detection ability must be maintained or improved when real data are applied.

Research on Mining Technology for Explainable Decision Making (설명가능한 의사결정을 위한 마이닝 기술)

  • Kyungyong Chung
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.24 no.4
    • /
    • pp.186-191
    • /
    • 2023
  • Data processing techniques play a critical role in decision-making, including the handling of missing and outlier data and prediction and recommendation models. This requires a clear explanation of the validity, reliability, and accuracy of all processes and results. In addition, data problems must be solved through explainable models using decision trees, inference, and so on, and model lightweighting must proceed by considering various types of learning. The multi-layer mining classification method that applies the sixth principle discovers multidimensional relationships between variables and attributes that occur frequently in transactions after data preprocessing. This study explains how to discover significant relationships using mining on transactions and how to model the data through regression analysis. It develops scalable models and logistic regression models and proposes mining techniques that generate class labels through data cleansing, relevance analysis, data transformation, and data augmentation for explainable decision-making.

Corporate Bankruptcy Prediction Model using Explainable AI-based Feature Selection (설명가능 AI 기반의 변수선정을 이용한 기업부실예측모형)

  • Gundoo Moon;Kyoung-jae Kim
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.2
    • /
    • pp.241-265
    • /
    • 2023
  • A corporate insolvency prediction model serves as a vital tool for objectively monitoring the financial condition of companies. It enables timely warnings, facilitates responsive actions, and supports the formulation of effective management strategies to mitigate bankruptcy risks and enhance performance. Investors and financial institutions utilize default prediction models to minimize financial losses. As interest in utilizing artificial intelligence (AI) technology for corporate insolvency prediction grows, extensive research has been conducted in this domain. However, there is an increasing demand for explainable AI models in corporate insolvency prediction, emphasizing interpretability and reliability. The SHAP (SHapley Additive exPlanations) technique has gained significant popularity and has demonstrated strong performance in various applications. Nonetheless, it has limitations such as computational cost, processing time, and scalability concerns that grow with the number of variables. This study introduces a novel approach to variable selection that reduces the number of variables by averaging SHAP values from bootstrapped data subsets instead of using the entire dataset. This technique aims to improve computational efficiency while maintaining excellent predictive performance. To obtain classification results, we train random forest, XGBoost, and C5.0 models using carefully selected variables with high interpretability. The classification accuracy of the ensemble model, generated through soft voting for high-performance model design, is compared with that of the individual models. The study leverages data from 1,698 Korean light industrial companies and employs bootstrapping to create distinct data groups. Logistic Regression is employed to calculate SHAP values for each data group, and their averages are computed to derive the final SHAP values. The proposed model enhances interpretability and aims to achieve superior predictive performance.
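The paper's core idea of averaging per-subset SHAP values over bootstrap resamples can be outlined in stdlib Python. In the sketch below, the per-subset SHAP computation is abstracted behind a callable, and a simple stand-in attribution (absolute class-mean difference) is used in its place; the data are synthetic and all names are illustrative:

```python
import random
from statistics import mean

def bootstrap_feature_scores(X, y, attribution, n_boot=20, seed=0):
    """Average per-feature attribution scores over bootstrap resamples,
    mimicking the idea of averaging SHAP values computed on data subsets
    instead of on the full dataset."""
    rng = random.Random(seed)
    n, d = len(X), len(X[0])
    scores = [[] for _ in range(d)]
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]   # sample with replacement
        yb = [y[i] for i in idx]
        if len(set(yb)) < 2:                         # skip one-class resamples
            continue
        Xb = [X[i] for i in idx]
        for j, s in enumerate(attribution(Xb, yb)):
            scores[j].append(s)
    return [mean(s) for s in scores]

def select_top_k(scores, k):
    """Indices of the k features with the highest averaged scores."""
    return sorted(range(len(scores)), key=lambda j: -scores[j])[:k]

# Stand-in attribution: absolute class-mean difference per feature
# (a real pipeline would return mean |SHAP| from a fitted model here).
def mean_diff(X, y):
    pos = [x for x, t in zip(X, y) if t == 1]
    neg = [x for x, t in zip(X, y) if t == 0]
    return [abs(mean(x[j] for x in pos) - mean(x[j] for x in neg))
            for j in range(len(X[0]))]

X = [[1.0, 0.0], [0.9, 0.1], [0.0, 0.05], [0.1, 0.0]]
y = [1, 1, 0, 0]
scores = bootstrap_feature_scores(X, y, mean_diff)
# feature 0 separates the classes, so select_top_k(scores, 1) picks index 0
```

Averaging over resampled subsets rather than one full-data pass is what the paper relies on to cut SHAP's computational cost while keeping the ranking of variables stable.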

Study on use of Explainable Artificial Intelligence in Credit Rating (신용평가에서 설명가능 인공지능의 활용에 관한 연구)

  • Young-In Yoon;Seong W. Kim;Hye-Young Jung
    • The Journal of the Convergence on Culture Technology
    • /
    • v.10 no.4
    • /
    • pp.751-756
    • /
    • 2024
  • The accuracy of a model and the explainability of its results are important factors that should be considered simultaneously. Recently, applications of explainable artificial intelligence have been increasing, and it is especially widely applied in the financial field, where interpretation of results is important. In this paper, we compare the performance of various machine learning techniques on open-API credit evaluation data. In addition, existing financial logic is verified through the explainable artificial intelligence techniques SHAP and LIME. Accordingly, this study is expected to demonstrate the applicability of machine learning in the financial market.
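LIME, mentioned above alongside SHAP, explains a single prediction by fitting a simple surrogate model to perturbed samples weighted by their proximity to the instance. A deliberately crude one-feature sketch of that idea (not the actual LIME library, whose API differs) is:

```python
import math
import random

def lime_1d(model, x0, width=0.5, n=200, seed=0):
    """Crude LIME-style local surrogate for a single feature: sample
    perturbations around x0, weight them by proximity, and fit a
    weighted line y ~ a + b*x; the slope b is the local attribution."""
    rng = random.Random(seed)
    xs = [x0 + rng.gauss(0, width) for _ in range(n)]
    ys = [model(x) for x in xs]
    ws = [math.exp(-((x - x0) ** 2) / (2 * width ** 2)) for x in xs]
    # closed-form weighted least squares for the slope
    sw = sum(ws)
    sx = sum(w * x for w, x in zip(ws, xs))
    sy = sum(w * y for w, y in zip(ws, ys))
    sxx = sum(w * x * x for w, x in zip(ws, xs))
    sxy = sum(w * x * y for w, x, y in zip(ws, xs, ys))
    return (sw * sxy - sx * sy) / (sw * sxx - sx * sx)

# Around x0 = 2 the nonlinear model x**2 behaves locally like a line of
# slope ~4, which is what the surrogate's slope recovers.
slope = lime_1d(lambda x: x * x, x0=2.0)
```

In a credit-rating setting, the sign and magnitude of such local slopes are what analysts check against existing financial logic (e.g., whether higher income locally lowers predicted default risk).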

Discovering AI-enabled convergences based on BERT and topic network

  • Ji Min Kim;Seo Yeon Lee;Won Sang Lee
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.3
    • /
    • pp.1022-1034
    • /
    • 2023
  • Various aspects of artificial intelligence (AI) have become of significant interest to academia and industry in recent times. To satisfy these interests, it is necessary to comprehensively investigate trends in AI-related changes across diverse areas. In this study, we identified and predicted emerging convergences using AI-associated research abstracts collected from the SCOPUS database. A Bidirectional Encoder Representations from Transformers (BERT)-based topic discovery technique was deployed to identify emerging topics related to AI. The topics discovered concern edge computing, biomedical algorithms, predictive defect maintenance, medical applications, fake news detection with blockchain, explainable AI, and COVID-19 applications. Their convergences were further analyzed based on the shortest path between topics to predict emerging convergences. Our findings indicated emerging AI convergences toward healthcare, manufacturing, legal applications, and marketing. These findings are expected to have policy implications for facilitating convergences in diverse industries, and the study could contribute to the exploitation and adoption of AI-enabled convergences from a practical perspective.
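The shortest-path analysis between topics can be illustrated with a plain breadth-first search over a small topic network. The topic names below echo the abstract, but the edges are entirely invented for illustration:

```python
from collections import deque

def shortest_path(graph, src, dst):
    """Breadth-first search returning one shortest path between two
    topics in an undirected topic network (adjacency-list dict)."""
    prev = {src: None}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:                      # reconstruct path backwards
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nb in graph.get(node, []):
            if nb not in prev:
                prev[nb] = node
                queue.append(nb)
    return None                              # no convergence path found

# Hypothetical topic network distilled from co-occurring terms.
topics = {
    "explainable AI": ["medical applications", "fake news detection"],
    "medical applications": ["explainable AI", "COVID-19 applications"],
    "fake news detection": ["explainable AI"],
    "COVID-19 applications": ["medical applications"],
}
path = shortest_path(topics, "explainable AI", "COVID-19 applications")
# path -> ["explainable AI", "medical applications", "COVID-19 applications"]
```

Short paths between topics from different domains are what such a study would read as candidate convergences.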

Autonomous Factory: Future Shape Realized by Manufacturing + AI (제조+AI로 실현되는 미래상: 자율공장)

  • Son, J.Y.;Kim, H.;Lee, E.S.;Park, J.H.
    • Electronics and Telecommunications Trends
    • /
    • v.36 no.1
    • /
    • pp.64-70
    • /
    • 2021
  • The future society will be changed through an artificial intelligence (AI) based intelligent revolution. To prepare for the future and strengthen industrial competitiveness, countries around the world are implementing various policies and strategies to utilize AI in the manufacturing industry, which is the basis of the national economy. Manufacturing AI technology should ensure accuracy and reliability in industry and should be explainable, unlike general-purpose AI that targets human intelligence. This paper presents the future shape of the "autonomous factory" through the convergence of manufacturing and AI. In addition, it examines technological issues and research status to realize the autonomous factory during the stages of recognition, planning, execution, and control of manufacturing work.

Evaluation of Data-based Expansion Joint-gap for Digital Maintenance (디지털 유지관리를 위한 데이터 기반 교량 신축이음 유간 평가)

  • Jongho Park;Yooseong Shin
    • Journal of the Korea institute for structural maintenance and inspection
    • /
    • v.28 no.2
    • /
    • pp.1-8
    • /
    • 2024
  • An expansion joint is installed to accommodate the expansion of the superstructure and must ensure a sufficient gap throughout its service life. The detailed guidelines for bridge safety inspection and precise safety diagnosis specify damage due to insufficient or excessive gap, but standards for determining abnormal behavior of the superstructure are lacking. In this study, data-based maintenance was proposed based on continuous monitoring of the gap data of the same expansion joints. A total of 2,756 data points were collected from 689 expansion joints, taking seasonal effects into account. We developed a method to evaluate changes in the expansion joint gap that can analyze thermal movement from four or more data points at the same location, classified the factors that affect superstructure behavior, and analyzed the influence of each factor through deep learning and explainable artificial intelligence (AI). Abnormal behavior of the superstructure was classified into narrowing and functional failure through the expansion joint-gap evaluation graph. The influence-factor analysis using deep learning and explainable AI is considered reliable because its results can be explained by the existing expansion-gap calculation formula and bridge design.
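The expansion-gap calculation formula referenced above is essentially linear thermal movement (movement proportional to α·L·ΔT, with α the thermal expansion coefficient and L the expansion length). A stdlib sketch of fitting monitored gap-versus-temperature readings and comparing the fitted rate against the design rate could look like this; the span length, coefficient, readings, and 10% threshold are all hypothetical:

```python
def fit_line(temps, gaps):
    """Ordinary least squares fit gap ~ a + b*T; for a healthy joint the
    slope b should match the design thermal-movement rate."""
    n = len(temps)
    mt = sum(temps) / n
    mg = sum(gaps) / n
    b = sum((t - mt) * (g - mg) for t, g in zip(temps, gaps)) / \
        sum((t - mt) ** 2 for t in temps)
    return mg - b * mt, b

def expected_slope(alpha, length_mm):
    """Design rate dGap/dT = -alpha * L: as the deck expands with
    temperature, the joint gap closes."""
    return -alpha * length_mm

# Hypothetical joint: 40 m steel span, alpha = 1.2e-5 /degC, so the gap
# should shrink by about 0.48 mm per degC. Four seasonal readings:
temps = [-5.0, 5.0, 15.0, 25.0]          # degC
gaps = [52.4, 47.6, 42.8, 38.0]          # mm
a, b = fit_line(temps, gaps)
design = expected_slope(1.2e-5, 40_000)  # -0.48 mm/degC
abnormal = abs(b - design) > 0.1 * abs(design)
```

Requiring four or more readings per location, as the study does, is what makes such a slope fit meaningful; a fitted rate well off the design rate would be flagged as narrowing or functional failure.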