• Title/Summary/Keyword: Explainable artificial intelligence (XAI)

Development of an AI-based remaining trip time prediction system for nuclear power plants

  • Sang Won Oh;Ji Hun Park;Hye Seon Jo;Man Gyun Na
    • Nuclear Engineering and Technology
    • /
    • v.56 no.8
    • /
    • pp.3167-3179
    • /
    • 2024
  • In abnormal states of nuclear power plants (NPPs), operators undertake mitigation actions to restore a normal state and prevent reactor trips. However, in abnormal states, the NPP condition fluctuates rapidly, which can lead to human error. If human error occurs, the condition of an NPP can deteriorate further, leading to a reactor trip. Sudden shutdowns such as reactor trips can cause failures in numerous NPP facilities and economic losses. This study develops a remaining trip time (RTT) prediction system as part of an operator support system to reduce possible human errors and improve the safety of NPPs. The RTT prediction system consists of an algorithm that utilizes artificial intelligence (AI) and explainable AI (XAI) methods, namely autoencoders, light gradient-boosting machines, and Shapley additive explanations. The AI methods provide diagnostic information about the abnormal states that occur and predict the remaining time until a reactor trip. The XAI method improves the reliability of the AI by providing a rationale for the RTT prediction results and information on the main variables describing the status of the NPP. The RTT prediction system also includes an interface that effectively presents the system's results.
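The Shapley additive explanations mentioned in this abstract attribute a prediction to individual input variables. A minimal pure-Python sketch of exact Shapley values for a toy model follows; the model, features, and values are hypothetical illustrations, not taken from the paper:

```python
from itertools import combinations
from math import factorial

def shapley_values(model, baseline, x):
    """Exact Shapley values for a model over n features.

    model: callable taking a full feature vector.
    baseline: reference values used for 'absent' features.
    x: the instance being explained.
    """
    n = len(x)
    phi = [0.0] * n
    features = list(range(n))
    for i in features:
        others = [f for f in features if f != i]
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                # Shapley weight |S|! (n-|S|-1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_i = [x[j] if j in subset or j == i else baseline[j]
                          for j in features]
                without_i = [x[j] if j in subset else baseline[j]
                             for j in features]
                phi[i] += weight * (model(with_i) - model(without_i))
    return phi

# Toy stand-in for a trip-time model: weighted sum plus an interaction term.
def toy_model(v):
    return 3.0 * v[0] + 1.0 * v[1] + 2.0 * v[0] * v[2]

baseline = [0.0, 0.0, 0.0]
x = [1.0, 2.0, 1.0]
phi = shapley_values(toy_model, baseline, x)
# Local accuracy: attributions sum to model(x) - model(baseline).
print(phi, sum(phi))
```

Exhaustive enumeration is exponential in the number of features; libraries such as SHAP use model-specific approximations, but the additivity property shown here is the same.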

Fault diagnosis of linear transfer robot using XAI

  • Taekyung Kim;Arum Park
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.16 no.3
    • /
    • pp.121-138
    • /
    • 2024
  • Artificial intelligence is crucial to manufacturing productivity. Understanding the causes of production disruptions, especially in linear feed robot systems, is essential for efficient operations. These mechanical tools, essential for linear movements within systems, are prone to damage and degradation, especially in the LM guide, due to repetitive motions. We examine how explainable artificial intelligence (XAI) can diagnose linear rail clearance and ball screw clearance anomalies in wafer linear robots. XAI helps diagnose problems and explain anomalies, enriching management and operational strategies. By interpreting the reasons for anomaly detection through visualizations such as Class Activation Maps (CAMs) using techniques like Grad-CAM, FG-CAM, and FFT-CAM, and by comparing 1D-CNN with 2D-CNN models, we illustrate the potential of XAI to enhance diagnostic accuracy. The use of datasets from accelerometer and torque sensors in our experiments validates the high accuracy of the proposed method in binary and ternary classifications. This study exemplifies how XAI can elucidate deep learning models trained on industrial signals, offering a practical approach to understanding and applying AI in maintaining the integrity of critical components such as the LM guides in linear feed robots.
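The Grad-CAM visualization this abstract relies on reduces to a simple weighted sum once the convolutional feature maps and their gradients are available. A simplified 1D sketch in pure Python; the feature maps and gradients below are made-up placeholders, not outputs of the paper's models:

```python
def grad_cam_1d(feature_maps, gradients):
    """Simplified Grad-CAM for 1D signals.

    feature_maps: K channels, each a list of T activations from the last
                  convolutional layer.
    gradients: gradients of the class score w.r.t. each activation,
               same shape as feature_maps.
    Returns a length-T importance map (ReLU of the weighted channel sum).
    """
    T = len(feature_maps[0])
    # Channel weight alpha_k = global-average-pooled gradient.
    alphas = [sum(g) / T for g in gradients]
    cam = [0.0] * T
    for a, fmap in zip(alphas, feature_maps):
        for t in range(T):
            cam[t] += a * fmap[t]
    # ReLU keeps only regions that contribute positively to the class.
    return [max(0.0, c) for c in cam]

# Hypothetical activations/gradients for a 2-channel, 4-step signal.
fmaps = [[0.0, 1.0, 2.0, 0.0],
         [1.0, 0.0, 0.0, 1.0]]
grads = [[0.5, 0.5, 0.5, 0.5],
         [-1.0, -1.0, -1.0, -1.0]]
cam = grad_cam_1d(fmaps, grads)
print(cam)
```

In practice the feature maps and gradients come from a trained CNN via automatic differentiation; the sketch only shows the pooling-and-weighting step that produces the heatmap over the signal.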

Explainable Artificial Intelligence Applied in Deep Learning for Review Helpfulness Prediction (XAI 기법을 이용한 리뷰 유용성 예측 결과 설명에 관한 연구)

  • Dongyeop Ryu;Xinzhe Li;Jaekyeong Kim
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.2
    • /
    • pp.35-56
    • /
    • 2023
  • With the development of information and communication technology, numerous reviews are continuously posted on websites, causing information overload. As a result, users have difficulty exploring reviews for their decision-making. To solve this problem, many studies on review helpfulness prediction have been actively conducted to provide users with helpful and reliable reviews. Existing studies predict review helpfulness mainly based on the features included in the review; however, they cannot provide the reason why the predicted reviews are helpful. Therefore, this study proposes a methodology for applying eXplainable Artificial Intelligence (XAI) techniques to review helpfulness prediction to address this limitation. This study uses restaurant reviews collected from Yelp.com to compare the prediction performance of six models widely used in previous studies. Next, we propose an explainable review helpfulness prediction model by applying an XAI technique to the model with the best prediction performance. The proposed methodology can thus recommend helpful reviews in the user's purchasing decision-making process and provide an interpretation of why the predicted reviews are helpful.

Development of ensemble machine learning model considering the characteristics of input variables and the interpretation of model performance using explainable artificial intelligence (수질자료의 특성을 고려한 앙상블 머신러닝 모형 구축 및 설명가능한 인공지능을 이용한 모형결과 해석에 대한 연구)

  • Park, Jungsu
    • Journal of Korean Society of Water and Wastewater
    • /
    • v.36 no.4
    • /
    • pp.239-248
    • /
    • 2022
  • The prediction of algal blooms is an important field of study in algal bloom management, and chlorophyll-a concentration (Chl-a) is commonly used to represent the status of an algal bloom. In recent years, advanced machine learning algorithms have been increasingly used to predict algal blooms. In this study, XGBoost (XGB), an ensemble machine learning algorithm, was used to develop a model to predict Chl-a in a reservoir. Daily observations of water quality and climate data were used for training and testing the model. In the first step of the study, the input variables were clustered into two groups (low- and high-value groups) based on the observed values of water temperature (TEMP), total organic carbon concentration (TOC), total nitrogen concentration (TN), and total phosphorus concentration (TP). For each of the four water quality items, two XGB models were developed using only the data in each clustered group (Model 1). The results were compared to the predictions of an XGB model developed using the entire dataset before clustering (Model 2). Model performance was evaluated using three indices, including the root mean squared error-observation standard deviation ratio (RSR). Performance improved with Model 1 for TEMP, TN, and TP, whose RSR values were 0.503, 0.477, and 0.493, respectively, while the RSR of Model 2 was 0.521. On the other hand, Model 2 showed better performance than Model 1 for TOC, where the RSR was 0.532. Explainable artificial intelligence (XAI) is an ongoing field of research in machine learning. Shapley value analysis, a novel XAI algorithm, was also used for the quantitative interpretation of the performance of the XGB models developed in this study.
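The RSR index used above is simply the RMSE divided by the standard deviation of the observations, so values below 1 indicate the model beats the observed mean as a predictor. A small sketch with made-up numbers (not the paper's data):

```python
from math import sqrt

def rsr(observed, predicted):
    """RMSE-observations standard deviation ratio (RSR); lower is better."""
    n = len(observed)
    rmse = sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)
    mean_obs = sum(observed) / n
    sd_obs = sqrt(sum((o - mean_obs) ** 2 for o in observed) / n)
    return rmse / sd_obs

# Hypothetical Chl-a observations vs. model predictions.
obs = [2.0, 4.0, 6.0, 8.0]
pred = [2.5, 3.5, 6.5, 7.5]
value = rsr(obs, pred)
print(round(value, 3))
```

Because RSR normalizes RMSE by the spread of the observations, it allows the TEMP, TOC, TN, and TP models to be compared on one scale despite their different units.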

Performance improvement of artificial neural network based water quality prediction model using explainable artificial intelligence technology (설명가능한 인공지능 기술을 이용한 인공신경망 기반 수질예측 모델의 성능향상)

  • Lee, Won Jin;Lee, Eui Hoon
    • Journal of Korea Water Resources Association
    • /
    • v.56 no.11
    • /
    • pp.801-813
    • /
    • 2023
  • Recently, as research on Artificial Neural Networks (ANNs) has actively progressed, studies predicting the water quality of rivers using ANNs have been conducted. However, it is difficult to analyze the internal operation of an ANN because it is a black-box model. Although eXplainable Artificial Intelligence (XAI) can be used to analyze the computational process of an ANN, research using XAI technology in the field of water resources is insufficient. This study analyzed a Multi-Layer Perceptron (MLP) for predicting Water Temperature (WT), Dissolved Oxygen (DO), hydrogen ion concentration (pH), and Chlorophyll-a (Chl-a) at the Dasan water quality observatory on the Nakdong River, using Layer-wise Relevance Propagation (LRP) among XAI technologies. The MLP trained on water quality data was analyzed using LRP to select the optimal input data for predicting water quality, and the prediction results of the MLP trained using the optimal input data were analyzed. As a result of selecting the optimal input data using LRP, the prediction accuracy was highest for the MLP trained on the input data excluding daily precipitation in the surrounding area. In the analysis of the MLP's DO prediction results, pH and DO had a large influence at the highest point, and the effect of WT was large at the lowest point.
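LRP, as used in this paper, redistributes a network's output backward layer by layer so that each input receives a relevance score. A minimal pure-Python sketch of the epsilon rule for a single linear layer; the weights, inputs, and relevances are hypothetical, not from the paper's MLP:

```python
def lrp_epsilon(inputs, weights, biases, relevance_out, eps=1e-6):
    """LRP epsilon rule for one linear layer.

    inputs: activations a_i entering the layer.
    weights: weights[j][i] maps input i to output neuron j.
    relevance_out: relevance R_j assigned to each output neuron.
    Returns R_i = sum_j a_i * w_ji / (z_j + eps * sign(z_j)) * R_j.
    """
    n_in = len(inputs)
    # Pre-activations z_j of the layer.
    z = [sum(w[i] * inputs[i] for i in range(n_in)) + b
         for w, b in zip(weights, biases)]
    rel_in = [0.0] * n_in
    for w, zj, rj in zip(weights, z, relevance_out):
        denom = zj + (eps if zj >= 0 else -eps)  # epsilon stabilizer
        for i in range(n_in):
            rel_in[i] += inputs[i] * w[i] / denom * rj
    return rel_in

# Hypothetical layer: 3 inputs -> 1 output (e.g. a DO prediction), zero bias.
a = [1.0, 2.0, 0.5]
W = [[0.2, -0.1, 0.4]]
b = [0.0]
R_out = [1.0]  # relevance placed on the output neuron
R_in = lrp_epsilon(a, W, b, R_out)
# With zero bias, input relevances sum (almost) to the output relevance.
print([round(r, 3) for r in R_in], round(sum(R_in), 3))
```

Applying this rule from the output layer back to the inputs yields per-variable relevance scores, which is how inputs such as daily precipitation can be ranked and pruned.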

A Study on the Remaining Useful Life Prediction Performance Variation based on Identification and Selection by using SHAP (SHAP를 활용한 중요변수 파악 및 선택에 따른 잔여유효수명 예측 성능 변동에 대한 연구)

  • Yoon, Yeon Ah;Lee, Seung Hoon;Kim, Yong Soo
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.44 no.4
    • /
    • pp.1-11
    • /
    • 2021
  • Recently, the importance of preventive maintenance has been growing, since failures in complex systems can now be automatically detected thanks to the development of artificial intelligence techniques and sensor technology. Therefore, prognostics and health management (PHM) is being actively studied, and predicting the remaining useful life (RUL) of a system is one of its most important tasks. A great deal of research has been conducted to predict the RUL. Deep learning models have been developed to improve prediction performance, but studies identifying the importance of features have rarely been carried out. While improving the predictive accuracy of RUL is important, it is also meaningful to extract and interpret the features that affect failures. In this paper, a total of six popular deep learning models were employed to predict the RUL, and the important variables for each model were identified through SHAP (Shapley Additive exPlanations), one of the explainable artificial intelligence (XAI) techniques. Moreover, the fluctuations and trends of prediction performance according to the number of variables were identified. This paper demonstrates the explainability of various deep learning models and the applicability of XAI. Through the proposed method, SHAP is also expected to be useful as a feature selection method.
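Using SHAP for feature selection, as this paper proposes, amounts to ranking variables by their mean absolute attribution across samples and keeping the top k. A minimal sketch with made-up attribution values (the sensor names and numbers are hypothetical):

```python
def select_top_k_features(shap_matrix, feature_names, k):
    """Rank features by mean |SHAP value| over samples; keep the top k.

    shap_matrix: one row per sample, one attribution per feature.
    """
    n_features = len(feature_names)
    importance = [
        sum(abs(row[i]) for row in shap_matrix) / len(shap_matrix)
        for i in range(n_features)
    ]
    ranked = sorted(zip(feature_names, importance), key=lambda p: -p[1])
    return [name for name, _ in ranked[:k]]

# Hypothetical per-sample SHAP values for three sensor channels.
shap_matrix = [[0.5, -0.1, 0.2],
               [-0.4, 0.0, 0.3]]
names = ["sensor_1", "sensor_2", "sensor_3"]
selected = select_top_k_features(shap_matrix, names, k=2)
print(selected)
```

Retraining the RUL model on only the selected variables, and sweeping k, is what produces the performance-versus-number-of-variables trends the abstract describes.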

Application of XAI Models to Determine Employment Factors in the Software Field : with focus on University and Vocational College Graduates (소프트웨어 분야 취업 결정 요인에 대한 XAI 모델 적용 연구 : 일반대학교와 전문대학 졸업자를 중심으로)

  • Kwon Joonhee;Kim Sungrim
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.20 no.1
    • /
    • pp.31-45
    • /
    • 2024
  • The purpose of this study is to explain employment factors in the software field. For this, the Graduates Occupational Mobility Survey by the Korea Employment Information Service is used. This paper proposes employment models for the software field using machine learning and then explains the employment factors of the models using explainable artificial intelligence. The models cover both university graduates and vocational college graduates. Our work explains and interprets both a black-box model and a glass-box model: SHAP and EBM explanations are used to interpret the black-box and glass-box models, respectively. The results show that the positive employment factors in the models are major, vocational education and training, the semester in which employment preparation began, and internship experience. This study provides a job preparation guide for university and vocational college students who want to work in the software field.
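The glass-box model mentioned above (an Explainable Boosting Machine) is additive: the prediction is an intercept plus one learned shape function per feature, so each feature's contribution can be read off directly rather than estimated post hoc. A toy illustration of that additive structure with hand-written shape functions; the feature names and values are hypothetical, not the paper's fitted model:

```python
def ebm_style_predict(x, intercept, shape_functions):
    """Additive glass-box prediction: intercept + sum_i f_i(x_i).

    Returns the score and the per-feature contributions that explain it.
    """
    contributions = [f(v) for f, v in zip(shape_functions, x)]
    return intercept + sum(contributions), contributions

# Hypothetical shape functions for (major_match, intern_years).
shapes = [
    lambda major_match: 0.8 if major_match else -0.3,
    lambda intern_years: 0.4 * min(intern_years, 2),  # effect saturates at 2 years
]
score, contribs = ebm_style_predict([1, 3], intercept=0.1,
                                    shape_functions=shapes)
print(round(score, 2), [round(c, 2) for c in contribs])
```

Because the contributions are the model, no separate explainer is needed for a glass-box model; SHAP is reserved for the black-box model, whose internals cannot be decomposed this way.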

Application of Explainable Artificial Intelligence for Predicting Hardness of AlSi10Mg Alloy Manufactured by Laser Powder Bed Fusion (레이저 분말 베드 용융법으로 제조된 AlSi10Mg 합금의 경도 예측을 위한 설명 가능한 인공지능 활용)

  • Junhyub Jeon;Namhyuk Seo;Min-Su Kim;Seung Bae Son;Jae-Gil Jung;Seok-Jae Lee
    • Journal of Powder Materials
    • /
    • v.30 no.3
    • /
    • pp.210-216
    • /
    • 2023
  • In this study, machine learning models are proposed to predict the Vickers hardness of AlSi10Mg alloys fabricated by laser powder bed fusion (LPBF). A total of 113 usable datasets were collected from the literature. The hyperparameters of the machine-learning models were tuned to select an accurate predictive model. The random forest regression (RFR) model showed the best performance compared to support vector regression, artificial neural networks, and k-nearest neighbors. The variable importance and prediction mechanisms of the RFR were discussed using Shapley additive explanations (SHAP). Aging time had the greatest influence on the Vickers hardness, followed by solution time, solution temperature, layer thickness, scan speed, power, aging temperature, average particle size, and hatching distance. The detailed prediction mechanisms of the RFR are analyzed using SHAP dependence plots.

Study on Heat Energy Consumption Forecast and Efficiency Mediated Explainable Artificial Intelligence (XAI) (설명 가능한 인공지능 매개 에너지 수요 예측 및 효율성 연구)

  • Shin, Jihye;Kim, Yunjae;Lee, Sujin;Moon, Hyeonjoon
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2022.06a
    • /
    • pp.1218-1221
    • /
    • 2022
  • Recently, in response to worldwide demands for carbon neutrality, research on energy efficiency aimed at reducing energy consumption has been expanding. Energy efficiency is even more urgent in the broadcasting and media fields. Accordingly, to build an efficient energy system, this study selects a demand prediction model based on heating energy time-series data and introduces an explainable artificial intelligence model, proposing a framework that identifies the factors influencing the demand prediction.

A Proposal of Sensor-based Time Series Classification Model using Explainable Convolutional Neural Network

  • Jang, Youngjun;Kim, Jiho;Lee, Hongchul
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.5
    • /
    • pp.55-67
    • /
    • 2022
  • Sensor data can provide fault diagnosis for equipment; however, a cause analysis of the diagnosed faults is often not provided. In this study, we propose an explainable convolutional neural network framework for sensor-based time series classification. We used a sensor-based time series dataset acquired from vehicles equipped with sensors, the Wafer dataset acquired from a manufacturing process, and the Cycle Signal dataset acquired from real-world mechanical equipment; scaling and jittering were used as data augmentation methods to train our deep learning models. Our proposed classification models are convolutional neural network based models, FCN, 1D-CNN, and ResNet, which we evaluated against each other. Our experimental results show that ResNet provides promising results for time series classification, with accuracy and F1 score reaching 95%, a 3% improvement over the previous study. Furthermore, we apply the XAI methods Class Activation Map and layer visualization to interpret the experimental results. These XAI methods can visualize the time series intervals that contain the important factors for sensor data classification.
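The Class Activation Map used in this paper weights the final convolutional feature maps by the fully connected layer's class weights, highlighting which time steps drove the classification. A minimal pure-Python sketch of that computation; the feature maps and weights are hypothetical placeholders, not the paper's trained network:

```python
def class_activation_map(feature_maps, fc_weights, class_idx):
    """CAM for a time series: M_c(t) = sum_k w_k^c * A_k(t).

    feature_maps: K channels of length T (outputs of the last conv layer).
    fc_weights: fc_weights[c][k], the dense-layer weight from the
                global-average-pooled channel k to class c.
    """
    T = len(feature_maps[0])
    w = fc_weights[class_idx]
    return [sum(w[k] * feature_maps[k][t] for k in range(len(feature_maps)))
            for t in range(T)]

# Hypothetical 2-channel feature maps over 4 time steps, 2 classes.
fmaps = [[0.0, 2.0, 1.0, 0.0],
         [1.0, 0.0, 0.0, 2.0]]
fc = [[1.0, 0.0],   # class 0 attends to channel 0
      [0.0, 1.0]]   # class 1 attends to channel 1
cam_fault = class_activation_map(fmaps, fc, class_idx=0)
print(cam_fault)
```

The resulting map is over the downsampled feature-map length; upsampling it back to the input length and overlaying it on the raw signal gives the interval visualization the abstract describes.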