• Title/Summary/Keyword: surrogate machine learning


Explainable Artificial Intelligence (XAI) Surrogate Models for Chemical Process Design and Analysis (화학 공정 설계 및 분석을 위한 설명 가능한 인공지능 대안 모델)

  • Yuna Ko;Jonggeol Na
    • Korean Chemical Engineering Research, v.61 no.4, pp.542-549, 2023
  • With the growing interest in surrogate modeling, there has been continuous research aimed at simulating nonlinear chemical processes using data-driven machine learning. However, the opaque nature of machine learning models, which limits their interpretability, poses a challenge for their practical application in industry. Therefore, this study aims to analyze chemical processes using Explainable Artificial Intelligence (XAI), a concept that improves interpretability while preserving model accuracy. Whereas conventional sensitivity analysis of chemical processes has been limited to calculating and ranking the sensitivity indices of variables, we propose a methodology that uses XAI not only to perform global and local sensitivity analysis but also to examine the interactions among variables and gain physical insights from the data. For the ammonia synthesis process, the target process of the case study, we set the temperature of the preheater feeding the first reactor and the split ratios of the cold shot to the three reactors as process variables. By integrating MATLAB and Aspen Plus, we obtained data on ammonia production and the maximum temperatures of the three reactors while systematically varying the process variables. We then trained tree-based models and performed sensitivity analysis on the most accurate model using the SHAP technique, one of the XAI methods. The global sensitivity analysis showed that the preheater temperature had the greatest effect, and the local sensitivity analysis provided insights for defining the ranges of process variables to improve productivity and prevent overheating. By constructing surrogate models for chemical processes and using XAI for sensitivity analysis, this work provides both quantitative and qualitative feedback for process optimization. (An illustrative code sketch follows.)
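A minimal sketch of the workflow described above, under assumed column names (T_preheater, split_1 to split_3, NH3_production) and with XGBoost standing in for the unspecified tree-based model: train a tree surrogate on the MATLAB/Aspen Plus simulation data, then run SHAP for global and local sensitivity analysis.

```python
# Sketch only: SHAP sensitivity analysis of a tree-based surrogate trained on
# process-simulation data. Column names and model choice are assumptions.
import pandas as pd
import xgboost as xgb
import shap

# Hypothetical export of the MATLAB/Aspen Plus sampling loop.
df = pd.read_csv("ammonia_doe.csv")
X = df[["T_preheater", "split_1", "split_2", "split_3"]]  # process variables
y = df["NH3_production"]                                  # target (could also be a reactor T_max)

# Tree-based surrogate (the paper compares several tree models; XGBoost stands in here).
model = xgb.XGBRegressor(n_estimators=400, max_depth=4, learning_rate=0.05)
model.fit(X, y)

# Global sensitivity: mean |SHAP| ranking; local sensitivity: per-sample SHAP values.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values, X)                          # global ranking of process variables
shap.dependence_plot("T_preheater", shap_values, X)        # local effects and interactions
```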

A SE Approach for Real-Time NPP Response Prediction under CEA Withdrawal Accident Conditions

  • Felix Isuwa, Wapachi;Aya, Diab
    • Journal of the Korean Society of Systems Engineering, v.18 no.2, pp.75-93, 2022
  • A machine learning (ML) data-driven meta-model is proposed as a surrogate model to reduce the excessive computational cost of the physics-based model and facilitate real-time prediction of a nuclear power plant's transient response. To forecast the transient response, three ML meta-models based on recurrent neural networks (RNNs) are developed: Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), and a sequential combination of a Convolutional Neural Network (CNN) and LSTM. The chosen accident scenario is a control element assembly withdrawal at power concurrent with a Loss Of Offsite Power (LOOP). The transient response was obtained using the best-estimate thermal-hydraulics code MARS-KS and cross-validated against the Design Control Document (DCD). The DAKOTA software is loosely coupled with the MARS-KS code via a Python interface to perform the Best Estimate Plus Uncertainty (BEPU) analysis and generate a time-series database of the system response to train, test, and validate the ML meta-models. Key uncertain parameters identified as required by the CASU methodology were propagated using non-parametric Monte Carlo (MC) random propagation and Latin Hypercube Sampling until a statistically significant database (181 samples), as required by Wilks' fifth-order formula at the 95% probability and 95% confidence level, was achieved. The three ML RNN models were built and optimized with the help of the Talos tool and demonstrated excellent performance in forecasting the most probable NPP transient response. This research was guided by the Systems Engineering (SE) approach for systematic and efficient planning and execution. (A sketch of one such meta-model follows.)
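A minimal sketch of one of the three meta-models described above (the CNN+LSTM variant), with illustrative input dimensions and untuned layer sizes; the actual architectures were optimized with Talos and trained on the MARS-KS/DAKOTA time-series database.

```python
# Sketch only: a CNN+LSTM sequence model for transient-response forecasting.
# Shapes and hyperparameters are placeholders, not the tuned Talos values.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_steps, n_features, n_outputs = 200, 12, 4   # assumed time steps, uncertain inputs, responses

model = keras.Sequential([
    layers.Conv1D(32, kernel_size=5, activation="relu",
                  input_shape=(n_steps, n_features)),
    layers.MaxPooling1D(2),
    layers.LSTM(64, return_sequences=True),
    layers.LSTM(32),
    layers.Dense(n_outputs),
])
model.compile(optimizer="adam", loss="mse")

# X: (samples, time steps, parameters) from the BEPU runs; y: quantities of interest.
X = np.random.rand(181, n_steps, n_features)   # placeholder for the 181-sample database
y = np.random.rand(181, n_outputs)
model.fit(X, y, epochs=10, validation_split=0.2, verbose=0)
```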

Effect of input variable characteristics on the performance of an ensemble machine learning model for algal bloom prediction (앙상블 머신러닝 모형을 이용한 하천 녹조발생 예측모형의 입력변수 특성에 따른 성능 영향)

  • Kang, Byeong-Koo;Park, Jungsu
    • Journal of Korean Society of Water and Wastewater, v.35 no.6, pp.417-424, 2021
  • Algal bloom is an ongoing issue in the management of freshwater systems for drinking water supply, and the chlorophyll-a concentration is commonly used to represent the status of algal bloom. Thus, prediction of the chlorophyll-a concentration is essential for proper water quality management. However, the chlorophyll-a concentration is affected by various water quality and environmental factors, so predicting it is not an easy task. In recent years, many advanced machine learning algorithms have increasingly been used to develop surrogate models that predict the chlorophyll-a concentration in freshwater systems such as rivers or reservoirs. This study used a light gradient boosting machine (LightGBM), a gradient boosting decision tree algorithm, to develop an ensemble machine learning model for predicting chlorophyll-a concentration. Field water quality data observed at Daecheong Lake, obtained from the real-time water information system in Korea, were used to develop the model. The data include temperature, pH, electric conductivity, dissolved oxygen, total organic carbon, total nitrogen, total phosphorus, and chlorophyll-a. First, a LightGBM model was developed to predict the chlorophyll-a concentration using the other seven items as independent input variables. Second, time-lagged values of all the input variables were added as input variables to understand the effect of the time lag on model performance; the time lag (i) ranges from 1 to 50 days. Model performance was evaluated using three indices: the root mean squared error to observation standard deviation ratio (RSR), the Nash-Sutcliffe coefficient of efficiency (NSE), and the mean absolute error (MAE). The model showed the best performance when a dataset with a one-day time lag (i=1) was added, with RSR, NSE, and MAE of 0.359, 0.871, and 1.510, respectively. Improvement of model performance was observed when datasets with time lags of up to about 15 days (i=15) were added. (A sketch of the lagged-feature workflow follows.)
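A minimal sketch of the lagged-input LightGBM workflow, with assumed file and column names and an in-sample evaluation kept short for illustration; RSR, NSE, and MAE follow their standard definitions.

```python
# Sketch only: LightGBM surrogate for chlorophyll-a with time-lagged input variables.
# File and column names are assumptions about the Daecheong Lake dataset.
import numpy as np
import pandas as pd
import lightgbm as lgb
from sklearn.metrics import mean_absolute_error

df = pd.read_csv("daecheong_wq.csv", parse_dates=["date"]).set_index("date")
features = ["temp", "pH", "EC", "DO", "TOC", "TN", "TP"]
target = "chl_a"

def add_lags(data, cols, lag):
    """Append lag-i copies of the input variables (i = 1 means a one-day lag)."""
    lagged = data[cols].shift(lag).add_suffix(f"_lag{lag}")
    return pd.concat([data, lagged], axis=1).dropna()

lagged = add_lags(df, features, lag=1)        # best reported case: i = 1
X, y = lagged.drop(columns=[target]), lagged[target]

model = lgb.LGBMRegressor(n_estimators=500, learning_rate=0.05)
model.fit(X, y)
pred = model.predict(X)

rsr = np.sqrt(np.mean((y - pred) ** 2)) / np.std(y)                 # RMSE / obs. std. dev.
nse = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)     # Nash-Sutcliffe efficiency
mae = mean_absolute_error(y, pred)
```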

Application of POD reduced-order algorithm on data-driven modeling of rod bundle

  • Kang, Huilun;Tian, Zhaofei;Chen, Guangliang;Li, Lei;Wang, Tianyu
    • Nuclear Engineering and Technology, v.54 no.1, pp.36-48, 2022
  • As a valid numerical method to obtain high-resolution results of a flow field, computational fluid dynamics (CFD) has been widely used to study coolant flow and heat transfer characteristics in fuel rod bundles. However, the time-consuming, iterative calculation of the Navier-Stokes equations makes CFD unsuitable for scenarios that require efficient simulation, such as sensitivity analysis and uncertainty quantification. To solve this problem, a reduced-order model (ROM) based on proper orthogonal decomposition (POD) and machine learning (ML) is proposed to simulate the flow field efficiently. Firstly, a validated CFD model is established to output the flow-field data set of the rod bundle. Secondly, based on the POD method, the modes and corresponding coefficients of the flow field are extracted. Then, a deep feed-forward neural network, chosen for its efficiency in approximating arbitrary functions and its ability to handle high-dimensional, strongly nonlinear problems, is used to build a model that maps the nonlinear relationship between the mode coefficients and the boundary conditions. A trained surrogate model for mode-coefficient prediction is obtained after a certain number of training iterations. Finally, the flow field is reconstructed by combining the product of the POD basis and the coefficients. Based on the test dataset, an evaluation of the ROM is carried out. The evaluation results show that the proposed POD-ROM accurately describes the flow status of the fluid field in rod bundles with high resolution in only a few milliseconds. (A minimal sketch of the POD-ROM workflow follows.)
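A minimal sketch of the POD-ROM pipeline under assumed array shapes, with scikit-learn's MLPRegressor standing in for the paper's deep feed-forward network: extract POD modes via SVD, regress the mode coefficients on the boundary conditions, then reconstruct the field from the retained basis.

```python
# Sketch only: POD reduced-order model with a neural-network mapping from
# boundary conditions to mode coefficients. File names and shapes are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

snapshots = np.load("rod_bundle_snapshots.npy")   # (n_cells, n_cases) CFD flow-field matrix
bcs = np.load("boundary_conditions.npy")          # (n_cases, n_bc) boundary conditions

mean_field = snapshots.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(snapshots - mean_field, full_matrices=False)
r = 10                                            # number of retained POD modes (illustrative)
basis = U[:, :r]                                  # POD basis
coeffs = basis.T @ (snapshots - mean_field)       # (r, n_cases) mode coefficients

# Feed-forward surrogate: boundary conditions -> mode coefficients.
net = MLPRegressor(hidden_layer_sizes=(64, 64, 64), max_iter=5000)
net.fit(bcs, coeffs.T)

# Reconstruction for a new boundary condition (milliseconds instead of a full CFD run).
new_coeffs = net.predict(bcs[:1])                 # shape (1, r)
reconstructed = mean_field[:, 0] + basis @ new_coeffs[0]
```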

Reconstruction of wind speed fields in mountainous areas using a full convolutional neural network

  • Ruifang Shen;Bo Li;Ke Li;Bowen Yan;Yuanzhao Zhang
    • Wind and Structures, v.38 no.4, pp.231-244, 2024
  • As wind farms expand into low-wind-speed areas, an increasing number are being established in mountainous regions. To fully utilize wind energy resources, it is essential to understand the details of mountain flow fields. Reconstructing the wind speed field in complex terrain is crucial for the planning, design, and operation of wind farms, and it affects a wind farm's profits throughout its life cycle. Currently, wind speed reconstruction is achieved primarily through physical and machine learning methods; however, physical methods often incur significant computational costs. Therefore, we propose a Full Convolutional Neural Network (FCNN)-based reconstruction method for mountain wind velocity fields to evaluate wind resources more accurately and efficiently. This method establishes the mapping relation between terrain, wind angle, height, and the corresponding fields of the three velocity components within a specific terrain range. Guided by this mapping relation, wind velocity fields of the three components at different terrains, wind angles, and heights can be generated. The effectiveness of the method was demonstrated by reconstructing the wind speed field of complex terrain in Beijing. (A sketch of such a network follows.)
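A minimal sketch of a fully convolutional encoder-decoder in the spirit of the FCNN described above, with an assumed grid size and channel layout (terrain elevation, wind angle, and height in; three velocity components out); the paper's actual architecture is not reproduced here.

```python
# Sketch only: fully convolutional network mapping terrain/wind-angle/height
# channels to the three velocity-component fields. All sizes are assumptions.
from tensorflow import keras
from tensorflow.keras import layers

H, W = 128, 128                              # assumed grid resolution
inputs = keras.Input(shape=(H, W, 3))        # channels: terrain height, wind angle, eval. height

x = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)            # downsample
x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)   # upsample
outputs = layers.Conv2D(3, 1, padding="same")(x)   # u, v, w velocity components

model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")
model.summary()
```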

Domain Knowledge Incorporated Local Rule-based Explanation for ML-based Bankruptcy Prediction Model (머신러닝 기반 부도예측모형에서 로컬영역의 도메인 지식 통합 규칙 기반 설명 방법)

  • Soo Hyun Cho;Kyung-shik Shin
    • Information Systems Review, v.24 no.1, pp.105-123, 2022
  • Thanks to the remarkable success of Artificial Intelligence (AI) techniques, new possibilities for their application to real-world problems have emerged. One prominent application is the bankruptcy prediction model, which is often used as a basic knowledge base for credit scoring models in the financial industry. As a result, there has been extensive research on how to improve the prediction accuracy of such models. However, despite their impressive performance, machine learning (ML)-based models are difficult to deploy because of their intrinsic opacity, especially in fields that require or value an explanation of the results the model produces. The financial domain is one of the areas where explanation matters to stakeholders such as domain experts and customers. In this paper, we propose a novel approach that incorporates financial domain knowledge into local rule generation to provide explanations for the bankruptcy prediction model at the instance level. The results show that the proposed method successfully selects and classifies the extracted rules based on their feasibility and the information they convey to users. (A generic local-rule sketch follows.)
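A generic sketch, not the authors' algorithm: one common way to obtain local rules for a single prediction is to fit a shallow decision tree to a black-box model's outputs in a neighbourhood of the instance and then filter the extracted rules against domain constraints. All data and feature names below are illustrative.

```python
# Sketch only: local rule extraction around one instance of a black-box
# bankruptcy classifier, using a placeholder dataset and feature names.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 4))                                   # placeholder financial ratios
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.3, size=2000) < 0).astype(int)

black_box = RandomForestClassifier(n_estimators=200).fit(X, y)   # opaque ML model

instance = X[0]
# Perturb around the instance and label the neighbourhood with the black-box model.
neighbourhood = instance + rng.normal(scale=0.3, size=(500, 4))
labels = black_box.predict(neighbourhood)

# Shallow tree whose decision paths serve as local rules for this instance.
local_tree = DecisionTreeClassifier(max_depth=3).fit(neighbourhood, labels)
rules = export_text(local_tree,
                    feature_names=["debt_ratio", "roa", "liquidity", "turnover"])
print(rules)

# A domain-knowledge step would keep only rules whose direction agrees with known
# financial relationships (e.g. a higher debt_ratio should not reduce bankruptcy risk).
```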