• Title/Summary/Keyword: (ML) Machine learning

Using machine learning to forecast and assess the uncertainty in the response of a typical PWR undergoing a steam generator tube rupture accident

  • Tran Canh Hai Nguyen;Aya Diab
    • Nuclear Engineering and Technology
    • /
    • v.55 no.9
    • /
    • pp.3423-3440
    • /
    • 2023
  • In this work, a multivariate time-series machine learning meta-model is developed to predict the transient response of a typical nuclear power plant (NPP) undergoing a steam generator tube rupture (SGTR). The model employs Recurrent Neural Networks (RNNs), including the Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), and a hybrid CNN-LSTM model. To address the uncertainty inherent in such predictions, a Bayesian Neural Network (BNN) was implemented. The models were trained using a database generated by the Best Estimate Plus Uncertainty (BEPU) methodology, coupling the thermal-hydraulics code RELAP5/SCDAP/MOD3.4 to the statistical tool DAKOTA to predict the variation in system response under various operational and phenomenological uncertainties. The RNN models successfully capture the underlying characteristics of the data with reasonable accuracy, and the BNN-LSTM approach offers an additional layer of insight into the level of uncertainty associated with the predictions. The results demonstrate that LSTM outperforms GRU, while the hybrid CNN-LSTM model is computationally the most efficient. This study aims to gain a better understanding of the capabilities and limitations of machine learning models in the context of nuclear safety. By expanding the application of ML models to more severe accident scenarios, where operators are under extreme stress and prone to errors, ML models can provide valuable support and act as expert systems to assist in decision-making while minimizing the chances of human error.
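
As a rough illustration of the kind of meta-model described above, the sketch below trains a small Keras LSTM on a multivariate time-series regression task. All array shapes and the random stand-in data are hypothetical; the actual database comes from RELAP5/SCDAP/DAKOTA runs, and the paper's Bayesian uncertainty layer is not reproduced here.

```python
# Minimal sketch: multivariate time-series LSTM meta-model (hypothetical shapes).
import numpy as np
import tensorflow as tf

N_SAMPLES, LOOKBACK, N_FEATURES, N_TARGETS = 200, 30, 8, 4  # hypothetical dimensions

# Toy stand-in for the BEPU-generated database (real data comes from RELAP5/DAKOTA runs).
X = np.random.rand(N_SAMPLES, LOOKBACK, N_FEATURES).astype("float32")
y = np.random.rand(N_SAMPLES, N_TARGETS).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(LOOKBACK, N_FEATURES)),
    tf.keras.layers.LSTM(64),            # recurrent encoder of the transient history
    tf.keras.layers.Dense(N_TARGETS),    # one output per predicted plant variable
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)
```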

Hybrid machine learning with moth-flame optimization methods for strength prediction of CFDST columns under compression

  • Quang-Viet Vu;Dai-Nhan Le;Thai-Hoan Pham;Wei Gao;Sawekchai Tangaramvong
    • Steel and Composite Structures
    • /
    • v.51 no.6
    • /
    • pp.679-695
    • /
    • 2024
  • This paper presents a novel technique that combines machine learning (ML) with moth-flame optimization (MFO) methods to predict the axial compressive strength (ACS) of concrete-filled double-skin steel tube (CFDST) columns. The proposed model is trained and tested on a dataset containing 125 tests of CFDST columns subjected to compressive loading. Five ML models, including extreme gradient boosting (XGBoost), gradient tree boosting (GBT), categorical gradient boosting (CAT), support vector machines (SVM), and decision tree (DT) algorithms, are utilized in this work. The MFO algorithm is applied to find the optimal hyperparameters of these ML models and to determine the most effective model for predicting the ACS of CFDST columns. Predictive results, evaluated with several performance metrics, reveal that the MFO-CAT model provides superior accuracy compared to the other considered models. The accuracy of the MFO-CAT model is validated by comparing its predictions with existing design codes and formulae. Moreover, the significance and contribution of each feature in the dataset are examined by employing the SHapley Additive exPlanations (SHAP) method. A comprehensive uncertainty quantification of the probabilistic characteristics of the ACS of CFDST columns is conducted for the first time to examine the models' responses to variations of the input variables in stochastic environments. Finally, a web-based application is developed to predict the ACS of CFDST columns, enabling rapid practical use without requiring any programming or machine learning expertise.
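
To make the MFO-plus-ML idea concrete, here is a minimal sketch of a simplified moth-flame optimization loop tuning two gradient-boosting hyperparameters by cross-validation. The dataset, bounds, population size, and iteration count are all illustrative assumptions, and scikit-learn's GradientBoostingRegressor stands in for the paper's CAT/XGBoost models.

```python
# Simplified moth-flame optimization (MFO) over two GBT hyperparameters.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=125, n_features=6, noise=10.0, random_state=0)
lo, hi = np.array([0.01, 1.0]), np.array([0.3, 6.0])   # learning_rate, max_depth bounds

def fitness(p):
    """Cross-validated RMSE for one candidate hyperparameter vector (lower is better)."""
    model = GradientBoostingRegressor(learning_rate=p[0], max_depth=int(round(p[1])),
                                      random_state=0)
    return -cross_val_score(model, X, y, cv=3,
                            scoring="neg_root_mean_squared_error").mean()

rng = np.random.default_rng(0)
n, T, b = 10, 15, 1.0                                   # moths, iterations, spiral shape
moths = rng.uniform(lo, hi, size=(n, 2))
for t in range(T):
    fit = np.array([fitness(m) for m in moths])
    order = np.argsort(fit)
    flames, flame_fit = moths[order].copy(), fit[order]
    n_flames = max(1, round(n - t * (n - 1) / T))       # flame count shrinks over time
    a = -1 - t / T                                      # convergence constant in [-2, -1]
    for i in range(n):
        j = min(i, n_flames - 1)
        d = np.abs(flames[j] - moths[i])
        r = (a - 1) * rng.random(2) + 1                 # spiral parameter in [a, 1]
        moths[i] = d * np.exp(b * r) * np.cos(2 * np.pi * r) + flames[j]
        moths[i] = np.clip(moths[i], lo, hi)
print("best hyperparameters:", flames[0], "CV RMSE:", flame_fit[0])
```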

Towards Effective Analysis and Tracking of Mozilla and Eclipse Defects using Machine Learning Models based on Bugs Data

  • Hassan, Zohaib;Iqbal, Naeem;Zaman, Abnash
    • Soft Computing and Machine Intelligence
    • /
    • v.1 no.1
    • /
    • pp.1-10
    • /
    • 2021
  • Analysis and tracking of bug reports is a challenging field in mining software repositories. It is one of the fundamental ways to explore the large amounts of data acquired from defect tracking systems and to discover patterns and valuable knowledge about the bug triaging process. Furthermore, bug data is publicly accessible from systems such as Bugzilla and JIRA, and with robust machine learning (ML) techniques it is possible to process and analyze massive amounts of data to extract underlying patterns, knowledge, and insights. It is therefore an interesting area in which to propose innovative and robust solutions for analyzing and tracking bug reports originating from different open-source projects, including Mozilla and Eclipse. This research study presents an ML-based classification model to analyze and track bug defects for enhancing software engineering management (SEM) processes. In this work, Artificial Neural Network (ANN) and Naive Bayes (NB) classifiers are implemented using open-source bug datasets from Mozilla and Eclipse. Different evaluation measures are employed to analyze and evaluate the experimental results, and a comparative analysis contrasts the results of the ANN with those of the NB classifier. The experimental results indicate that the ANN achieved higher accuracy than the NB classifier. The proposed study will enhance SEM processes and contribute to the body of knowledge in the data mining field.
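
A minimal sketch of the ANN-versus-NB comparison the abstract describes, using scikit-learn on a handful of made-up bug-report strings; the report texts, labels, and triage classes are purely hypothetical.

```python
# Sketch: ANN (MLP) vs. Naive Bayes on toy bug-report text (illustrative labels only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

reports = ["crash on startup", "ui button misaligned", "memory leak in parser",
           "typo in dialog text", "null pointer on save", "icon rendered blurry"]
labels = [1, 0, 1, 0, 1, 0]  # 1 = severe, 0 = minor (hypothetical triage classes)

X = TfidfVectorizer().fit_transform(reports)  # bag-of-words features for both models
for clf in (MultinomialNB(),
            MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)):
    pred = clf.fit(X, labels).predict(X)
    print(type(clf).__name__, "training accuracy:", accuracy_score(labels, pred))
```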

A SE Approach for Real-Time NPP Response Prediction under CEA Withdrawal Accident Conditions

  • Wapachi, Felix Isuwa;Diab, Aya
    • Journal of the Korean Society of Systems Engineering
    • /
    • v.18 no.2
    • /
    • pp.75-93
    • /
    • 2022
  • A machine learning (ML) data-driven meta-model is proposed as a surrogate to reduce the excessive computational cost of the physics-based model and to facilitate real-time prediction of a nuclear power plant's transient response. To forecast the transient response, three ML meta-models based on recurrent neural networks (RNNs) are developed: Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), and a sequential combination of a Convolutional Neural Network (CNN) and LSTM. The chosen accident scenario is a control element assembly withdrawal at power concurrent with a Loss Of Offsite Power (LOOP). The transient response was obtained using the best-estimate thermal-hydraulics code MARS-KS and cross-validated against the Design Control Document (DCD). The DAKOTA software is loosely coupled with the MARS-KS code via a Python interface to perform the Best Estimate Plus Uncertainty (BEPU) analysis and generate a time-series database of the system response with which to train, test, and validate the ML meta-models. Key uncertain parameters identified as required by the CASU methodology were propagated using non-parametric Monte Carlo (MC) random propagation and the Latin hypercube sampling technique until a statistically significant database (181 samples), as required by Wilks' fifth-order formula, was achieved at the 95% probability and 95% confidence level. The three RNN models were built and optimized with the help of the Talos tool and demonstrated excellent performance in forecasting the most probable NPP transient response. This research was guided by the Systems Engineering (SE) approach for the systematic and efficient planning and execution of the research.
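
The 181-sample figure quoted above can be reproduced from Wilks' formula. The sketch below computes the minimal sample size for a one-sided fifth-order 95/95 tolerance limit and draws a Latin hypercube sample of that size; the three parameter ranges are hypothetical placeholders for the actual uncertain inputs.

```python
# Sketch: minimal n for a one-sided 95/95 Wilks fifth-order tolerance limit,
# plus Latin hypercube sampling of the uncertain parameters.
from scipy.stats import binom, qmc

def wilks_n(order=5, gamma=0.95, beta=0.95):
    """Smallest n such that the order-th largest sample bounds the gamma quantile
    with confidence at least beta: 1 - CDF_Binom(order-1; n, 1-gamma) >= beta."""
    n = order
    while 1.0 - binom.cdf(order - 1, n, 1.0 - gamma) < beta:
        n += 1
    return n

print(wilks_n())  # -> 181, matching the database size quoted in the abstract

sampler = qmc.LatinHypercube(d=3, seed=0)   # e.g., 3 hypothetical uncertain parameters
unit = sampler.random(n=181)                # 181 stratified points in [0, 1)^3
samples = qmc.scale(unit, [0.9, 0.8, 0.95], [1.1, 1.2, 1.05])  # hypothetical bounds
```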

5G Network Resource Allocation and Traffic Prediction based on DDPG and Federated Learning (DDPG 및 연합학습 기반 5G 네트워크 자원 할당과 트래픽 예측)

  • Seok-Woo Park;Oh-Sung Lee;In-Ho Ra
    • Smart Media Journal
    • /
    • v.13 no.4
    • /
    • pp.33-48
    • /
    • 2024
  • With the advent of 5G, characterized by Enhanced Mobile Broadband (eMBB), Ultra-Reliable Low Latency Communications (URLLC), and Massive Machine Type Communications (mMTC), efficient network management and service provision are becoming increasingly critical. This paper proposes a novel approach to address key challenges of 5G networks, namely ultra-high speed, ultra-low latency, and ultra-reliability, while dynamically optimizing network slicing and resource allocation using machine learning (ML) and deep learning (DL) techniques. The proposed methodology utilizes prediction models for network traffic and resource allocation, and employs Federated Learning (FL) techniques to simultaneously optimize network bandwidth, latency, and enhance privacy and security. Specifically, this paper extensively covers the implementation methods of various algorithms and models such as Random Forest and LSTM, thereby presenting methodologies for the automation and intelligence of 5G network operations. Finally, the performance enhancement effects achievable by applying ML and DL to 5G networks are validated through performance evaluation and analysis, and solutions for network slicing and resource management optimization are proposed for various industrial applications.
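
As a toy illustration of the federated learning component, the sketch below runs Federated Averaging (FedAvg) over a few simulated clients, each fitting a small linear traffic model locally before the server averages the weights. The client data, model, and round counts are assumptions; the paper's DDPG agent and LSTM predictors are not reproduced here.

```python
# Sketch: Federated Averaging (FedAvg) over per-client linear traffic models.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.01, epochs=20):
    """Gradient-descent linear regression standing in for a client's local model."""
    w = weights.copy()
    for _ in range(epochs):
        w -= lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

# 4 hypothetical base stations sharing the same true traffic relationship.
true_w = np.array([0.5, -1.0, 2.0])
clients = []
for _ in range(4):
    X = rng.normal(size=(50, 3))
    clients.append((X, X @ true_w + rng.normal(0, 0.1, 50)))

global_w = np.zeros(3)
for _ in range(10):  # communication rounds
    local = [local_update(global_w, X, y) for X, y in clients]
    global_w = np.mean(local, axis=0)  # FedAvg: average client weights (equal sizes)
print("federated weights:", global_w)  # approaches [0.5, -1.0, 2.0]
```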

Machine Learning-based SOH Estimation Algorithm Using a Linear Regression Analysis (선형 회귀 분석법을 이용한 머신 러닝 기반의 SOH 추정 알고리즘)

  • Kang, Seung-Hyun;Noh, Tae-Won;Lee, Byoung-Kuk
    • The Transactions of the Korean Institute of Power Electronics
    • /
    • v.26 no.4
    • /
    • pp.241-248
    • /
    • 2021
  • A battery state-of-health (SOH) estimation algorithm using a machine learning-based linear regression method is proposed to track battery aging. The proposed algorithm analyzes the trend of the open-circuit voltage (OCV) curve, a parameter related to SOH. A section in which the SOH-OCV relationship is highly linear is selected and used for SOH estimation: the SOH of the aged battery is estimated over the selected interval using a machine learning-based linear regression method. The performance of the proposed SOH estimation algorithm is verified through experiments and simulations using battery packs for electric vehicles.
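
A minimal sketch of the core idea, fitting a linear regression of SOH against an OCV feature over a presumed high-linearity interval; the voltage/SOH pairs below are invented for illustration.

```python
# Sketch: fit SOH against an OCV feature over a selected high-linearity interval.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical aging data: OCV (V) measured at a fixed state of charge vs. SOH (%).
ocv = np.array([3.62, 3.60, 3.58, 3.57, 3.55, 3.53]).reshape(-1, 1)
soh = np.array([100.0, 96.0, 92.5, 89.0, 85.0, 81.0])

reg = LinearRegression().fit(ocv, soh)   # linear map from OCV feature to SOH
print("estimated SOH at OCV=3.56 V:", reg.predict([[3.56]])[0])
```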

Leveraging Big Data for Spark Deep Learning to Predict Rating

  • Mishra, Monika;Kang, Mingoo;Woo, Jongwook
    • Journal of Internet Computing and Services
    • /
    • v.21 no.6
    • /
    • pp.33-39
    • /
    • 2020
  • This paper builds recommendation systems that leverage deep learning and the Big Data platform Spark to predict item ratings on the Amazon e-commerce site. Recommendation systems have become extremely popular in e-commerce in recent years and are important for both customers and sellers: they provide users with the products and services they are interested in, drawing on users' previous shopping activities and digital footprints to recommend the next items to buy. We developed the recommendation models on Amazon AWS Cloud services to predict users' ratings for items using the massive data set of Amazon customer reviews, and we present a Big Data architecture that can store and compute the large-scale data set. We adopted deep learning because it is known to achieve higher accuracy on massive data sets. Finally, a comparison in terms of both accuracy and performance is presented between the deep learning architecture with Spark ML and the traditional Big Data architecture using Spark ML alone.
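
For the "Spark ML alone" side of such a comparison, a typical choice is Spark's built-in ALS recommender; the abstract does not name the exact baseline, so this is an assumption. The toy (userId, itemId, rating) rows below stand in for the Amazon review data, and the deep learning model is not shown.

```python
# Sketch: a Spark ML collaborative-filtering recommender (ALS) for rating prediction.
from pyspark.sql import SparkSession
from pyspark.ml.recommendation import ALS

spark = SparkSession.builder.appName("rating-prediction").getOrCreate()
# Hypothetical stand-in for the Amazon review data (userId, itemId, rating).
ratings = spark.createDataFrame(
    [(0, 10, 4.0), (0, 11, 2.0), (1, 10, 5.0),
     (1, 12, 3.0), (2, 11, 1.0), (2, 12, 4.0)],
    ["userId", "itemId", "rating"],
)
als = ALS(userCol="userId", itemCol="itemId", ratingCol="rating",
          rank=4, maxIter=5, coldStartStrategy="drop")
model = als.fit(ratings)
model.transform(ratings).show()  # predicted ratings alongside the observed ones
```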

A Nature-inspired Multiple Kernel Extreme Learning Machine Model for Intrusion Detection

  • Shen, Yanping;Zheng, Kangfeng;Wu, Chunhua;Yang, Yixian
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.2
    • /
    • pp.702-723
    • /
    • 2020
  • The application of machine learning (ML) to intrusion detection has attracted much attention with the rapid growth of information security threats. As an efficient multi-label classifier, the kernel extreme learning machine (KELM) has gradually been adopted in intrusion detection systems. However, the performance of KELM relies heavily on kernel selection. In this paper, a novel multiple kernel extreme learning machine (MKELM) model combining ReliefF with nature-inspired methods is proposed for intrusion detection. The MKELM is designed to estimate whether an attack is being carried out, and ReliefF is used as a preprocessor for the MKELM to select appropriate features. In addition, nature-inspired methods, whose fitness functions are defined based on kernel alignment, are employed to build the optimal composite kernel in the MKELM. The KDD99, NSL, and Kyoto datasets are used to evaluate the performance of the model. The experimental results indicate that the optimal composite kernel function can be determined by any of the heuristic optimization methods considered, including PSO, GA, GWO, BA, and DE. Since the filter-based feature selection method is combined with the multiple kernel learning approach independently of the classifier, the proposed model achieves good performance while saving substantial training time.
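
A minimal sketch of the composite-kernel idea: combine a few base kernels with candidate weights, score each combination by kernel-target alignment, and train a KELM with the best kernel. Random search stands in for the nature-inspired optimizers (PSO/GA/GWO/BA/DE), and toy data replaces KDD99/NSL/Kyoto.

```python
# Sketch: composite kernel from weighted base kernels, scored by kernel alignment.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel, linear_kernel

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)        # toy binary labels

def composite_kernel(X, w):
    bases = [rbf_kernel(X), polynomial_kernel(X, degree=2), linear_kernel(X)]
    return sum(wi * K for wi, K in zip(w, bases))

def alignment(K, y):
    """Kernel-target alignment: <K, yy^T>_F / (||K||_F * ||yy^T||_F)."""
    Ky = np.outer(y, y)
    return np.sum(K * Ky) / (np.linalg.norm(K) * np.linalg.norm(Ky))

# Random search as a stand-in for the nature-inspired kernel-weight optimizers.
best_w, best_a = None, -np.inf
for _ in range(200):
    w = rng.random(3); w /= w.sum()
    a = alignment(composite_kernel(X, w), y)
    if a > best_a:
        best_w, best_a = w, a

# KELM training with the optimized kernel: beta = (I/C + K)^(-1) y
C = 10.0
K = composite_kernel(X, best_w)
beta = np.linalg.solve(np.eye(len(X)) / C + K, y)
print("best weights:", best_w, "training accuracy:", np.mean(np.sign(K @ beta) == y))
```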

Machine-Learning-Based Link Adaptation for Energy-Efficient MIMO-OFDM Systems (MIMO-OFDM 시스템에서 에너지 효율성을 위한 기계 학습 기반 적응형 전송 기술 및 Feature Space 연구)

  • Oh, Myeung Suk;Kim, Gibum;Park, Hyuncheol
    • The Journal of Korean Institute of Electromagnetic Engineering and Science
    • /
    • v.27 no.5
    • /
    • pp.407-415
    • /
    • 2016
  • Recent wireless communication trends have emphasized the importance of energy-efficient transmission. In this paper, machine-learning-based link adaptation for maximum energy efficiency in multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) wireless systems is considered. To reflect frequency-selective MIMO-OFDM channels, a two-dimensional capacity (2D-CAP) feature space is proposed. In addition, a machine-learning-based bit and power adaptation (ML-BPA) algorithm that performs classification-based link adaptation is presented. Simulation results show that the 2D-CAP feature space represents channel conditions accurately and brings a noticeable improvement in link adaptation performance. Compared with other feature spaces, including the ordered post-processing signal-to-noise ratio (ordSNR) feature space, 2D-CAP shows distinct advantages in either energy-efficiency performance or computational complexity.
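
A rough sketch of classification-based link adaptation: a per-subcarrier capacity vector (a stand-in for the paper's 2D-CAP feature space, whose exact construction is not detailed in the abstract) is mapped to a modulation order by a k-NN classifier. The SNR ranges and labeling rule are hypothetical.

```python
# Sketch: classification-based bit adaptation from per-subcarrier capacity features.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
N_SC = 16                                     # subcarriers (hypothetical)
snr_db = rng.uniform(0, 30, size=(500, N_SC))
capacity = np.log2(1 + 10 ** (snr_db / 10))   # per-subcarrier capacity feature

# Hypothetical labels: the modulation order (bits/symbol) the channel supports.
labels = np.clip((capacity.mean(axis=1) // 2) * 2, 2, 6).astype(int)

clf = KNeighborsClassifier(n_neighbors=5).fit(capacity[:400], labels[:400])
print("held-out accuracy:", clf.score(capacity[400:], labels[400:]))
```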

Lightweight Named Entity Extraction for Korean Short Message Service Text

  • Seon, Choong-Nyoung;Yoo, Jin-Hwan;Kim, Hark-Soo;Kim, Ji-Hwan;Seo, Jung-Yun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.5 no.3
    • /
    • pp.560-574
    • /
    • 2011
  • In this paper, we propose a hybrid of a machine learning (ML) algorithm and a rule-based algorithm to implement a lightweight named entity (NE) extraction system for Korean SMS text. NE extraction from Korean SMS text is a challenging task due to the resource limitations of mobile phones, corruption in the input text, the need to incorporate personal information stored on the phone, and the sparsity of training data. The proposed hybrid method retains the advantages of both statistical ML and rule-based algorithms, providing a fully automated procedure for combining ML approaches with their correction rules through a threshold-based soft-decision function. The method is applied to Korean SMS text to extract personal names as well as location names, which are key information in a personal appointment management system. Our system achieved an F-measure of 80.53% in this domain, superior to conventional ML approaches.
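
A minimal sketch of a threshold-based soft-decision function of the kind the abstract mentions: trust the statistical model when its probability clears a threshold, otherwise fall back to the rule base. The threshold value, candidate strings, and probabilities are illustrative assumptions.

```python
# Sketch: threshold-based soft decision combining an ML score with a correction rule.
def soft_decision(ml_prob, rule_match, threshold=0.7):
    """Accept the ML label when it is confident; otherwise defer to the rule base."""
    if ml_prob >= threshold:
        return "ML"                      # confident statistical decision
    return "RULE" if rule_match else "REJECT"

# Hypothetical candidate NEs from an SMS: (ML probability, matched a lexicon rule?)
for cand, p, rule in [("Jihwan", 0.91, True), ("Gangnam", 0.55, True),
                      ("asap", 0.40, False)]:
    print(cand, "->", soft_decision(p, rule))
```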