• Title/Summary/Keyword: Ensemble Machine Learning Models


An Ensemble Model for Machine Failure Prediction (앙상블 모델 기반의 기계 고장 예측 방법)

  • Cheon, Kang Min; Yang, Jaekyung
    • Journal of Korean Society of Industrial and Systems Engineering / v.43 no.1 / pp.123-131 / 2020
  • There have been many past studies on methods for predicting machine failure, and recently a large body of research and applications has emerged that diagnoses the physical condition of machines and their parts and estimates remaining life through various methods. Survival models are also used to predict plant failures based on past anomaly cycles. In particular, special machines that reflect the fluid flow and process characteristics of chemical plants are connected to hundreds or thousands of sensors, so there are many factors to consider, including process and material data as well as derived variables. In this paper, the data were preprocessed through unsupervised time-series anomaly detection to predict abnormalities of these special machines. Next, clustering results reflecting the data characteristics were used to produce additional variables, and a training data set was created based on the history of past facility abnormalities. Finally, a prediction methodology based on a supervised learning algorithm was applied, and updating the model was confirmed to improve the accuracy of facility failure prediction. This is expected to improve the efficiency of facility operation by allowing maintenance timing and parts supply to be adjusted flexibly through the prediction of machine abnormalities and the extraction of key factors.
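The pipeline this abstract outlines (unsupervised time-series anomaly detection, a clustering-derived variable, then a supervised failure classifier) can be illustrated with a minimal scikit-learn sketch. This is not the authors' code: the estimators (IsolationForest, KMeans, gradient boosting), the synthetic sensor columns, and the toy failure label are all assumptions chosen for illustration.

```python
# Illustrative sketch only -- not the paper's implementation.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest, GradientBoostingClassifier
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
# Synthetic stand-in for many sensor channels (here: 5 channels) plus a toy failure label.
X = pd.DataFrame(rng.normal(size=(2000, 5)),
                 columns=[f"sensor_{i}" for i in range(5)])
y = (X["sensor_0"] + rng.normal(scale=0.5, size=2000) > 1.5).astype(int)

# 1) Unsupervised anomaly detection used as a preprocessing feature.
iso = IsolationForest(contamination=0.05, random_state=0)
X["anomaly_score"] = iso.fit(X).decision_function(X)

# 2) Clustering-based derived variable reflecting operating regimes.
X["cluster"] = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(
    X.drop(columns=["anomaly_score"]))

# 3) Supervised failure prediction on the augmented feature set.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```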

Comparison of Heart Failure Prediction Performance Using Various Machine Learning Techniques

  • ByungJoo Kim
    • International Journal of Internet, Broadcasting and Communication / v.16 no.4 / pp.290-300 / 2024
  • This study presents a comprehensive evaluation of various machine learning models for predicting heart failure outcomes. Leveraging a data set of clinical records, the performance of Logistic Regression, Support Vector Machine (SVM), Random Forest, Soft Voting ensemble, and XGBoost models is rigorously assessed using multiple evaluation metrics, including accuracy, precision, recall, F1-score, and area under the receiver operating characteristic curve (AUC). The analysis reveals that the XGBoost model outperforms the other techniques across all metrics, exhibiting the highest AUC score and indicating superior discriminative ability in distinguishing between patients with and without heart failure. Furthermore, the study highlights the value of the feature importance analysis provided by XGBoost, which offers insights into the most influential predictors of heart failure and can inform clinical decision-making and patient management strategies. The research also underscores the significance of balancing precision and recall, as reflected by the F1-score, in medical applications to minimize the consequences of false negatives.
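A minimal sketch of this kind of model comparison, assuming scikit-learn and the xgboost package; a synthetic feature matrix stands in for the paper's clinical records, and the preprocessing is omitted. The soft-voting ensemble and the metric set mirror those named in the abstract.

```python
# Illustrative comparison sketch -- X, y stand in for the clinical records.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)
from xgboost import XGBClassifier  # assumes xgboost is installed

X, y = make_classification(n_samples=600, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          stratify=y, random_state=0)

models = {
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "SVM": SVC(probability=True),  # probability=True enables AUC and soft voting
    "RandomForest": RandomForestClassifier(n_estimators=200, random_state=0),
    "XGBoost": XGBClassifier(eval_metric="logloss", random_state=0),
}
models["SoftVoting"] = VotingClassifier(
    estimators=[(k, v) for k, v in models.items()], voting="soft")

for name, model in models.items():
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)[:, 1]
    pred = model.predict(X_te)
    print(f"{name:18s} acc={accuracy_score(y_te, pred):.3f} "
          f"prec={precision_score(y_te, pred):.3f} rec={recall_score(y_te, pred):.3f} "
          f"f1={f1_score(y_te, pred):.3f} auc={roc_auc_score(y_te, proba):.3f}")
```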

Deep Learning-Based Brain Tumor Classification in MRI images using Ensemble of Deep Features

  • Kang, Jaeyong; Gwak, Jeonghwan
    • Journal of the Korea Society of Computer and Information / v.26 no.7 / pp.37-44 / 2021
  • Automatic classification of brain MRI images plays an important role in the early diagnosis of brain tumors. In this work, we present a deep learning-based brain tumor classification model for MRI images using an ensemble of deep features. In our proposed framework, three different sets of deep features are extracted from a brain MR image using three different pre-trained models. The extracted deep features are then fed to the classification module. In the classification module, the three sets of deep features are first fed into fully-connected layers individually to reduce their dimensionality. The outputs of these fully-connected layers are then concatenated and fed into a final fully-connected layer to predict the output class. To evaluate our proposed model, we use an openly accessible brain MRI dataset from the web. Experimental results show that our proposed model outperforms other machine learning-based models.
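The feature-ensemble architecture described above can be sketched in PyTorch. The backbone choices (ResNet-18, DenseNet-121, MobileNet V2), the reduced feature dimension, and the number of tumor classes below are assumptions for illustration and are not necessarily the networks or settings used in the paper.

```python
# Sketch of an "ensemble of deep features" classifier -- illustrative only.
import torch
import torch.nn as nn
from torchvision import models

class DeepFeatureEnsemble(nn.Module):
    def __init__(self, num_classes=4, reduced_dim=256):
        super().__init__()
        # Three pre-trained backbones used purely as feature extractors
        # (the specific backbones are an assumption, not the paper's exact setup).
        r = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        d = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
        m = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
        self.backbones = nn.ModuleList([
            nn.Sequential(*list(r.children())[:-1], nn.Flatten()),               # 512-d
            nn.Sequential(d.features, nn.ReLU(inplace=True),
                          nn.AdaptiveAvgPool2d(1), nn.Flatten()),                # 1024-d
            nn.Sequential(m.features, nn.AdaptiveAvgPool2d(1), nn.Flatten()),    # 1280-d
        ])
        # One fully-connected layer per feature set to reduce dimensionality.
        self.reducers = nn.ModuleList(
            [nn.Linear(dim, reduced_dim) for dim in (512, 1024, 1280)])
        # Final fully-connected layer on the concatenated, reduced features.
        self.classifier = nn.Linear(3 * reduced_dim, num_classes)

    def forward(self, x):
        reduced = [torch.relu(red(bb(x)))
                   for bb, red in zip(self.backbones, self.reducers)]
        return self.classifier(torch.cat(reduced, dim=1))

model = DeepFeatureEnsemble()
logits = model(torch.randn(2, 3, 224, 224))  # dummy batch standing in for MR slices
print(logits.shape)  # torch.Size([2, 4])
```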

Development of Prediction Model of Chloride Diffusion Coefficient using Machine Learning (기계학습을 이용한 염화물 확산계수 예측모델 개발)

  • Kim, Hyun-Su
    • Journal of Korean Association for Spatial Structures / v.23 no.3 / pp.87-94 / 2023
  • Chloride is one of the most common threats to reinforced concrete (RC) durability. The alkaline environment of concrete forms a passive layer on the surface of reinforcement bars that protects them from corrosion. However, when the chloride concentration at the reinforcement bar reaches a certain level, the passive protection layer deteriorates, causing corrosion and ultimately reducing the structure's safety and durability. Therefore, understanding and predicting chloride diffusion is important for evaluating the safety and durability of RC structures. In this study, the chloride diffusion coefficient is predicted using machine learning techniques. Various techniques, namely multiple linear regression, decision tree, random forest, support vector machine, artificial neural networks, extreme gradient boosting, and k-nearest neighbor, were used, and the accuracy of these models was compared. To evaluate accuracy, root mean square error (RMSE), mean square error (MSE), mean absolute error (MAE), and the coefficient of determination (R2) were used as prediction performance indices. A k-fold cross-validation procedure was used to estimate the performance of the machine learning models on data not used during training, and grid search was applied for hyperparameter optimization. Numerical simulation showed that ensemble learning methods such as random forest and extreme gradient boosting successfully predicted the chloride diffusion coefficient, and artificial neural networks also provided accurate results.
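A minimal sketch of the evaluation protocol described above (grid search inside k-fold cross-validation, scored with RMSE, MSE, MAE, and R2), shown for one of the listed learners. The synthetic feature matrix, the estimator shown (random forest), and the hyperparameter grid are placeholders, not the paper's actual mix-design variables or settings.

```python
# Sketch of grid-searched, cross-validated regression -- illustrative only.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, KFold, cross_validate

X, y = make_regression(n_samples=300, n_features=8, noise=10.0, random_state=0)

# Hyperparameter optimization via grid search within 5-fold CV.
grid = GridSearchCV(
    RandomForestRegressor(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 10]},
    cv=KFold(n_splits=5, shuffle=True, random_state=0),
    scoring="neg_root_mean_squared_error",
)
grid.fit(X, y)
print("best params:", grid.best_params_)

# Report RMSE, MSE, MAE, and R2 on held-out folds for the tuned model.
scores = cross_validate(
    grid.best_estimator_, X, y,
    cv=KFold(n_splits=5, shuffle=True, random_state=1),
    scoring={"rmse": "neg_root_mean_squared_error",
             "mse": "neg_mean_squared_error",
             "mae": "neg_mean_absolute_error",
             "r2": "r2"},
)
for k in ("rmse", "mse", "mae"):
    print(k.upper(), f"{-scores['test_' + k].mean():.3f}")
print("R2", f"{scores['test_r2'].mean():.3f}")
```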

Forecasting of Iron Ore Prices using Machine Learning (머신러닝을 이용한 철광석 가격 예측에 대한 연구)

  • Lee, Woo Chang; Kim, Yang Sok; Kim, Jung Min; Lee, Choong Kwon
    • Journal of Korea Society of Industrial Information Systems / v.25 no.2 / pp.57-72 / 2020
  • The price of iron ore has continued to fluctuate with high demand and supply from many countries and companies. In this business environment, forecasting the price of iron ore has become important. This study developed machine learning models that forecast the price of iron ore one month after the trading events. The forecasting models used a distributed lag model and deep learning models such as MLP (multi-layer perceptron), RNN (recurrent neural network), and LSTM (long short-term memory). Comparing the individual models on the evaluation metrics, LSTM showed the lowest predictive error. Also, when the models were combined using the ensemble technique, the distributed lag and LSTM ensemble model showed the lowest prediction error.
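A sketch of the two components combined in such an ensemble: a distributed lag model (approximated here by linear regression on lagged prices) and a small LSTM, with their one-step-ahead forecasts averaged. The lag length, network size, equal ensemble weights, and the synthetic price series are assumptions; the paper's data and exact models are not reproduced.

```python
# Illustrative one-step-ahead ensemble: distributed lag (linear) + LSTM.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
price = np.cumsum(rng.normal(size=240)) + 100.0        # synthetic monthly price series
price = (price - price.mean()) / price.std()           # standardize for stable training
LAGS = 6

# Lagged design matrix: predict price[t] from price[t-LAGS : t].
X = np.stack([price[i:i + LAGS] for i in range(len(price) - LAGS)])
y = price[LAGS:]
split = int(0.8 * len(y))

# 1) Distributed lag model approximated by linear regression on the lags.
lag_model = LinearRegression().fit(X[:split], y[:split])
pred_lag = lag_model.predict(X[split:])

# 2) Small LSTM trained on the same lag windows.
class LSTMForecaster(nn.Module):
    def __init__(self, hidden=16):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
    def forward(self, x):                               # x: (batch, LAGS, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1]).squeeze(-1)

net = LSTMForecaster()
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
xb = torch.tensor(X[:split, :, None], dtype=torch.float32)
yb = torch.tensor(y[:split], dtype=torch.float32)
for _ in range(200):
    opt.zero_grad()
    loss = F.mse_loss(net(xb), yb)
    loss.backward()
    opt.step()

with torch.no_grad():
    pred_lstm = net(torch.tensor(X[split:, :, None], dtype=torch.float32)).numpy()

# 3) Simple ensemble: average the two forecasts.
pred_ens = 0.5 * (pred_lag + pred_lstm)

def rmse(p):
    return float(np.sqrt(np.mean((p - y[split:]) ** 2)))

print("lag:", rmse(pred_lag), "lstm:", rmse(pred_lstm), "ensemble:", rmse(pred_ens))
```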

Explainable AI Application for Machine Predictive Maintenance (설명 가능한 AI를 적용한 기계 예지 정비 방법)

  • Cheon, Kang Min; Yang, Jaekyung
    • Journal of Korean Society of Industrial and Systems Engineering / v.44 no.4 / pp.227-233 / 2021
  • Predictive maintenance has been one of the important applications of data science technology; it creates a predictive model by collecting numerous data related to the managed equipment. It does not predict equipment failure from just one or two signs, but quantifies and models numerous symptoms and historical data of actual failures. Statistical methods were widely used for predictive maintenance in the past, but recently many machine learning-based methods have been proposed. These machine learning-based methods are preferable in that they show more accurate prediction performance. However, with the exception of some learning models such as decision tree-based models, it is very difficult to know the structure of a learning model explicitly (a black-box model) and to explain to what extent certain attributes (features or variables) affected the prediction results. To overcome this problem, explainable artificial intelligence (AI) has recently been proposed: a methodology that makes it easy for users to understand and trust the results of machine learning-based models. In this paper, we propose an explainable AI method to further enhance the explanatory power of an existing learning model, targeting the previously proposed predictive model [5] that learned data from a core facility (Hyper Compressor) of a domestic chemical plant that produces polyethylene. The ensemble prediction model, which is a black-box model, was converted to a white-box model using explainable AI. The proposed methodology explains the direction of control for the major features in the failure prediction results through explainable AI. With this methodology, it is possible to flexibly adjust the timing of machine maintenance and the supply of parts, and to improve the efficiency of facility operation through proper pre-control.
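The abstract does not name the specific explanation technique, so the sketch below uses SHAP's TreeExplainer on a generic gradient-boosted failure classifier purely as one common way to turn a black-box ensemble's predictions into per-feature attributions; the authors' actual model and explainer may differ.

```python
# Illustrative only: SHAP attributions for a tree-ensemble failure model.
# (SHAP is an assumption here, not necessarily the paper's explainable-AI method.)
import numpy as np
import pandas as pd
import shap                                   # assumes the shap package is installed
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X = pd.DataFrame(X, columns=[f"sensor_{i}" for i in range(8)])

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer gives per-feature contributions to each failure prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Rank features by mean absolute contribution; the sign of individual SHAP
# values indicates the direction in which each feature pushed the prediction.
importance = pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns)
print(importance.sort_values(ascending=False))
```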

Feature selection and prediction modeling of drug responsiveness in Pharmacogenomics (약물유전체학에서 약물반응 예측모형과 변수선택 방법)

  • Kim, Kyuhwan; Kim, Wonkuk
    • The Korean Journal of Applied Statistics / v.34 no.2 / pp.153-166 / 2021
  • A main goal of pharmacogenomics studies is to predict an individual's drug responsiveness based on high-dimensional genetic variables. Because of the large number of variables, feature selection is required to reduce their number. The selected features are then used to construct a predictive model using machine learning algorithms. In the present study, we applied several hybrid feature selection methods, such as combinations of logistic regression, ReliefF, TuRF, random forest, and LASSO, to a next-generation sequencing data set of 400 epilepsy patients. We then fed the selected features to machine learning methods including random forest, gradient boosting, and support vector machine, as well as a stacking ensemble method. Our results showed that the stacking model with a hybrid feature selection of random forest and ReliefF performs better than the other combinations of approaches. Based on a 5-fold cross-validation partition, the mean test accuracy of the best model was 0.727 and its mean test AUC was 0.761. The stacking models also outperformed single machine learning predictive models when using the same selected features.
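A sketch of the general recipe (hybrid feature selection followed by a stacking ensemble), assuming scikit-learn plus the skrebate package for ReliefF; the synthetic data, the number of features kept, and the base learners are placeholders rather than the study's actual configuration.

```python
# Illustrative hybrid feature selection + stacking sketch (not the study's code).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import (RandomForestClassifier, GradientBoostingClassifier,
                              StackingClassifier)
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from skrebate import ReliefF                  # assumes skrebate is installed

X, y = make_classification(n_samples=400, n_features=200,
                           n_informative=15, random_state=0)

# Hybrid selection: combine the top-k features from RF importance and ReliefF scores.
# NOTE: for brevity, selection runs on the full data; in practice it should
# sit inside the cross-validation loop to avoid leakage.
k = 30
rf_rank = np.argsort(RandomForestClassifier(n_estimators=300, random_state=0)
                     .fit(X, y).feature_importances_)[::-1][:k]
relief = ReliefF(n_neighbors=10).fit(X, y)
relief_rank = np.argsort(relief.feature_importances_)[::-1][:k]
selected = np.union1d(rf_rank, relief_rank)
print("features kept:", len(selected))

# Stacking ensemble trained on the selected features.
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=300, random_state=0)),
                ("gb", GradientBoostingClassifier(random_state=0)),
                ("svm", SVC(probability=True))],
    final_estimator=LogisticRegression(max_iter=1000), cv=5)
acc = cross_val_score(stack, X[:, selected], y, cv=5, scoring="accuracy")
print("stacking CV accuracy:", acc.mean())
```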

Comparing the Performance of 17 Machine Learning Models in Predicting Human Population Growth of Countries

  • Otoom, Mohammad Mahmood
    • International Journal of Computer Science & Network Security / v.21 no.1 / pp.220-225 / 2021
  • The human population growth rate is an important parameter for real-world planning. Common approaches rely upon fixed parameters such as human population, mortality rate, and fertility rate, which are collected historically to determine a region's population growth rate. The literature does not provide a solution for areas with no historical data. In such areas, machine learning can solve the problem, but the multitude of machine learning algorithms makes it difficult to determine the best approach. Further, missing features are a common real-world problem. It is therefore essential to compare and select the machine learning techniques that perform best and most robustly in the presence of missing features. This study compares the performance of 17 machine learning techniques (base learners and ensemble learners) in predicting the human population growth rate of countries. Among the 17 techniques, random forest outperformed all the others both in predictive performance and in robustness to missing features. The study thus demonstrates and compares machine learning techniques for predicting the human population growth rate in settings where historical data and feature information are not available, and identifies the best-performing algorithm for population growth rate prediction.
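A compact version of this kind of comparison, shown with a handful of learners rather than all 17: each model is wrapped in an imputation pipeline and scored both on complete data and after randomly masking a fraction of feature values, which is one simple way to probe robustness to missing features. The dataset and masking rate below are synthetic placeholders, not the study's setup.

```python
# Illustrative robustness-to-missing-features comparison (not the paper's 17 models).
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_regression(n_samples=500, n_features=10, noise=5.0, random_state=0)

X_missing = X.copy()
mask = rng.random(X.shape) < 0.2              # hide 20% of feature values
X_missing[mask] = np.nan

models = {
    "linear": LinearRegression(),
    "knn": KNeighborsRegressor(),
    "random_forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "gradient_boosting": GradientBoostingRegressor(random_state=0),
}
for name, est in models.items():
    pipe = make_pipeline(SimpleImputer(strategy="mean"), est)
    full = cross_val_score(pipe, X, y, cv=5, scoring="r2").mean()
    miss = cross_val_score(pipe, X_missing, y, cv=5, scoring="r2").mean()
    print(f"{name:18s} R2 full={full:.3f}  R2 with 20% missing={miss:.3f}")
```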

Development of benthic macroinvertebrate species distribution models using the Bayesian optimization (베이지안 최적화를 통한 저서성 대형무척추동물 종분포모델 개발)

  • Go, ByeongGeon; Shin, Jihoon; Cha, Yoonkyung
    • Journal of Korean Society of Water and Wastewater / v.35 no.4 / pp.259-275 / 2021
  • This study explored the usefulness and implications of Bayesian hyperparameter optimization in developing species distribution models (SDMs). A variety of machine learning (ML) algorithms, namely support vector machine (SVM), random forest (RF), boosted regression tree (BRT), XGBoost (XGB), and multilayer perceptron (MLP), were used for predicting the occurrence of four benthic macroinvertebrate species. The Bayesian optimization method successfully tuned model hyperparameters, with all ML models yielding an area under the curve (AUC) > 0.7. Hyperparameter search ranges that generally clustered around the optimal values also suggest the efficiency of Bayesian optimization in finding optimal sets of hyperparameters. Tree-based ensemble algorithms (BRT, RF, and XGB) tended to show higher performance than SVM and MLP. Important hyperparameters and their optimal values differed by species and ML model, indicating the necessity of hyperparameter tuning for improving individual model performance. The optimization results demonstrate that, for all macroinvertebrate species, SVM and RF required fewer trials to obtain optimal hyperparameter sets, leading to reduced computational cost compared with the other ML algorithms. The results of this study suggest that Bayesian optimization is an efficient method for hyperparameter optimization of machine learning algorithms.
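A minimal sketch of Bayesian hyperparameter optimization for an occurrence (presence/absence) classifier, here using Optuna's TPE sampler with a random forest and cross-validated AUC as the objective. The library, search space, and synthetic data are assumptions; the study may have used different tooling and ranges.

```python
# Illustrative Bayesian hyperparameter optimization sketch (Optuna + random forest).
import optuna                                  # assumes optuna is installed
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for environmental predictors and species occurrence labels.
X, y = make_classification(n_samples=500, n_features=12,
                           weights=[0.7, 0.3], random_state=0)

def objective(trial):
    params = {
        "n_estimators": trial.suggest_int("n_estimators", 100, 600),
        "max_depth": trial.suggest_int("max_depth", 3, 20),
        "min_samples_leaf": trial.suggest_int("min_samples_leaf", 1, 10),
    }
    model = RandomForestClassifier(random_state=0, **params)
    # Cross-validated AUC, matching the abstract's evaluation metric.
    return cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()

# The TPE sampler provides the sequential model-based (Bayesian) search.
study = optuna.create_study(direction="maximize",
                            sampler=optuna.samplers.TPESampler(seed=0))
study.optimize(objective, n_trials=30)
print("best AUC:", study.best_value)
print("best hyperparameters:", study.best_params)
```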

Identifying the Optimal Machine Learning Algorithm for Breast Cancer Prediction

  • ByungJoo Kim
    • International journal of advanced smart convergence / v.13 no.3 / pp.80-88 / 2024
  • Breast cancer remains a significant global health burden, necessitating accurate and timely detection for improved patient outcomes. Machine learning techniques have demonstrated remarkable potential in assisting breast cancer diagnosis by learning complex patterns from multi-modal patient data. This study comprehensively evaluates several popular machine learning models, including logistic regression, decision trees, random forests, support vector machines (SVMs), naive Bayes, k-nearest neighbors (KNN), XGBoost, and ensemble methods, for breast cancer prediction using the Wisconsin Breast Cancer Dataset (WBCD). Through rigorous benchmarking across metrics such as accuracy, precision, recall, F1-score, and area under the ROC curve (AUC), we identify the naive Bayes classifier as the top-performing model, achieving an accuracy of 0.974, an F1-score of 0.979, and the highest AUC of 0.988. Other strong performers include logistic regression, random forests, and XGBoost, with AUC values exceeding 0.95. Our findings showcase the significant potential of machine learning, particularly the robust naive Bayes algorithm, to provide highly accurate and reliable breast cancer screening from fine needle aspirate (FNA) samples, ultimately enabling earlier intervention and optimized treatment strategies.
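A brief sketch of this style of benchmark using scikit-learn's built-in copy of the Wisconsin (diagnostic) breast cancer data; the split, scaling, and model settings are illustrative choices, and the resulting numbers will not match the figures reported in the abstract.

```python
# Illustrative benchmark sketch on the scikit-learn breast cancer dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=0)

models = {
    "naive_bayes": make_pipeline(StandardScaler(), GaussianNB()),
    "logistic_regression": make_pipeline(StandardScaler(),
                                         LogisticRegression(max_iter=5000)),
    "random_forest": RandomForestClassifier(n_estimators=300, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    proba = model.predict_proba(X_te)[:, 1]
    print(f"{name:20s} acc={accuracy_score(y_te, pred):.3f} "
          f"f1={f1_score(y_te, pred):.3f} auc={roc_auc_score(y_te, proba):.3f}")
```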