• Title/Summary/Keyword: Error boosting

Boosted Regression Method based on Rejection Limits for Large-Scale Data (대량 데이터를 위한 제한거절 기반의 회귀부스팅 기법)

  • Kwon, Hyuk-Ho; Kim, Seung-Wook; Choi, Dong-Hoon; Lee, Kichun
    • Journal of Korean Institute of Industrial Engineers / v.42 no.4 / pp.263-269 / 2016
  • The purpose of this study is to tackle a computational regression-type problem, namely handling large-scale data, for which conventional metamodeling techniques often fail in practice. To solve such problems, regression-type boosting, an ensemble modeling technique, combined with bootstrapping-based re-sampling is a reasonable choice. This study proposes weight updates proportional to the residual itself and a new error decision criterion that constructs an ensemble from models selectively chosen by rejection limits. Based on these ideas, we propose AdaBoost.RMU.R as a metamodeling technique suitable for handling large-scale data. To assess the performance of the proposed method against existing methods, we used six mathematical problems. For each problem, we computed the average and standard deviation of the residuals between actual and predicted response values. The results revealed that both the average and the standard deviation of the residuals of AdaBoost.RMU.R improved over those of the other algorithms.
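
The following Python sketch illustrates the two ideas highlighted in the abstract, residual-driven weight updates and a rejection limit on weak learners, inside a generic AdaBoost-style regression loop. It is not the authors' AdaBoost.RMU.R implementation; the weight-update rule, the rejection criterion (median residual against a user-set threshold), and all parameter values are illustrative assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def boosted_regression(X, y, n_rounds=20, rejection_limit=1.0, random_state=0):
    """AdaBoost-style regression booster (illustrative sketch only):
    sample weights grow with the residual magnitude, and weak learners
    whose median residual exceeds a rejection limit are discarded.
    X, y are NumPy arrays."""
    rng = np.random.default_rng(random_state)
    n = len(y)
    w = np.full(n, 1.0 / n)                 # sample weights
    learners, alphas = [], []

    for _ in range(n_rounds):
        # bootstrapping-based re-sampling according to the current weights
        idx = rng.choice(n, size=n, replace=True, p=w)
        tree = DecisionTreeRegressor(max_depth=3).fit(X[idx], y[idx])

        resid = np.abs(y - tree.predict(X))
        # rejection limit: drop weak learners that are too inaccurate overall
        if np.median(resid) > rejection_limit:
            continue

        # weight update driven by the residual magnitude itself
        w *= np.exp(resid / (resid.max() + 1e-12))
        w /= w.sum()

        learners.append(tree)
        alphas.append(1.0 / (np.mean(resid) + 1e-12))   # accuracy-based vote

    def predict(X_new):
        preds = np.array([m.predict(X_new) for m in learners])
        return np.average(preds, axis=0, weights=alphas)

    return predict
```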

Data-Driven Modelling of Damage Prediction of Granite Using Acoustic Emission Parameters in Nuclear Waste Repository

  • Lee, Hang-Lo; Kim, Jin-Seop; Hong, Chang-Ho; Jeong, Ho-Young; Cho, Dong-Keun
    • Journal of Nuclear Fuel Cycle and Waste Technology (JNFCWT) / v.19 no.1 / pp.75-85 / 2021
  • Evaluating quantitative rock damage through acoustic emission (AE) has become a research focus. Most studies have used only one or two AE parameters to evaluate the degree of damage, and models combining several AE parameters have rarely been used. In this study, several data-driven models were employed to reflect the combined features of AE parameters. Through uniaxial compression tests, we obtained mechanical and AE-signal data for five granite specimens. The maximum amplitude, hits, counts, rise time, absolute energy, and initiation frequency, each expressed as a cumulative value, were selected as input parameters. The results showed that gradient boosting (GB) was the best of the data-driven models considered, including the support vector regression methods. When GB was applied to the testing data, the correlation coefficient (R) and root-mean-square error between the predicted and actual values were 0.96 and 0.077, respectively. A parameter analysis was performed to capture parameter significance, and it showed that cumulative absolute energy was the main parameter for damage prediction. Thus, AE has practical applicability for predicting rock damage without conducting mechanical tests. These results will be useful for monitoring the near-field rock mass of a nuclear waste repository.
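
As a rough companion to the workflow above, the sketch below trains a gradient-boosting regressor on cumulative AE features and reports an importance ranking. The file name, column names, and hyperparameters are placeholders, not the authors' dataset or settings.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Hypothetical column names for the cumulative AE input parameters and the damage label.
FEATURES = ["max_amplitude", "hits", "counts", "rise_time", "abs_energy", "init_freq"]

df = pd.read_csv("ae_granite.csv")                      # placeholder file name
X_train, X_test, y_train, y_test = train_test_split(
    df[FEATURES], df["damage"], test_size=0.2, random_state=42)

gb = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=3)
gb.fit(X_train, y_train)

pred = gb.predict(X_test)
rmse = mean_squared_error(y_test, pred) ** 0.5
r = np.corrcoef(y_test, pred)[0, 1]                     # correlation between actual and predicted
print(f"RMSE = {rmse:.3f}, R = {r:.3f}")

# Parameter significance, analogous to the importance analysis in the abstract
for name, imp in sorted(zip(FEATURES, gb.feature_importances_), key=lambda t: -t[1]):
    print(f"{name:>14s}: {imp:.3f}")
```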

Improved Feature Selection Techniques for Image Retrieval based on Metaheuristic Optimization

  • Johari, Punit Kumar; Gupta, Rajendra Kumar
    • International Journal of Computer Science & Network Security / v.21 no.1 / pp.40-48 / 2021
  • A Content-Based Image Retrieval (CBIR) system plays a vital role in retrieving images relevant to the user's perception from a huge database, which is a challenging task. Images are represented by a combination of low-level features derived from their visual content to form a feature vector. To reduce the search time over a large database during retrieval, a novel image retrieval technique based on feature dimensionality reduction is proposed, exploiting metaheuristic optimization techniques based on the Genetic Algorithm (GA), Extended Binary Cuckoo Search (EBCS), and the Whale Optimization Algorithm (WOA). Each image in the database is indexed using a feature vector comprising a fuzzified color histogram descriptor for color and a median binary pattern derived from the HSI color space for texture. Results are compared in terms of precision, recall, F-measure, accuracy, and error rate against benchmark classification algorithms (linear discriminant analysis, CatBoost, Extra Trees, Random Forest, Naive Bayes, light gradient boosting, extreme gradient boosting, k-NN, and Ridge) to validate the efficiency of the proposed approach. Finally, the techniques are ranked using TOPSIS to choose the best feature selection technique based on different model parameters.
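
The sketch below shows one generic genetic-algorithm feature-selection loop of the kind the abstract describes, using k-NN cross-validated accuracy (with a small sparsity penalty) as the fitness. It is not the authors' GA/EBCS/WOA pipeline; the population size, mutation rate, classifier, and fitness definition are assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def ga_feature_selection(X, y, pop_size=20, generations=30, p_mut=0.05, seed=0):
    """Evolve a binary feature mask that maximizes k-NN cross-validated accuracy.
    X is a (n_samples, n_features) NumPy array of image descriptors, y the labels."""
    rng = np.random.default_rng(seed)
    n_feat = X.shape[1]
    pop = rng.integers(0, 2, size=(pop_size, n_feat))

    def fitness(mask):
        if mask.sum() == 0:
            return 0.0
        clf = KNeighborsClassifier(n_neighbors=5)
        acc = cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()
        # small penalty on the number of selected features (dimensionality reduction)
        return acc - 0.01 * mask.sum() / n_feat

    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        order = np.argsort(scores)[::-1]
        parents = pop[order[: pop_size // 2]]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_feat)                # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n_feat) < p_mut            # bit-flip mutation
            child[flip] ^= 1
            children.append(child)
        pop = np.vstack([parents, np.array(children)])

    scores = np.array([fitness(ind) for ind in pop])
    return pop[scores.argmax()].astype(bool)             # best feature mask
```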

Prediction of Cryogenic- and Room-Temperature Deformation Behavior of Rolled Titanium using Machine Learning (타이타늄 압연재의 기계학습 기반 극저온/상온 변형거동 예측)

  • S. Cheon; J. Yu; S.H. Lee; M.-S. Lee; T.-S. Jun; T. Lee
    • Transactions of Materials Processing / v.32 no.2 / pp.74-80 / 2023
  • The deformation behavior of commercially pure titanium (CP-Ti) is highly dependent on material and processing parameters, such as deformation temperature, deformation direction, and strain rate. This study aims to predict the multivariable and nonlinear tensile behavior of CP-Ti using machine learning based on three algorithms: artificial neural network (ANN), light gradient boosting machine (LGBM), and long short-term memory (LSTM). The predictive accuracy for tensile behavior at cryogenic temperature was lower than that at room temperature because of larger data scattering in the training dataset used for machine learning. Although LGBM showed the lowest root mean squared error, it was not the best strategy owing to overfitting and a step-function morphology different from the actual data. LSTM performed best, as it effectively learned the continuous characteristics of a flow curve and required less training time, even without a sufficient database or hyperparameter tuning.
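
A minimal Keras sketch of LSTM-based flow-curve regression is shown below, assuming each curve is stored as a fixed-length sequence of per-step features (e.g., strain, temperature, direction, strain rate) with the flow stress as the target. The data layout, architecture, and hyperparameters are illustrative, not those of the paper.

```python
import numpy as np
import tensorflow as tf

# Assumed data layout: X has shape (n_curves, n_steps, n_features); y has shape
# (n_curves, n_steps) and holds the flow stress at each strain step.
n_curves, n_steps, n_features = 60, 200, 4
X = np.random.rand(n_curves, n_steps, n_features).astype("float32")   # placeholder data
y = np.random.rand(n_curves, n_steps).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_steps, n_features)),
    tf.keras.layers.LSTM(64, return_sequences=True),   # one output per strain step
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y[..., None], epochs=50, batch_size=8, validation_split=0.2, verbose=0)

rmse = float(np.sqrt(model.evaluate(X, y[..., None], verbose=0)))
print(f"Training RMSE: {rmse:.4f}")
```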

Assessment of maximum liquefaction distance using soft computing approaches

  • Kishan Kumar; Pijush Samui; Shiva S. Choudhary
    • Geomechanics and Engineering / v.37 no.4 / pp.395-418 / 2024
  • The epicentral region of an earthquake is typically where liquefaction-related damage takes place. To determine the maximum distance, such as the maximum epicentral distance (Re), maximum fault distance (Rf), or maximum hypocentral distance (Rh), at which an earthquake of a given magnitude can inflict damage, this study builds multiple ML models on a recently updated global liquefaction database to predict these limiting distances (Re, Rf, or Rh). Four machine learning models, LSTM (Long Short-Term Memory), BiLSTM (Bidirectional Long Short-Term Memory), CNN (Convolutional Neural Network), and XGB (Extreme Gradient Boosting), are developed using the Python programming language. All four proposed ML models performed better than empirical models for limiting-distance assessment, and among them the XGB model outperformed all the others. To determine how well the suggested models predict limiting distances, a number of statistical parameters were studied, and rank analysis, an error matrix, and a Taylor diagram were developed to compare the accuracy of the proposed models. The ML models proposed in this paper are more robust than other current models and may be used to assess the minimum energy of an earthquake-induced liquefaction disaster or to estimate the maximum distance of a liquefied site for a given earthquake in rapid disaster mapping.
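
A minimal sketch of the XGBoost regression step is given below; the database file, column names, the log transform of the limiting distance, and the hyperparameters are assumptions for illustration, not the study's actual setup.

```python
import numpy as np
import pandas as pd
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

df = pd.read_csv("liquefaction_db.csv")       # placeholder for the global liquefaction database
X = df[["magnitude"]]                         # predictors; magnitude alone here for simplicity
y = np.log10(df["Re_km"])                     # limiting epicentral distance, log-scaled (assumption)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)

xgb = XGBRegressor(n_estimators=500, learning_rate=0.05, max_depth=4, subsample=0.8)
xgb.fit(X_tr, y_tr)

pred = xgb.predict(X_te)
print("R2  :", r2_score(y_te, pred))
print("RMSE:", mean_squared_error(y_te, pred) ** 0.5)
```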

Comparison of data mining methods with daily lens data (데일리 렌즈 데이터를 사용한 데이터마이닝 기법 비교)

  • Seok, Kyungha; Lee, Taewoo
    • Journal of the Korean Data and Information Science Society / v.24 no.6 / pp.1341-1348 / 2013
  • To solve classification problems, various data mining techniques have been applied to database marketing, credit scoring, and market forecasting. In this paper, we compare techniques such as bagging, boosting, LASSO, random forest, and support vector machine (SVM) on daily lens transaction data. The classical techniques, decision tree and logistic regression, are used as well. The experiment shows that the random forest has a slightly smaller misclassification rate and standard error than the other methods. The performance of the SVM is good in terms of misclassification rate but poor in terms of standard error. Taking model interpretation and computing time into consideration, we conclude that the LASSO gives the best result.
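
The comparison protocol can be sketched with scikit-learn as below; the lens transaction data are not public, so random placeholder data stand in, and the specific estimators and fold count are assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import BaggingClassifier, AdaBoostClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

models = {
    "bagging": BaggingClassifier(n_estimators=100),
    "boosting": AdaBoostClassifier(n_estimators=100),
    "LASSO (L1 logistic)": LogisticRegression(penalty="l1", solver="liblinear"),
    "random forest": RandomForestClassifier(n_estimators=200),
    "SVM": SVC(kernel="rbf", C=1.0),
    "decision tree": DecisionTreeClassifier(max_depth=5),
    "logistic regression": LogisticRegression(max_iter=1000),
}

# X, y stand in for the daily lens transaction data (not public).
X, y = np.random.rand(500, 10), np.random.randint(0, 2, 500)

for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=10)
    err = 1 - acc                                # misclassification rate per fold
    se = err.std(ddof=1) / np.sqrt(len(err))     # standard error across folds
    print(f"{name:>20s}: error = {err.mean():.3f} +/- {se:.3f}")
```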

Research on Insurance Claim Prediction Using Ensemble Learning-Based Dynamic Weighted Allocation Model (앙상블 러닝 기반 동적 가중치 할당 모델을 통한 보험금 예측 인공지능 연구)

  • Jong-Seok Choi
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.17 no.4 / pp.221-228 / 2024
  • Predicting insurance claims is a key task for insurance companies to manage risk and maintain financial stability. Accurate insurance claim predictions enable insurers to set appropriate premiums, reduce unexpected losses, and improve the quality of customer service. This study aims to enhance the performance of insurance claim prediction models by applying ensemble learning techniques. The predictive performance of models such as Random Forest, Gradient Boosting Machine (GBM), XGBoost, Stacking, and the proposed Dynamic Weighted Ensemble (DWE) model was compared and analyzed. Model performance was evaluated using Mean Absolute Error (MAE), Mean Squared Error (MSE), and the Coefficient of Determination (R²). Experimental results showed that the DWE model outperformed the others on these evaluation metrics, achieving optimal predictive performance by combining the predictions of Random Forest, XGBoost, LR, and LightGBM. This study demonstrates that ensemble learning techniques are effective in improving the accuracy of insurance claim predictions and suggests the potential of AI-based predictive models in the insurance industry.
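
The paper's exact dynamic weighting rule is not given in the abstract; the sketch below shows one plausible reading, a weighted average of the four base models with weights set inversely proportional to each model's validation MAE. The model choices (including reading LR as linear regression) and hyperparameters are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from xgboost import XGBRegressor
from lightgbm import LGBMRegressor

def fit_dynamic_weighted_ensemble(X_tr, y_tr, X_val, y_val):
    """Weighted-average ensemble whose weights come from validation MAE.
    One plausible reading of a dynamic weighting scheme, not the paper's exact DWE."""
    base_models = [
        RandomForestRegressor(n_estimators=300, random_state=0),
        XGBRegressor(n_estimators=300, learning_rate=0.05),
        LinearRegression(),
        LGBMRegressor(n_estimators=300, learning_rate=0.05),
    ]
    for m in base_models:
        m.fit(X_tr, y_tr)

    # weight each model by the inverse of its validation MAE, then normalize
    errors = np.array([mean_absolute_error(y_val, m.predict(X_val)) for m in base_models])
    weights = 1.0 / (errors + 1e-12)
    weights /= weights.sum()

    def predict(X_new):
        preds = np.column_stack([m.predict(X_new) for m in base_models])
        return preds @ weights

    return predict, weights
```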

A study on the performance prediction technique of the dual-thrust rocket motor (이중 추력형 로켓모타의 성능예측 기법 연구)

  • 이도형
    • Journal of the Korean Society of Propulsion Engineers / v.5 no.2 / pp.38-43 / 2001
  • In this study, a technique for predicting the performance of a finocyl-type dual-thrust rocket motor is developed, and the predicted data are compared with those of static firing tests. The prediction is carried out by separately calculating the grain burning area and the performance of the rocket motor. When predicting the performance of a dual-thrust rocket motor, different correction factors should be used for the boost and sustain phases; otherwise, prediction errors will follow. Re-prediction using separate correction factors shows good agreement with the test data, within 0.5% error.
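
The role of phase-specific correction factors can be illustrated with the standard equilibrium chamber-pressure relation Pc = (ρp·a·c*·Ab/At)^(1/(1-n)); the sketch below applies a different correction factor in the boost and sustain phases. All property values and the two correction factors are placeholders, not those of the motor in the paper.

```python
# Standard equilibrium chamber-pressure relation (SI units), with phase-specific
# correction factors applied as described above.  Property values are placeholders.
RHO_P = 1750.0      # propellant density, kg/m^3
A_COEF = 3.5e-5     # burn-rate coefficient a in r = a * Pc**n (m/s with Pc in Pa)
N_EXP = 0.35        # pressure exponent n
C_STAR = 1550.0     # characteristic velocity, m/s
A_THROAT = 1.2e-3   # nozzle throat area, m^2

CORRECTION = {"boost": 0.96, "sustain": 0.92}   # hypothetical phase-specific factors

def chamber_pressure(burn_area_m2: float, phase: str) -> float:
    """Predicted chamber pressure [Pa] for a given instantaneous burning area."""
    pc_ideal = (RHO_P * A_COEF * C_STAR * burn_area_m2 / A_THROAT) ** (1.0 / (1.0 - N_EXP))
    return CORRECTION[phase] * pc_ideal

# The boost phase exposes a larger burning area (finocyl slots), the sustain phase a smaller one.
print(f"boost  : {chamber_pressure(0.55, 'boost') / 1e6:.2f} MPa")
print(f"sustain: {chamber_pressure(0.20, 'sustain') / 1e6:.2f} MPa")
```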

Effect of input variable characteristics on the performance of an ensemble machine learning model for algal bloom prediction (앙상블 머신러닝 모형을 이용한 하천 녹조발생 예측모형의 입력변수 특성에 따른 성능 영향)

  • Kang, Byeong-Koo; Park, Jungsu
    • Journal of Korean Society of Water and Wastewater / v.35 no.6 / pp.417-424 / 2021
  • Algal bloom is an ongoing issue in the management of freshwater systems for drinking water supply, and the chlorophyll-a concentration is commonly used to represent the status of algal bloom. Thus, the prediction of chlorophyll-a concentration is essential for proper water quality management. However, the chlorophyll-a concentration is affected by various water quality and environmental factors, so predicting it is not an easy task. In recent years, many advanced machine learning algorithms have increasingly been used to develop surrogate models to predict the chlorophyll-a concentration in freshwater systems such as rivers or reservoirs. This study used a light gradient boosting machine (LightGBM), a gradient boosting decision tree algorithm, to develop an ensemble machine learning model to predict chlorophyll-a concentration. The field water quality data observed at Daecheong Lake, obtained from the real-time water information system in Korea, were used for the development of the model. The data include temperature, pH, electric conductivity, dissolved oxygen, total organic carbon, total nitrogen, total phosphorus, and chlorophyll-a. First, a LightGBM model was developed to predict the chlorophyll-a concentration using the other seven items as independent input variables. Second, the time-lagged values of all the input variables were added as input variables to understand the effect of the time lag of input variables on model performance. The time lag (i) ranges from 1 to 50 days. Model performance was evaluated using three indices: the root mean squared error to observation standard deviation ratio (RSR), the Nash-Sutcliffe coefficient of efficiency (NSE), and the mean absolute error (MAE). The model showed the best performance when a dataset with a one-day time lag (i=1) was added, where RSR, NSE, and MAE were 0.359, 0.871, and 1.510, respectively. Improvement of model performance was observed when datasets with time lags of up to about 15 days (i=15) were added.
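
The lagged-feature construction and evaluation can be sketched with pandas and LightGBM as below; the file name, column names, the chronological train/test split, and the hyperparameters are assumptions, not the study's configuration.

```python
import numpy as np
import pandas as pd
from lightgbm import LGBMRegressor
from sklearn.metrics import mean_absolute_error

BASE_VARS = ["temp", "pH", "EC", "DO", "TOC", "TN", "TP"]   # placeholder column names

def add_lagged_features(df: pd.DataFrame, lag: int) -> pd.DataFrame:
    """Append i-day-lagged copies of every input variable."""
    lagged = df[BASE_VARS].shift(lag).add_suffix(f"_lag{lag}")
    return pd.concat([df, lagged], axis=1).dropna()

df = pd.read_csv("daecheong_daily.csv", parse_dates=["date"]).set_index("date")

for lag in (1, 15, 30):
    data = add_lagged_features(df, lag)
    X, y = data.drop(columns=["chl_a"]), data["chl_a"]
    split = int(len(data) * 0.8)                        # chronological train/test split
    model = LGBMRegressor(n_estimators=500, learning_rate=0.05)
    model.fit(X.iloc[:split], y.iloc[:split])
    pred = model.predict(X.iloc[split:])
    obs = y.iloc[split:]
    rmse = float(np.sqrt(np.mean((obs - pred) ** 2)))
    rsr = rmse / obs.std()                              # RMSE / observation std. dev.
    nse = 1 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)
    mae = mean_absolute_error(obs, pred)
    print(f"lag={lag:2d}: RSR={rsr:.3f}  NSE={nse:.3f}  MAE={mae:.3f}")
```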

Comparison of Handball Result Predictions Using Bagging and Boosting Algorithms (배깅과 부스팅 알고리즘을 이용한 핸드볼 결과 예측 비교)

  • Kim, Ji-eung; Park, Jong-chul; Kim, Tae-gyu; Lee, Hee-hwa; Ahn, Jee-Hwan
    • Journal of the Korea Convergence Society / v.12 no.8 / pp.279-286 / 2021
  • The purpose of this study is to compare the predictive power of the bagging and boosting ensemble algorithms based on motion information from women's handball matches and to analyze the usefulness of motion information. To this end, this study analyzed the power of the Random Forest and AdaBoost algorithms to predict the results of 15 practice matches based on inertial motion data. The results of the study are as follows. First, the prediction rate of the Random Forest algorithm was 66.9 ± 0.1%, and the prediction rate of the AdaBoost algorithm was 65.6 ± 1.6%. Second, Random Forest predicted all of the winning results but none of the losing results, whereas the AdaBoost algorithm predicted 91.4% of wins and 10.4% of losses. Third, in verifying the suitability of the algorithms, Random Forest showed no overfitting error, but AdaBoost did. Based on these results, motion information is highly useful for predicting sports outcomes, and the Random Forest algorithm was confirmed to be superior to the AdaBoost algorithm.
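
A minimal scikit-learn sketch of the Random Forest versus AdaBoost comparison, including a simple train/test accuracy gap as an overfitting check, is shown below; the motion features are random placeholders and all settings are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix

# X stands in for per-match inertial-motion features; y is 1 for a win, 0 for a loss.
X, y = np.random.rand(300, 12), np.random.randint(0, 2, 300)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=7, stratify=y)

for name, clf in [("Random Forest", RandomForestClassifier(n_estimators=300, random_state=7)),
                  ("AdaBoost", AdaBoostClassifier(n_estimators=300, random_state=7))]:
    clf.fit(X_tr, y_tr)
    train_acc = accuracy_score(y_tr, clf.predict(X_tr))
    test_acc = accuracy_score(y_te, clf.predict(X_te))
    print(f"{name}: train={train_acc:.3f}  test={test_acc:.3f}  "
          f"gap={train_acc - test_acc:.3f}")            # a large gap suggests overfitting
    print(confusion_matrix(y_te, clf.predict(X_te)))    # win/loss prediction breakdown
```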