• Title/Summary/Keyword: XGBoost

Search Result 238, Processing Time 0.023 seconds

Error Characteristic Analysis and Correction Technique Study for One-month Temperature Forecast Data (1개월 기온 예측자료의 오차 특성 분석 및 보정 기법 연구)

  • Yongseok Kim;Jina Hur;Eung-Sup Kim;Kyo-Moon Shim;Sera Jo;Min-Gu Kang
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.25 no.4
    • /
    • pp.368-375
    • /
    • 2023
  • In this study, we examined the error characteristics of, and bias correction methods for, one-month temperature forecast data produced through joint development between the Rural Development Administration and the Hong Kong University of Science and Technology. For this purpose, hindcast data from 2013 to 2021, weather observation data, and various environmental information were collected, and error characteristics under various environmental conditions were analyzed. For both maximum and minimum temperatures, the forecast error grew with elevation and latitude. On average, the RMSE of the forecast data corrected by the linear regression model and by XGBoost decreased by 0.203 and 0.438 (maximum temperature) and by 0.069 and 0.390 (minimum temperature), respectively, compared to the uncorrected forecast data. Overall, XGBoost improved the errors more than the linear regression model. This study shows that errors in the forecast data are affected by topographical conditions, and that machine learning methods such as XGBoost can effectively reduce these errors by taking various environmental factors into account.
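The bias-correction step described in this abstract can be sketched with a simple linear-regression correction on synthetic data; the elevation/latitude effects and all coefficients below are illustrative, not the study's values:

```python
import numpy as np

# Hypothetical sketch: model the forecast error from environmental
# covariates (elevation, latitude), then subtract the predicted error.
rng = np.random.default_rng(0)
n = 500
elevation = rng.uniform(0, 1500, n)      # m
latitude = rng.uniform(33, 39, n)        # deg N
observed = 15 + rng.normal(0, 1, n)
# Error grows with elevation and latitude, as the study reports.
forecast = observed + 0.002 * elevation + 0.3 * (latitude - 33) + rng.normal(0, 0.5, n)

X = np.column_stack([np.ones(n), elevation, latitude])
coef, *_ = np.linalg.lstsq(X, forecast - observed, rcond=None)
corrected = forecast - X @ coef          # remove the systematic part of the error

rmse = lambda a, b: float(np.sqrt(np.mean((a - b) ** 2)))
print(rmse(forecast, observed), rmse(corrected, observed))
```

The study's XGBoost variant replaces the linear model with a gradient-boosted regressor over the same covariates, which is what lets it capture nonlinear terrain effects.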

Quality Prediction Model for Manufacturing Process of Free-Machining 303-series Stainless Steel Small Rolling Wire Rods (쾌삭 303계 스테인리스강 소형 압연 선재 제조 공정의 생산품질 예측 모형)

  • Seo, Seokjun;Kim, Heungseob
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.44 no.4
    • /
    • pp.12-22
    • /
    • 2021
  • This article proposes a machine learning model, i.e., a classifier, for predicting the production quality of free-machining 303-series stainless steel (STS303) small rolling wire rods according to the operating conditions of the manufacturing process. To develop the classifier, manufacturing data for 37 operating variables were collected from the manufacturing execution system (MES) of Company S, and 12 types of derived variables were generated based on a literature review and interviews with field experts. The research proceeded through data preprocessing, exploratory data analysis, feature selection, machine learning modeling, and the evaluation of alternative models. In the preprocessing stage, missing values and outliers were removed, and SMOTE (synthetic minority over-sampling technique) was applied to resolve the data imbalance. Features were selected by the variable importance of LASSO (least absolute shrinkage and selection operator) regression, extreme gradient boosting (XGBoost), and random forest models. Finally, logistic regression, support vector machine (SVM), random forest, and XGBoost classifiers were developed to label products under new operating conditions as adequate or defective. The optimal hyperparameters for each model were investigated by grid search and random search based on k-fold cross-validation. In the experiments, XGBoost showed relatively high predictive performance compared to the other models, with an accuracy of 0.9929, specificity of 0.9372, F1-score of 0.9963, and logarithmic loss of 0.0209. The classifier developed in this study is expected to improve productivity by enabling effective management of the manufacturing process for STS303 small rolling wire rods.
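The SMOTE oversampling step in the preprocessing stage can be illustrated with a minimal hand-rolled sketch (class sizes are toy values; in practice one would use an implementation such as imbalanced-learn's):

```python
import numpy as np

def smote(X_min, n_new, k=5, rng=None):
    """Minimal SMOTE sketch: synthesize minority samples by interpolating
    between each sample and one of its k nearest minority neighbours."""
    rng = rng or np.random.default_rng(0)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]        # skip the point itself
        j = rng.choice(nbrs)
        lam = rng.random()                   # random point on the segment
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(out)

rng = np.random.default_rng(1)
X_majority = rng.normal(0, 1, (95, 4))   # "adequate" products (toy data)
X_minority = rng.normal(3, 1, (5, 4))    # "defective" products (toy data)
X_syn = smote(X_minority, n_new=90, rng=rng)
print(len(X_minority) + len(X_syn), len(X_majority))  # balanced: 95 95
```

Oversampling is applied only to the training folds so that the synthetic points never leak into evaluation.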

A Comparative Study on the Methodology of Failure Detection of Reefer Containers Using PCA and Feature Importance (PCA 및 변수 중요도를 활용한 냉동컨테이너 고장 탐지 방법론 비교 연구)

  • Lee, Seunghyun;Park, Sungho;Lee, Seungjae;Lee, Huiwon;Yu, Sungyeol;Lee, Kangbae
    • Journal of the Korea Convergence Society
    • /
    • v.13 no.3
    • /
    • pp.23-31
    • /
    • 2022
  • This study analyzed actual reefer container operation data for Starcool units provided by H Shipping. Through interviews with H's field experts, only Critical and Fatal alarms among the four failure alarms were defined as failures, and it was confirmed that, given the nature of reefer containers, using all variables is cost-inefficient. This study therefore proposes a method for detecting reefer container failures using feature importance and PCA. To improve model performance, variables are selected by the feature importance of tree-based models such as XGBoost and LightGBM, and PCA is used to reduce the dimensionality of the full variable set for each model. With the boosting-based XGBoost and LightGBM techniques, the models proposed in this study improved recall by 0.36 and 0.39, respectively, compared to supervised learning using all 62 variables.
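The PCA dimension-reduction step can be sketched via SVD on toy data standing in for the 62 container variables (the component count and data are illustrative, not the study's):

```python
import numpy as np

# PCA via SVD compresses the 62 sensor variables into a few components
# before fitting a failure classifier.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 62))           # toy stand-in for reefer variables
Xc = X - X.mean(axis=0)                  # PCA requires centred data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 10
X_reduced = Xc @ Vt[:k].T                # scores on the first k components
explained = (S[:k] ** 2).sum() / (S ** 2).sum()
print(X_reduced.shape, round(float(explained), 3))
```

The reduced scores (or, in the alternative branch, the features ranked most important by the tree models) then feed the downstream classifier.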

Darknet Traffic Detection and Classification Using Gradient Boosting Techniques (Gradient Boosting 기법을 활용한 다크넷 트래픽 탐지 및 분류)

  • Kim, Jihye;Lee, Soo Jin
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.32 no.2
    • /
    • pp.371-379
    • /
    • 2022
  • The darknet's characteristic anonymity and security lead to its continual abuse for various crimes and illegal activities, so detecting and classifying darknet traffic is very important for preventing such misuse. This work proposes a novel approach that uses gradient boosting techniques for darknet traffic detection and classification. The XGBoost and LightGBM algorithms achieve a detection accuracy of 99.99% and a classification accuracy of over 99%, which is more than 3% higher in detection accuracy and over 13% higher in classification accuracy than previous research. In particular, LightGBM detects and classifies darknet traffic in a way that is superior to XGBoost, reducing training time by about 1.6 times and hyperparameter tuning time by more than 10 times.
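Gradient boosting itself, the technique underlying both XGBoost and LightGBM, can be illustrated with a minimal stage-wise sketch using decision stumps on toy binary data; this is a pedagogical stand-in, not the paper's setup:

```python
import numpy as np

def fit_stump(X, r):
    """Best single-feature threshold split minimizing squared error on residuals r."""
    best = (np.inf, 0, 0.0, r.mean(), r.mean())
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j])[:-1]:
            left, right = r[X[:, j] <= t], r[X[:, j] > t]
            sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
            if sse < best[0]:
                best = (sse, j, t, left.mean(), right.mean())
    return best[1:]

def gbm_predict(X, stumps, lr):
    pred = np.zeros(len(X))
    for j, t, lval, rval in stumps:
        pred += lr * np.where(X[:, j] <= t, lval, rval)
    return pred

# Toy "darknet vs benign" labels; boosting fits residuals stage-wise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

stumps, lr = [], 0.3
for _ in range(50):
    r = y - gbm_predict(X, stumps, lr)   # residuals = negative gradient of squared loss
    stumps.append(fit_stump(X, r))

acc = ((gbm_predict(X, stumps, lr) > 0.5) == y).mean()
print(round(float(acc), 2))
```

XGBoost and LightGBM refine this scheme with regularized second-order objectives and, in LightGBM's case, histogram binning and leaf-wise growth, which is where its speed advantage comes from.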

A LightGBM and XGBoost Learning Method for Postoperative Critical Illness Key Indicators Analysis

  • Lei Han;Yiziting Zhu;Yuwen Chen;Guoqiong Huang;Bin Yi
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.8
    • /
    • pp.2016-2029
    • /
    • 2023
  • Accurate prediction of critical illness is significant for ensuring the lives and health of patients. The selection of indicators affects both the real-time capability and the accuracy of critical-illness prediction, but the diversity and complexity of these indicators make it difficult to find potential connections between them and critical illnesses. For the first time, this study proposes an indicator analysis model to extract key indicators from the preoperative and intraoperative clinical indicators and laboratory results of critical illnesses. Preoperative and intraoperative data on heart failure and respiratory failure are used to verify the model, which processes the data and extracts key indicators through four parts. To test its effectiveness, the key indicators are used to predict the two critical illnesses with light gradient boosting machine (LightGBM) and eXtreme Gradient Boosting (XGBoost) classifiers. The predictive performance using the key indicators is better than that using all indicators. In predicting heart failure, LightGBM and XGBoost have sensitivities of 0.889 and 0.892 and specificities of 0.939 and 0.937, respectively. For respiratory failure, LightGBM and XGBoost have sensitivities of 0.709 and 0.689 and specificities of 0.936 and 0.940, respectively. The proposed model can effectively analyze the correlation between indicators and postoperative critical illness, making it possible to find the key indicators for postoperative critical illnesses. It is meaningful for assisting doctors in extracting key indicators in time and improving the reliability and efficiency of prediction.
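The sensitivity and specificity figures reported here come straight from the confusion matrix; a minimal sketch on toy labels (not the study's data):

```python
import numpy as np

# Toy predictions: 1 = critical illness, 0 = no critical illness.
y_true = np.array([1, 1, 1, 0, 0, 0, 0, 1, 0, 1])
y_pred = np.array([1, 1, 0, 0, 0, 1, 0, 1, 0, 1])

tp = int(((y_pred == 1) & (y_true == 1)).sum())
fn = int(((y_pred == 0) & (y_true == 1)).sum())
tn = int(((y_pred == 0) & (y_true == 0)).sum())
fp = int(((y_pred == 1) & (y_true == 0)).sum())

sensitivity = tp / (tp + fn)   # recall on the critical-illness class
specificity = tn / (tn + fp)   # recall on the healthy class
print(sensitivity, specificity)  # 0.8 0.8
```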

A Study on the Analysis of Factors for the Golden Glove Award by using Machine Learning (머신러닝을 이용한 골든글러브 수상 요인 분석에 대한 연구)

  • Uem, Daeyeob;Kim, Seongyong
    • The Journal of the Korea Contents Association
    • /
    • v.22 no.5
    • /
    • pp.48-56
    • /
    • 2022
  • The importance of data analysis in baseball has been growing since the success of MLB's Oakland Athletics, which applied Billy Beane's Moneyball theory, and of the NC Dinos, the 2020 KBO champions. Various studies using baseball data have been conducted not only in the United States but also in Korea; in particular, models using deep learning and machine learning have been suggested. However, previous studies using deep learning and machine learning focus only on predicting the win or loss of a game, with the limitation that it is difficult to interpret which factors have an important influence on the result. In this paper, to investigate which factors are important by position, a prediction model is developed for the Golden Glove award, which is given to the best player at each position. The model uses XGBoost, a boosting method that also provides feature importances, which can be used to interpret the factors behind the prediction results. From the analysis, the important factors for each position are identified.
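The feature-importance interpretation the paper relies on can be illustrated with permutation importance, a simple model-agnostic stand-in for XGBoost's built-in importances; the player statistics below are invented for the sketch:

```python
import numpy as np

# Hypothetical stats: batting average, home runs, fielding errors.
rng = np.random.default_rng(0)
n = 300
avg, hr, err = rng.random(n), rng.random(n), rng.random(n)
award = (0.7 * avg + 0.3 * hr > 0.55).astype(float)   # errors are irrelevant here
X = np.column_stack([avg, hr, err])

# Simple linear scorer thresholded at 0.5 stands in for the classifier.
coef, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), X]), award, rcond=None)
predict = lambda M: (np.column_stack([np.ones(len(M)), M]) @ coef > 0.5)
base_acc = (predict(X) == award).mean()

importance = []
for j in range(3):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])       # break this feature's signal
    importance.append(base_acc - (predict(Xp) == award).mean())
print([round(float(v), 2) for v in importance])
```

Features whose permutation costs the most accuracy are the important ones, which is how "important factors by position" can be read off the fitted model.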

A Study on the Prediction Model for Analysis of Water Quality in Gwangju Stream using Machine Learning Algorithm (머신러닝 학습 알고리즘을 이용한 광주천 수질 분석에 대한 예측 모델 연구)

  • Yu-Jeong Jeong;Jung-Jae Lee
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.19 no.3
    • /
    • pp.531-538
    • /
    • 2024
  • While the importance of the water quality environment is increasingly emphasized, the water quality index for improving the urban rivers of Gwangju Metropolitan City is an important factor affecting the aquatic ecosystem and requires accurate prediction. In this paper, the XGBoost and LightGBM machine learning algorithms were used to compare predictive performance on the water quality inspection items of two important points of Gwangju Stream: the downstream Pyeongchon Bridge and the upstream BanghakBr_Gwangjucheon1 water systems. Based on statistical verification, three water quality indicators, total nitrogen (TN), nitrate (NO3), and ammonia (NH3), were predicted, and the performance of the predictive models was evaluated using RMSE, a regression model evaluation index. After implementing individual models for each water system and comparing performance with cross-validation, the XGBoost model showed the better predictive ability.
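The cross-validated RMSE comparison can be sketched as follows, with a fold-mean baseline standing in for the XGBoost/LightGBM regressors; the series is synthetic, not Gwangju Stream data:

```python
import numpy as np

rng = np.random.default_rng(0)
tn = rng.normal(3.0, 0.5, 100)      # hypothetical total-nitrogen series (mg/L)

def kfold_rmse(y, k=5):
    """Average RMSE over k folds; each fold is predicted from the rest."""
    idx = np.arange(len(y))
    folds = np.array_split(idx, k)
    scores = []
    for f in folds:
        train = np.setdiff1d(idx, f)
        pred = y[train].mean()               # baseline model: fold-train mean
        scores.append(np.sqrt(np.mean((y[f] - pred) ** 2)))
    return float(np.mean(scores))

score = kfold_rmse(tn)
print(round(score, 3))
```

Swapping the mean predictor for each boosted model and comparing the averaged RMSEs per water system reproduces the evaluation design described above.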

Development of a Machine Learning Model for Imputing Time Series Data with Massive Missing Values (결측치 비율이 높은 시계열 데이터 분석 및 예측을 위한 머신러닝 모델 구축)

  • Bangwon Ko;Yong Hee Han
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.17 no.3
    • /
    • pp.176-182
    • /
    • 2024
  • In this study, we compared and analyzed various methods of handling missing data in order to build a machine learning model that can effectively analyze and predict time series data with a high percentage of missing values. For this purpose, Predictive State Model Filtering (PSMF), MissForest, and Imputation By Feature Importance (IBFI) were applied, and their prediction performance was evaluated using the LightGBM, XGBoost, and Explainable Boosting Machine (EBM) models. The results showed that MissForest and IBFI, which reflect nonlinear data patterns, performed best among the missing-value handling methods, and that the XGBoost and EBM models performed better than LightGBM. This study emphasizes the importance of combining nonlinear imputation methods with machine learning models when analyzing and predicting time series data with a high percentage of missing values, and it provides a practical methodology.
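The value of model-based imputation over naive filling, which is the spirit of MissForest and IBFI, can be sketched on toy data with roughly 40% missingness; the regression fill here is a deliberately simple stand-in for the tree-based imputers:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
x = rng.normal(size=n)
y_true = 2 * x + rng.normal(0, 0.3, n)       # y is predictable from x
y = y_true.copy()
miss = rng.random(n) < 0.4                   # high missing-value ratio
y[miss] = np.nan

# Baseline: column-mean imputation ignores the structure entirely.
y_mean = np.where(miss, np.nanmean(y), y)

# Model-based: regress y on x over observed rows, predict the gaps.
obs = ~miss
A = np.column_stack([np.ones(obs.sum()), x[obs]])
coef, *_ = np.linalg.lstsq(A, y[obs], rcond=None)
y_model = np.where(miss, coef[0] + coef[1] * x, y)

err = lambda est: float(np.sqrt(np.mean((est[miss] - y_true[miss]) ** 2)))
print(err(y_mean), err(y_model))
```

MissForest iterates this idea with random forests over all columns, which is what lets it capture the nonlinear patterns the study highlights.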

A Recidivism Prediction Model Based on XGBoost Considering Asymmetric Error Costs (비대칭 오류 비용을 고려한 XGBoost 기반 재범 예측 모델)

  • Won, Ha-Ram;Shim, Jae-Seung;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.127-137
    • /
    • 2019
  • Recidivism prediction has been a subject of constant research by experts since the early 1970s, and it has become more important as crimes committed by recidivists steadily increase. In particular, after the US and Canada adopted the 'Recidivism Risk Assessment Report' as a decisive criterion in trials and parole screening in the 1990s, research on recidivism prediction became more active, and in the same period empirical studies on recidivism factors began in Korea as well. Although most recidivism prediction studies have so far focused on the factors of recidivism or on prediction accuracy, it is also important to minimize the misclassification cost, because recidivism prediction has an asymmetric error cost structure: the cost of misclassifying a non-recidivist as a recidivist is generally lower than the cost of misclassifying a future recidivist as safe, since the former only adds monitoring costs while the latter incurs social and economic costs. Therefore, this paper proposes an XGBoost (eXtreme Gradient Boosting; XGB) based recidivism prediction model that considers asymmetric error costs. In the first step of the model, XGB, recognized as a high-performance ensemble method in the field of data mining, was applied, and its results were compared with various prediction models such as logistic regression (LOGIT), decision trees (DT), artificial neural networks (ANN), and support vector machines (SVM). In the next step, the classification threshold was optimized to minimize the total misclassification cost, a weighted average of the false negative error (FNE) and false positive error (FPE). To verify its usefulness, the model was applied to a real recidivism prediction dataset. As a result, the XGB model not only showed better prediction accuracy than the other models but also reduced the misclassification cost most effectively.
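The threshold-optimization step can be sketched as a grid search over cutoffs that minimizes a weighted misclassification cost; the scores and the 5:1 cost ratio below are illustrative, not the paper's values:

```python
import numpy as np

# Toy predicted recidivism probabilities for 500 cases.
rng = np.random.default_rng(0)
y = rng.random(500) < 0.3                    # True = recidivist
p = np.clip(0.3 + 0.4 * (y - 0.3) + rng.normal(0, 0.15, 500), 0, 1)

c_fn, c_fp = 5.0, 1.0                        # missing a recidivist costs 5x

def total_cost(t):
    pred = p >= t
    fn = np.sum(y & ~pred)                   # recidivists classified as safe
    fp = np.sum(~y & pred)                   # non-recidivists flagged
    return c_fn * fn + c_fp * fp

ts = np.linspace(0.05, 0.95, 91)
best_t = ts[np.argmin([total_cost(t) for t in ts])]
print(round(float(best_t), 2))
```

Because false negatives carry the heavier cost, the optimal cutoff ends up below the usual 0.5, flagging more borderline cases, which is exactly the trade-off the asymmetric-cost model is designed to make.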

Predictive of Osteoporosis by Tree-based Machine Learning Model in Post-menopause Woman (폐경 여성에서 트리기반 머신러닝 모델로부터 골다공증 예측)

  • Lee, In-Ja;Lee, Junho
    • Journal of radiological science and technology
    • /
    • v.43 no.6
    • /
    • pp.495-502
    • /
    • 2020
  • In this study, the prevalence of osteoporosis was predicted from 10 independent variables, such as age, weight, and alcohol consumption, using four tree-based machine learning models, and the performance of each model was compared. The best-performing model was then re-evaluated while reducing the set of independent variables, and the area under the curve (AUC) was used to evaluate model performance. The AUC for each model was 0.663 for the decision tree, 0.704 for the random forest, 0.702 for GBM, and 0.710 for XGBoost, and variable importance was highest for age, weight, and family history, in that order. Using XGBoost, the highest-performing model, with a reduced set of 7 independent variables yielded the best AUC of 0.750. These results suggest that this method can be applied to predict not only osteoporosis but also various other diseases, and the study is expected to serve as basic data for big data research in the health care field.
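The AUC metric used for the model comparison can be computed with the rank-based (Mann-Whitney) formulation; a self-contained sketch on toy scores:

```python
import numpy as np

def auc(y_true, scores):
    """AUC as the fraction of positive/negative pairs ranked correctly
    (rank-based Mann-Whitney formulation; assumes no tied scores)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = y_true == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return float((ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg))

# Toy osteoporosis labels and model scores, not the study's data.
y = np.array([1, 1, 1, 0, 0, 0])
s = np.array([0.9, 0.8, 0.4, 0.5, 0.3, 0.2])
print(auc(y, s))  # 8 of 9 positive/negative pairs ranked correctly
```

An AUC of 0.5 is chance-level ranking and 1.0 is perfect separation, which puts the study's best value of 0.750 in context.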