• Title/Summary/Keyword: XGBoost

Research of PPI prediction model based on POST-TAVR ECG (POST-TAVR ECG 기반의 PPI 예측 모델 연구)

  • InSeo Song;SeMo Yang;KangYoon Lee
    • Journal of Internet Computing and Services / v.25 no.2 / pp.29-38 / 2024
  • After Transcatheter Aortic Valve Replacement (TAVR), comprehensive management of complications, including the need for Permanent Pacemaker Implantation (PPI), is crucial, increasing the demand for accurate prediction models. Departing from traditional image-based methods, this study developed an optimal PPI prediction model based on ECG data using the XGBoost algorithm. Using ECG features such as DeltaPR and DeltaQRS as key indicators, the model effectively identifies the correlation between conduction disorders and the need for PPI, achieving superior performance with an AUC of 0.91. Validated on data from two hospitals, it demonstrated a high similarity rate of 95.28% when predicting PPI from ECG characteristics. This confirms the model's applicability across diverse hospital data and marks a significant advance toward reliable, practical PPI prediction models with reduced dependence on human intervention and costly medical imaging.
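
As a companion to this abstract, here is a minimal sketch of the kind of pipeline it describes: an XGBoost binary classifier trained on ECG-derived features and scored by AUC. The synthetic data and the two feature columns (standing in for DeltaPR and DeltaQRS) are illustrative assumptions, not the study's dataset or code.

```python
# Hypothetical sketch: XGBoost PPI classifier on two ECG-derived features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))                 # columns: DeltaPR, DeltaQRS (illustrative)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)
model.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))  # study reports 0.91
```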

Boot storm Reduction through Artificial Intelligence Driven System in Virtual Desktop Infrastructure

  • Heejin Lee;Taeyoung Kim
    • Journal of the Korea Society of Computer and Information / v.29 no.7 / pp.1-9 / 2024
  • Virtual Desktop Infrastructure (VDI) is an important technology for improving an organization's work productivity and increasing IT infrastructure efficiency. Boot storms, which occur when many virtual desktops boot simultaneously, cause poor performance and increased latency. In this paper, we propose BRAIDS, a boot storm mitigation scheme consisting of an AI-based VDI usage prediction system and a virtual machine boot scheduler, to alleviate boot storms and improve service stability. The prediction system uses the XGBoost algorithm to forecast future VDI usage from historical usage data. The scheduler takes the predicted usage as input, defines a boot storm in terms of the hardware specifications of the VDI server and virtual machines, and produces a schedule that boots virtual machines sequentially. A case study confirmed that the VDI usage prediction model achieves high prediction accuracy, and that the boot scheduler can alleviate boot storms in a virtual desktop environment and make efficient use of IT infrastructure.
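
The forecasting half of a system like BRAIDS can be sketched as time-series regression with lag features; the scheduler then consumes the forecast. The hourly cycle, lag choices, and synthetic data below are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch: XGBoost regression on lagged VDI session counts.
import numpy as np
import pandas as pd
from xgboost import XGBRegressor

rng = np.random.default_rng(1)
hours = np.arange(24 * 60)                               # 60 days of hourly data
sessions = 100 + 40 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 5, hours.size)

df = pd.DataFrame({"sessions": sessions})
for lag in (1, 2, 24):                                   # previous hours + same hour yesterday
    df[f"lag_{lag}"] = df["sessions"].shift(lag)
df = df.dropna()

X, y = df.drop(columns="sessions"), df["sessions"]
model = XGBRegressor(n_estimators=300, max_depth=4).fit(X[:-24], y[:-24])
forecast = model.predict(X[-24:])                        # next-day usage for the boot scheduler
print(forecast.round(1))
```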

Horse race rank prediction using learning-to-rank approaches (Learning-to-rank 기법을 활용한 서울 경마경기 순위 예측)

  • Junhyoung Chung;Donguk Shin;Seyong Hwang;Gunwoong Park
    • The Korean Journal of Applied Statistics / v.37 no.2 / pp.239-253 / 2024
  • This research applies both point-wise and pair-wise learning strategies within the learning-to-rank (LTR) framework to predict horse race rankings in Seoul. For point-wise learning, we employ a linear model and a random forest; for pair-wise learning, we utilize RankNet and LambdaMART (XGBoost Ranker, LightGBM Ranker, and CatBoost Ranker). To enhance predictions, race records are standardized by race distance, and we integrate various datasets, including race information, jockey information, horse training records, and trainer information. Our results empirically demonstrate that pair-wise learning approaches, which can reflect the order information between items, generally outperform point-wise learning approaches, with CatBoost Ranker as the top performer. Through Shapley value analysis, we identified the important variables for CatBoost Ranker: the horse's performance, its previous race records, the count of its starting trainings, the total number of starting trainings, and its instances of disease diagnoses.
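
For readers unfamiliar with the pair-wise setup, here is a minimal sketch using XGBoost Ranker, one of the LambdaMART-style baselines named above: each race is a query group and the model learns relative order within it. The group sizes, features, and relevance labels are illustrative assumptions, not the paper's data.

```python
# Hypothetical sketch: pair-wise learning-to-rank with XGBRanker.
import numpy as np
from xgboost import XGBRanker

rng = np.random.default_rng(0)
n_races, horses_per_race = 50, 10
X = rng.normal(size=(n_races * horses_per_race, 5))      # e.g., standardized past records
y = np.tile(np.arange(horses_per_race)[::-1], n_races)   # higher label = better finish
group = np.full(n_races, horses_per_race)                # one query group per race

ranker = XGBRanker(objective="rank:pairwise", n_estimators=200)
ranker.fit(X, y, group=group)
scores = ranker.predict(X[:horses_per_race])             # score horses in the first race
print(np.argsort(-scores))                               # predicted finishing order
```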

Improved prediction of soil liquefaction susceptibility using ensemble learning algorithms

  • Satyam Tiwari;Sarat K. Das;Madhumita Mohanty;Prakhar
    • Geomechanics and Engineering / v.37 no.5 / pp.475-498 / 2024
  • Predicting the susceptibility of soil to liquefaction from a limited set of parameters is challenging, particularly with highly unbalanced databases. This study evaluates ensemble learning classification algorithms on highly unbalanced databases of results from in-situ tests: the standard penetration test (SPT), shear wave velocity (Vs) test, and cone penetration test (CPT). The input parameters consist of earthquake intensity parameters, strong ground motion parameters, and in-situ soil testing parameters, with the liquefaction index serving as the binary output. After a rigorous comparison with the existing literature, extreme gradient boosting (XGBoost), bagging, and random forest (RF) emerge as the most efficient models for liquefaction classification across the datasets. For SPT- and Vs-based models, XGBoost performs best, followed by the light gradient boosting machine (LightGBM) and bagging, while for CPT-based models bagging ranks highest, followed by gradient boosting and random forest; CPT-based models show lower Gmean(error), making them preferable for soil liquefaction susceptibility prediction. Key parameters influencing model performance are the internal friction angle of soil (ϕ) and the percentage of fines smaller than 75 µm (F75) for SPT and Vs data, and the normalized average cone tip resistance (qc) and peak horizontal ground acceleration (amax) for CPT data. It was also observed that adding Vs measurements to SPT data improved prediction efficiency compared with SPT data alone. Furthermore, to enhance usability, a graphical user interface (GUI) for classification based on the provided input parameters was proposed.
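
Because the databases are highly unbalanced, a sketch like the following illustrates one common way to set up such a classifier: XGBoost with scale_pos_weight derived from the class ratio, evaluated with the geometric mean of sensitivity and specificity (the Gmean the abstract refers to). The synthetic data and parameter choices are assumptions, not the paper's.

```python
# Hypothetical sketch: imbalance-aware XGBoost with a G-mean metric.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=2000, weights=[0.9], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

w = (y_tr == 0).sum() / (y_tr == 1).sum()                # up-weight the rare (liquefied) class
clf = XGBClassifier(scale_pos_weight=w, n_estimators=300).fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
gmean = np.sqrt((tp / (tp + fn)) * (tn / (tn + fp)))     # G-mean of sensitivity and specificity
print(f"G-mean: {gmean:.3f}")
```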

Prediction of Disk Cutter Wear Considering Ground Conditions and TBM Operation Parameters (지반 조건과 TBM 운영 파라미터를 고려한 디스크 커터 마모 예측)

  • Yunseong Kang;Tae Young Ko
    • Tunnel and Underground Space / v.34 no.2 / pp.143-153 / 2024
  • The Tunnel Boring Machine (TBM) method is a tunnel excavation method that produces lower levels of noise and vibration than drilling and blasting and offers higher stability, and it is increasingly applied to tunnel projects worldwide. The disc cutter is an excavation tool mounted on the cutterhead of a TBM that constantly interacts with the ground at the tunnel face, inevitably leading to wear. This study quantitatively predicted disc cutter wear using geological conditions, TBM operational parameters, and machine learning algorithms. Among the input variables, Uniaxial Compressive Strength (UCS) measurements are considerably sparser than machine and wear data, so UCS was first estimated over the entire section from TBM machine data, and the Coefficient of Wearing rate (CW) was then predicted using the completed dataset. Comparing the CW prediction models, the XGBoost model showed the highest performance, and SHapley Additive exPlanations (SHAP) analysis was conducted to interpret the complex prediction model.
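
The interpretation step mentioned at the end can be sketched as follows: SHAP values computed for an XGBoost regressor predicting CW. The feature names (standing in for the estimated UCS and TBM operating parameters) and the synthetic data are assumptions, not the study's variables.

```python
# Hypothetical sketch: SHAP attribution for an XGBoost CW-prediction model.
import numpy as np
import pandas as pd
import shap
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(300, 4)),
                 columns=["UCS_est", "thrust", "rpm", "penetration"])
y = 0.6 * X["UCS_est"] + 0.3 * X["thrust"] + rng.normal(0, 0.1, 300)

model = XGBRegressor(n_estimators=200).fit(X, y)
shap_values = shap.TreeExplainer(model).shap_values(X)
# Mean |SHAP| per feature: a global importance ranking for the wear model.
print(pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns)
        .sort_values(ascending=False))
```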

Methodology for Variable Optimization in Injection Molding Process (사출 성형 공정에서의 변수 최적화 방법론)

  • Jung, Young Jin;Kang, Tae Ho;Park, Jeong In;Cho, Joong Yeon;Hong, Ji Soo;Kang, Sung Woo
    • Journal of Korean Society for Quality Management / v.52 no.1 / pp.43-56 / 2024
  • Purpose: The injection molding process, crucial for plastic shaping, encounters difficulties in sustaining product quality when injection machines are replaced. Variations in machine types and outputs between production lines or factories increase the risk of quality deterioration. In response, this study aims to develop a system that optimally adjusts process conditions when the injection machine linked to a mold is replaced. Methods: Using a dataset of 12 injection process variables and 52 corresponding sensor variables, predictive models are built with Decision Tree, Random Forest, and XGBoost and evaluated on an 80%/20% train/test split. The dependent variable, classified into five characteristics based on temperature and pressure, guides the prediction model. Bayesian optimization, applied to the selected model, determines optimal values for the process variables when injection machines are replaced, and the iterative convergence of the sensor predictions to the target range is confirmed visually. Experimental results validate the proposed approach. Results: Post-experiment analysis indicates the superiority of the XGBoost model across all five characteristics, achieving a combined performance of 0.81 and a Mean Absolute Error (MAE) of 0.77. The study introduces a Bayesian-optimization-based method for setting initial conditions in the injection process during machine replacement; this streamlined approach reduces both time and costs, enhancing process efficiency. Conclusion: This research contributes practical insights to the optimization literature, offering guidance for industries seeking streamlined, cost-effective methods for machine replacement in injection molding.
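
The Bayesian optimization step can be sketched as searching process variables so that a trained surrogate model's sensor prediction hits a target. The toy response surface, the two variables and their bounds, and the use of scikit-optimize's gp_minimize are all illustrative assumptions, not the paper's setup.

```python
# Hypothetical sketch: Bayesian optimization over an XGBoost surrogate.
import numpy as np
from skopt import gp_minimize
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.uniform([180, 50], [240, 120], size=(200, 2))    # temp (deg C), pressure (bar)
y = 0.01 * (X[:, 0] - 210) ** 2 + 0.002 * (X[:, 1] - 80) ** 2   # toy sensor response
model = XGBRegressor(n_estimators=200).fit(X, y)

TARGET = 0.0                                             # desired sensor value (illustrative)

def objective(params):
    # Distance of the surrogate's sensor prediction from the target.
    return float(abs(model.predict(np.array([params]))[0] - TARGET))

result = gp_minimize(objective, [(180.0, 240.0), (50.0, 120.0)],
                     n_calls=30, random_state=0)
print("suggested process settings:", result.x)
```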

Machine Learning Prediction for the Recurrence After Electrical Cardioversion of Patients With Persistent Atrial Fibrillation

  • Soonil Kwon;Eunjung Lee;Hojin Ju;Hyo-Jeong Ahn;So-Ryoung Lee;Eue-Keun Choi;Jangwon Suh;Seil Oh;Wonjong Rhee
    • Korean Circulation Journal / v.53 no.10 / pp.677-689 / 2023
  • Background and Objectives: There is limited evidence regarding machine learning prediction of the recurrence of atrial fibrillation (AF) after electrical cardioversion (ECV). This study aimed to predict AF recurrence after ECV using machine learning on clinical features and electrocardiograms (ECGs) of patients with persistent AF. Methods: We analyzed patients who underwent successful ECV for persistent AF. Machine learning was designed to identify patients with 1-month recurrence. Individual 12-lead ECGs were collected before and after ECV, and various clinical features were collected and used to train an extreme gradient boosting (XGBoost)-based model. Ten-fold cross-validation was used to evaluate the model's performance, which was compared to the C-statistics of the selected clinical features. Results: Among 718 patients (mean age 63.5±9.3 years, 78.8% men), AF recurred in 435 (60.6%) patients within 1 month. With the XGBoost-based model, the areas under the receiver operating characteristic curves (AUROCs) were 0.57, 0.60, and 0.63 when the model was trained on clinical features, ECGs, and both (the final model), respectively. For the final model, the sensitivity, specificity, and F1-score were 84.7%, 28.2%, and 0.73, respectively. Although AF duration showed the best predictive performance (AUROC, 0.58) among the clinical features, it was significantly lower than that of the final machine learning model (p<0.001). Additional training on extended monitoring data of 15-minute single-lead ECG and photoplethysmography in available patients (n=261) did not significantly improve the model's performance. Conclusions: Machine learning showed modest performance in predicting AF recurrence after ECV in persistent AF patients, warranting further validation studies.
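
The evaluation protocol (ten-fold cross-validated AUROC for an XGBoost classifier on tabular features) can be sketched as below; the synthetic cohort merely mimics the reported size and recurrence rate and is not the study's data.

```python
# Hypothetical sketch: 10-fold cross-validated AUROC for XGBoost.
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score
from xgboost import XGBClassifier

# ~718 patients, ~60% positives, as reported in the abstract.
X, y = make_classification(n_samples=718, weights=[0.4], random_state=0)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
aucs = cross_val_score(XGBClassifier(n_estimators=200), X, y, cv=cv, scoring="roc_auc")
print(f"AUROC: {aucs.mean():.2f} +/- {aucs.std():.2f}")
```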

A fundamental study on the automation of tunnel blasting design using a machine learning model (머신러닝을 이용한 터널발파설계 자동화를 위한 기초연구)

  • Kim, Yangkyun;Lee, Je-Kyum;Lee, Sean Seungwon
    • Journal of Korean Tunnelling and Underground Space Association / v.24 no.5 / pp.431-449 / 2022
  • As many tunnels have been constructed, extensive experience and techniques have accumulated for both tunnel design and tunnel construction. Hence, for many routine tunnel design tasks, it is often sufficient to modify or supplement previous similar designs, unless a tunnel has a unique structure or unusual geological conditions. For tunnel blast design in particular, referring to previous similar cases is reasonable, because the blast design produced at the design stage is preliminary: additional blast design is generally carried out through test blasts before tunnel excavation begins. Meanwhile, entering the Industry 4.0 era, artificial intelligence (AI), whose use is surging across the whole industrial sector, is being broadly applied to tunnelling and blasting. For drill-and-blast tunnels, AI has mainly been applied to blast vibration estimation and rock mass classification; there are few cases where it has been applied to blast pattern design. This study therefore attempts to automate tunnel blast design by means of machine learning, a branch of AI. Blast design data were collected from 25 tunnel design reports for training and 2 additional reports for testing. Four design parameters (rock mass class, road type, and the cross-sectional areas of the upper and bench sections) served as inputs, and 16 design elements (blast cut type, specific charge, number of drill holes, and the spacing and burden for each blast hole group, etc.) served as outputs. Three machine learning models, XGBoost, ANN, and SVM, were tested on these data; XGBoost was chosen as the best model, and its results show a generally similar trend to an actual design when assumed design parameters are input. The results are not yet sufficient to perform a whole blast design, but additional studies are planned to bring the approach to practical use by collecting more blast design data and refining the machine learning process.
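
The mapping the study learns, four design parameters to sixteen design elements, is a multi-output prediction problem; one hedged way to sketch it is a MultiOutputRegressor wrapping XGBoost, shown below with synthetic data (the real outputs include categorical elements such as blast cut type, which would need a classifier instead).

```python
# Hypothetical sketch: 4 design parameters -> 16 blast-design elements.
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(25, 4))     # 25 design reports; rock class, road type, two areas
Y = rng.normal(size=(25, 16))    # 16 elements: specific charge, hole count, spacing, ...

model = MultiOutputRegressor(XGBRegressor(n_estimators=100)).fit(X, Y)
design = model.predict(X[:1])    # predicted design-element vector for one tunnel
print(design.shape)              # (1, 16)
```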

Introduction to convolutional neural network using Keras; an understanding from a statistician

  • Lee, Hagyeong;Song, Jongwoo
    • Communications for Statistical Applications and Methods / v.26 no.6 / pp.591-610 / 2019
  • Deep learning is a family of machine learning methods that finds features in large datasets using non-linear transformations, and it is now commonly used for supervised learning in many fields. In particular, the Convolutional Neural Network (CNN) has been the leading technique for image classification since 2012. For users considering deep learning models for real-world applications, Keras is a popular neural network API written in Python that can also be used from R. We examine the parameter estimation procedures of deep neural networks and the structures of CNN models, from the basics to advanced techniques. We also identify crucial steps in CNN design that can improve image classification performance on the CIFAR10 dataset using Keras, finding that several stacks of convolutional layers and batch normalization improve prediction performance. Finally, we compare image classification performance with other machine learning methods, including K-Nearest Neighbors (K-NN), Random Forest, and XGBoost, on both the MNIST and CIFAR10 datasets.
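
A compact Keras sketch of the architecture pattern the authors found helpful, stacked convolutional layers with batch normalization, is shown below; the exact depth, filter counts, and hyperparameters are illustrative rather than the paper's final model.

```python
# Hypothetical sketch: small CNN with conv stacks + batch normalization.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(32, 32, 3)),                     # CIFAR10 images
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.BatchNormalization(),
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.BatchNormalization(),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, padding="same", activation="relu"),
    layers.BatchNormalization(),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),              # 10 CIFAR10 classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```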

A Study on Variant Malware Detection Techniques Using Static and Dynamic Features

  • Kang, Jinsu;Won, Yoojae
    • Journal of Information Processing Systems / v.16 no.4 / pp.882-895 / 2020
  • The amount of malware increases exponentially every day and poses a threat to networks and operating systems. Most new malware is a variant of existing malware, and numerous variants are difficult to handle because they bypass existing signature-based detection methods. Research on automated methods of detecting and processing variant malware has therefore been ongoing. This paper proposes a method of extracting feature data from files and detecting malware using machine learning. Feature data were extracted from 7,000 malware and 3,000 benign files using static and dynamic malware analysis tools. Malware classification models were constructed using a multi-layer DNN, XGBoost, and Random Forest, and their performance was analyzed. The proposed method achieved up to 96.3% accuracy.
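
The model comparison described above can be sketched as training XGBoost and Random Forest on the same feature vectors (the DNN branch is omitted here); the synthetic data stands in for the static/dynamic features extracted from the 7,000 malware and 3,000 benign files.

```python
# Hypothetical sketch: comparing classifiers on shared malware features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=10000, n_features=50,
                           weights=[0.3], random_state=0)   # ~30% benign, ~70% malware
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for name, clf in [("XGBoost", XGBClassifier(n_estimators=300)),
                  ("RandomForest", RandomForestClassifier(n_estimators=300))]:
    clf.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, clf.predict(X_te)))
```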