• Title/Summary/Keyword: optimizing decision tree

Feature Selection and Hyper-Parameter Tuning for Optimizing Decision Tree Algorithm on Heart Disease Classification

  • Tsehay Admassu Assegie;Sushma S.J;Bhavya B.G;Padmashree S
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.2
    • /
    • pp.150-154
    • /
    • 2024
  • In recent years, there has been extensive research on applying machine learning to automation and decision support for medical experts during disease detection. However, the performance of machine learning still needs improvement so that models produce results that are more accurate and reliable for disease detection. Selecting the hyper-parameters that yield the maximum possible classification accuracy on a medical dataset is the most challenging task in developing decision support systems based on machine learning algorithms for medical dataset classification. Moreover, selecting the features that best characterize a disease is another challenge in developing a machine learning model with better classification accuracy. In this study, we propose an optimized decision tree model for heart disease classification using a heart disease dataset collected from the Kaggle data repository. The proposed model is evaluated, and the experimental tests reveal that the performance of the decision tree improves when an optimal number of features is used for training. Overall, the accuracy of the proposed decision tree model is 98.2% for heart disease classification.
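
A minimal sketch of the feature-selection-plus-tuning workflow this abstract describes, using scikit-learn. The file name, column names, and search grid are illustrative assumptions, not the authors' exact setup.

```python
# Hypothetical sketch: feature selection + hyper-parameter tuning for a
# decision tree on a heart disease dataset (file and column names assumed).
import pandas as pd
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("heart.csv")                  # assumed Kaggle file name
X, y = df.drop(columns="target"), df["target"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

pipe = Pipeline([
    ("select", SelectKBest(score_func=f_classif)),  # keep the k best features
    ("tree", DecisionTreeClassifier(random_state=42)),
])
param_grid = {
    "select__k": [5, 8, 10, 13],                # optimal number of features
    "tree__max_depth": [3, 5, 7, None],         # hyper-parameters to tune
    "tree__min_samples_leaf": [1, 5, 10],
    "tree__criterion": ["gini", "entropy"],
}
search = GridSearchCV(pipe, param_grid, cv=5, scoring="accuracy")
search.fit(X_train, y_train)
print(search.best_params_, search.score(X_test, y_test))
```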

A Method for Optimizing the Structure of Neural Networks Based on Information Entropy

  • Yuan Hongchun;Xiong Fanlnu;Kei, Bai-Shi
    • Proceedings of the Korea Intelligent Information System Society Conference
    • /
    • 2001.01a
    • /
    • pp.30-33
    • /
    • 2001
  • The number of hidden neurons in a feed-forward neural network is generally decided on the basis of experience. This practice often results in too few or too many hidden neurons, causing either a shortage of capacity for storing information or over-learning. This research proposes a new method for optimizing the number of hidden neurons based on information entropy. First, an initial neural network with enough hidden neurons is trained on a set of training samples. Second, the activation values of the hidden neurons are calculated by feeding in the training samples that the trained network identifies correctly. Third, candidate partitions are tried and their information gain is calculated, from which a decision tree that correctly divides the whole sample space is constructed. Finally, the important and relevant hidden neurons included in the tree are found by searching the whole tree, and the remaining redundant hidden neurons are deleted. Thus, the number of hidden neurons is decided. The proposed method is applied to building a neural network with the best number of hidden units for tea quality evaluation, and the result shows that the method is effective.
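
A compact sketch of the pruning idea under stated assumptions: train an over-sized MLP, fit an entropy-criterion decision tree on its hidden activations, and keep only the neurons the tree actually splits on. The dataset and layer size are placeholders, not the paper's tea-quality data.

```python
# Hypothetical sketch: entropy-based selection of informative hidden neurons.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Step 1: train an initial network with deliberately many hidden neurons.
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
mlp.fit(X, y)

# Step 2: hidden activations for the samples the network classifies correctly.
correct = mlp.predict(X) == y
hidden = np.maximum(X[correct] @ mlp.coefs_[0] + mlp.intercepts_[0], 0)  # ReLU

# Step 3: a decision tree (entropy criterion = information gain) partitions
# the sample space using hidden activations as candidate split variables.
tree = DecisionTreeClassifier(criterion="entropy", random_state=0)
tree.fit(hidden, y[correct])

# Step 4: hidden neurons that never appear in the tree are redundant.
used = np.unique(tree.tree_.feature[tree.tree_.feature >= 0])
print(f"{used.size} of 32 hidden neurons are informative:", used)
```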


A Survey of Applications of Artificial Intelligence Algorithms in Eco-environmental Modelling

  • Kim, Kang-Suk;Park, Joon-Hong
    • Environmental Engineering Research
    • /
    • v.14 no.2
    • /
    • pp.102-110
    • /
    • 2009
  • Application of artificial intelligence (AI) approaches in eco-environmental modeling has gradually increased over the last decade. A comprehensive understanding and evaluation of the applicability of these approaches to eco-environmental modeling are needed. In this study, we reviewed previous studies that used AI techniques in eco-environmental modeling. Decision Tree (DT) and Artificial Neural Network (ANN) were found to be the major AI algorithms preferred by researchers in ecological and environmental modeling. When the effect of training data size on model prediction accuracy was explored using data from the previous studies, prediction accuracy and training data size showed a nonlinear correlation that was best described by a hyperbolic saturation function among the tested nonlinear functions, which included power and logarithmic functions. The hyperbolic saturation equations are proposed as a guideline for optimizing the size of the training data set, which is critically important in designing the field experiments required for training AI-based eco-environmental models.
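
A small sketch of how such a hyperbolic saturation curve can be fitted to accuracy-versus-training-size points. The functional form acc(n) = a·n/(b+n) and the data points below are assumptions for illustration, not the paper's fitted values.

```python
# Hypothetical sketch: fit a hyperbolic saturation function
# acc(n) = a * n / (b + n) to accuracy-vs-training-size data
# (the data points below are made up for illustration).
import numpy as np
from scipy.optimize import curve_fit

def hyperbolic_saturation(n, a, b):
    return a * n / (b + n)  # 'a' is the asymptotic accuracy

n_train = np.array([20, 50, 100, 200, 400, 800])       # training set sizes
accuracy = np.array([0.55, 0.68, 0.76, 0.82, 0.85, 0.87])

(a, b), _ = curve_fit(hyperbolic_saturation, n_train, accuracy, p0=[0.9, 50])
print(f"asymptotic accuracy ~ {a:.2f}; half-saturation size ~ {b:.0f}")

# Size needed to reach 95% of the asymptote: solving a*n/(b+n) = 0.95a
# gives n = 19 * b.
print("suggested training size:", round(19 * b))
```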

A Study on Selection of Block Stockyard Applying Decision Tree Learning Algorithm (의사결정트리 학습을 적용한 조선소 블록 적치 위치 선정에 관한 연구)

  • Nam, Byeong-Wook;Lee, Kyung-Ho;Lee, Jae-Joon;Mun, Seung-Hwan
    • Journal of the Society of Naval Architects of Korea
    • /
    • v.54 no.5
    • /
    • pp.421-429
    • /
    • 2017
  • It is very important to manage the positions of blocks in a shipyard, both for blocks whose work is complete and for blocks that must be moved for the next process. The moving distance of a block depends on the position of the block stockyard; as the travel distance increases, the number of trips and the travel distance of the transporter increase, which incurs considerable operation cost. Currently, block positions in the shipyard are selected based on the know-how of the transporter workers, who choose where a block is to be placed according to its production schedule and the current situation in the stockyard. This know-how is the result of long experience optimizing block positions in the shipyard. In this study, we used the data accumulated from operating the shipyard's stockyards and tried to select block locations by learning from it. A decision tree learning algorithm was used, and a prototype was developed with it. Finally, we demonstrate the feasibility of selecting a block stockyard with this algorithm.
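
A minimal sketch, assuming a historical placement log: a decision tree is trained to reproduce the workers' stockyard choices. The feature names (block size, weight, next process, days until the next process) are hypothetical; the paper's actual attributes may differ.

```python
# Hypothetical sketch: learn a stockyard choice from historical placements.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

df = pd.read_csv("block_placements.csv")       # assumed historical log
features = ["block_length", "block_weight", "next_process", "days_to_next"]
X = pd.get_dummies(df[features])               # one-hot encode categoricals
y = df["stockyard_id"]                         # the yard the worker chose

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)
tree = DecisionTreeClassifier(max_depth=6, random_state=42)
tree.fit(X_train, y_train)

print("accuracy vs. worker choices:", tree.score(X_test, y_test))
print(export_text(tree, feature_names=list(X.columns))[:500])  # readable rules
```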

Optimizing shallow foundation design: A machine learning approach for bearing capacity estimation over cavities

  • Kumar Shubham;Subhadeep Metya;Abdhesh Kumar Sinha
    • Geomechanics and Engineering
    • /
    • v.37 no.6
    • /
    • pp.629-641
    • /
    • 2024
  • The presence of excavations or cavities beneath the foundations of a building can significantly affect their stability and cause extensive damage. Traditional methods for calculating the bearing capacity and subsidence of foundations over cavities can be complex and time-consuming, particularly under varying conditions. In such situations, machine learning (ML) and deep learning (DL) techniques provide effective alternatives. This study concentrates on constructing a prediction model, based on the performance of ML and DL algorithms, that can be applied in real-world settings. The efficacy of eight algorithms, including Regression Analysis, k-Nearest Neighbor, Decision Tree, Random Forest, Multivariate Regression Spline, Artificial Neural Network, and Deep Neural Network, was evaluated. Using a Python-assisted automation technique integrated with the PLAXIS 2D platform, a dataset containing 272 cases with eight input parameters and one target variable was generated. In general, the DL model performed better than the ML models, and all models except the regression models attained outstanding results with an R2 greater than 0.90. These models can also be used as surrogate models in reliability analysis to evaluate failure risks and probabilities.
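
A sketch of the model-comparison step under stated assumptions: several regressors are fitted to a bearing-capacity table and compared by R². The file and column names are placeholders; the paper's dataset came from PLAXIS 2D automation and is not reproduced here.

```python
# Hypothetical sketch: compare regressors on a bearing-capacity dataset.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeRegressor

df = pd.read_csv("bearing_capacity_cases.csv")   # 272 cases, 8 inputs
X, y = df.drop(columns="bearing_capacity"), df["bearing_capacity"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

models = {
    "kNN": KNeighborsRegressor(n_neighbors=5),
    "Decision Tree": DecisionTreeRegressor(random_state=42),
    "Random Forest": RandomForestRegressor(n_estimators=200, random_state=42),
    "ANN": MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000,
                        random_state=42),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: R^2 = {model.score(X_test, y_test):.3f}")
```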

Machine learning application to seismic site classification prediction model using Horizontal-to-Vertical Spectral Ratio (HVSR) of strong-ground motions

  • Francis G. Phi;Bumsu Cho;Jungeun Kim;Hyungik Cho;Yun Wook Choo;Dookie Kim;Inhi Kim
    • Geomechanics and Engineering
    • /
    • v.37 no.6
    • /
    • pp.539-554
    • /
    • 2024
  • This study explores the development of a prediction model for seismic site classification by integrating machine learning techniques with horizontal-to-vertical spectral ratio (HVSR) methodologies. To improve model accuracy, the research employs outlier detection methods and the synthetic minority over-sampling technique (SMOTE) for data balancing, and evaluates seven machine learning models on seismic data from KiK-net. Notably, the light gradient boosting method (LGBM), gradient boosting, and decision tree models exhibit improved performance when coupled with SMOTE, while multiple linear regression (MLR) and support vector machine (SVM) models show reduced efficacy. Outlier detection techniques significantly enhance accuracy, particularly for LGBM, gradient boosting, and voting boosting. The ensemble of LGBM with isolation forest and SMOTE achieves the highest accuracy of 0.91, while LGBM with the local outlier factor yields the highest F1-score of 0.79. Consistently outperforming the other models, LGBM proves most efficient for seismic site classification when supported by appropriate preprocessing. These findings show the significance of outlier detection and data balancing for precise seismic soil classification, highlighting the potential of machine learning for optimizing site classification accuracy.
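
A minimal sketch of the best-performing pipeline the abstract names: isolation-forest outlier removal, SMOTE class balancing, then an LGBM classifier. The feature table and label names are assumptions; the KiK-net data is not included.

```python
# Hypothetical sketch: isolation forest + SMOTE + LGBM for site classes.
import pandas as pd
from imblearn.over_sampling import SMOTE
from lightgbm import LGBMClassifier
from sklearn.ensemble import IsolationForest
from sklearn.model_selection import train_test_split

df = pd.read_csv("hvsr_features.csv")            # assumed HVSR feature table
X, y = df.drop(columns="site_class"), df["site_class"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

# 1) drop outliers from the training set only
mask = IsolationForest(random_state=42).fit_predict(X_train) == 1
X_train, y_train = X_train[mask], y_train[mask]

# 2) oversample minority site classes
X_bal, y_bal = SMOTE(random_state=42).fit_resample(X_train, y_train)

# 3) train and evaluate LGBM
clf = LGBMClassifier(random_state=42).fit(X_bal, y_bal)
print("accuracy:", clf.score(X_test, y_test))
```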

Decision based uncertainty model to predict rockburst in underground engineering structures using gradient boosting algorithms

  • Kidega, Richard;Ondiaka, Mary Nelima;Maina, Duncan;Jonah, Kiptanui Arap Too;Kamran, Muhammad
    • Geomechanics and Engineering
    • /
    • v.30 no.3
    • /
    • pp.259-272
    • /
    • 2022
  • Rockburst is a dynamic, multivariate, and non-linear phenomenon that occurs in underground mining and civil engineering structures. Predicting rockburst is challenging because conventional models are not standardized; machine learning techniques can therefore improve prediction accuracy. This study describes decision-based uncertainty models for predicting rockburst in underground engineering structures using gradient boosting algorithms (GBM). The model input variables were uniaxial compressive strength (UCS), uniaxial tensile strength (UTS), maximum tangential stress (MTS), excavation depth (D), stress ratio (SR), and brittleness coefficient (BC). Several models were trained using different combinations of the input variables and a 3-fold cross-validation resampling procedure. The hyperparameters, comprising the learning rate, number of boosting iterations, tree depth, and minimum number of observations, were tuned to obtain the optimum models. Model performance was tested using classification accuracy, Cohen's kappa coefficient (k), sensitivity, and specificity. By optimizing the model ROC metrics, the best-performing model attained classification accuracy, k, sensitivity, and specificity values of 98%, 93%, 1.00, and 0.957, respectively. The most and least influential input variables were MTS and BC, respectively. Partial dependence plots revealed the relationship between changes in the input variables and the model predictions. The findings show that GBM can be used to anticipate rockburst and to guide decisions about support requirements before mining development.
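
A sketch of the tuning setup the abstract outlines: a gradient boosting classifier over the six inputs with 3-fold cross-validation over the four named hyperparameters. The grid values and data file are assumptions, not the paper's exact settings.

```python
# Hypothetical sketch: GBM hyperparameter tuning with 3-fold CV.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

df = pd.read_csv("rockburst_cases.csv")          # assumed case database
X = df[["UCS", "UTS", "MTS", "D", "SR", "BC"]]
y = df["rockburst_class"]

param_grid = {
    "learning_rate": [0.01, 0.1, 0.3],       # shrinkage
    "n_estimators": [100, 300, 500],         # boosting iterations
    "max_depth": [2, 3, 5],                  # tree depth
    "min_samples_leaf": [1, 5, 10],          # minimum observations per leaf
}
search = GridSearchCV(GradientBoostingClassifier(random_state=42),
                      param_grid, cv=3, scoring="accuracy")
search.fit(X, y)
print(search.best_params_, search.best_score_)
```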

Anomaly Detection for User Action with Generative Adversarial Networks (적대적 생성 모델을 활용한 사용자 행위 이상 탐지 방법)

  • Choi, Nam woong;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.43-62
    • /
    • 2019
  • At one time, the anomaly detection field was dominated by methods that determined whether an abnormality existed based on statistics derived from specific data. This methodology was viable because data used to be low-dimensional, so classical statistical methods worked effectively. However, as data characteristics have become more complex in the era of big data, it has become more difficult to accurately analyze and predict the data generated across industry in the conventional way. Supervised learning algorithms such as SVM and decision trees were therefore adopted. However, a supervised model can only predict test data accurately when the class distribution of the test data matches that of the training data, and most data generated in industry has imbalanced classes, so the predicted results are not always valid when a supervised learning model is applied. To overcome these drawbacks, many studies now use unsupervised models that are not influenced by class distribution, such as autoencoders or generative adversarial networks. In this paper, we propose a method for detecting anomalies using generative adversarial networks. AnoGAN, introduced by Schlegl et al. (2017), is a model built from convolutional neural networks that performs anomaly detection on medical images. In contrast, anomaly detection for sequence data using generative adversarial networks has received far less research attention than image data. Li et al. (2018) proposed an LSTM-based model (LSTM being a type of recurrent neural network) to classify anomalies in numerical sequence data, but it has not been applied to categorical sequence data, nor did it use the feature matching method of Salimans et al. (2016). This suggests that many studies remain to be attempted on anomaly classification of sequence data with generative adversarial networks. To learn the sequence data, the generative adversarial network is built from LSTMs: the generator's 2-stacked LSTM consists of a 32-dimensional hidden-unit layer and a 64-dimensional hidden-unit layer, and the discriminator's LSTM uses a 64-dimensional hidden-unit layer. Existing work on anomaly detection for sequence data derives anomaly scores from the entropy of the probability of the actual data; in this paper, as mentioned earlier, anomaly scores are derived using the feature matching technique. In addition, the latent-variable optimization process is designed with an LSTM to improve model performance. The modified generative adversarial model was more precise than the autoencoder in all experiments and was approximately 7% higher in accuracy. In terms of robustness, the generative adversarial network also performed better than the autoencoder: because it can learn the data distribution from real categorical sequence data, it is unaffected by a single normal data pattern, unlike the autoencoder. In the robustness test, the accuracy of the autoencoder was 92% and that of the adversarial network was 96%; in terms of sensitivity, the autoencoder reached 40% and the adversarial network 51%.
Experiments were also conducted to show how much performance changes with differences in the latent-variable optimization structure; as a result, sensitivity improved by about 1%. These results offer a new perspective on optimizing latent variables, which had previously received relatively little attention.
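
A condensed PyTorch sketch of the architecture the abstract describes: an LSTM generator with stacked 32- and 64-dim layers, a 64-dim LSTM discriminator, and a feature-matching anomaly score. The vocabulary size, sequence length, and latent dimension are assumptions, and latent variables are optimized here by plain gradient descent, whereas the paper designs that step with an LSTM.

```python
# Hypothetical sketch: LSTM-based GAN for categorical sequences with a
# feature-matching anomaly score (dims follow the abstract; vocabulary
# size, sequence length, and latent dimension are assumptions).
import torch
import torch.nn as nn

VOCAB, SEQ_LEN, Z_DIM = 100, 20, 16

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm1 = nn.LSTM(Z_DIM, 32, batch_first=True)   # 32-dim layer
        self.lstm2 = nn.LSTM(32, 64, batch_first=True)      # 64-dim layer
        self.out = nn.Linear(64, VOCAB)
    def forward(self, z):                   # z: (batch, SEQ_LEN, Z_DIM)
        h, _ = self.lstm1(z)
        h, _ = self.lstm2(h)
        return torch.softmax(self.out(h), dim=-1)  # per-step token probs

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(VOCAB, 64, batch_first=True)    # 64-dim layer
        self.out = nn.Linear(64, 1)
    def features(self, x):                  # intermediate features used
        h, _ = self.lstm(x)                 # for feature matching
        return h[:, -1]                     # last hidden state
    def forward(self, x):
        return torch.sigmoid(self.out(self.features(x)))

def anomaly_score(disc, gen, x, steps=100, lr=0.01):
    """Feature-matching score: optimize z so G(z) matches x in the
    discriminator's feature space; the residual is the anomaly score."""
    z = torch.randn(x.size(0), SEQ_LEN, Z_DIM, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((disc.features(gen(z)) - disc.features(x)) ** 2).mean()
        loss.backward()
        opt.step()
    return loss.item()
```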

Development of Prediction Model for Nitrogen Oxides Emission Using Artificial Intelligence (인공지능 기반 질소산화물 배출량 예측을 위한 연구모형 개발)

  • Jo, Ha-Nui;Park, Jisu;Yun, Yongju
    • Korean Chemical Engineering Research
    • /
    • v.58 no.4
    • /
    • pp.588-595
    • /
    • 2020
  • Prediction and control of nitrogen oxides (NOx) emissions is of great interest in industry due to stricter environmental regulations. Herein, we propose an artificial intelligence (AI)-based framework for predicting NOx emissions. The framework includes pre-processing of data for training the neural networks and evaluation of the AI-based models. In this work, Long Short-Term Memory (LSTM), one of the recurrent neural network architectures, was adopted to reflect the time-series characteristics of NOx emissions. A decision tree was used to determine the LSTM time window prior to training the network. The neural network was trained with operational data from a heating furnace, and the optimal model was obtained by tuning its hyper-parameters. The LSTM model provided reliable predictions of NOx emissions for both training and test data, with an accuracy of 93% or more. The proposed AI-based framework will provide new opportunities for predicting the emission of various air pollutants with time-series characteristics.
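
A sketch of one plausible reading of the decision-tree step: fit a tree on lagged NOx values and take the largest lag the tree uses as the LSTM time window. The series source and the candidate lag range are assumptions; the paper's exact procedure may differ.

```python
# Hypothetical sketch: choose an LSTM time window from decision tree
# feature importances over lagged values.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

nox = np.loadtxt("nox_series.csv")            # assumed furnace NOx series
MAX_LAG = 30

# Build a lag matrix: row t holds [nox[t-1], ..., nox[t-MAX_LAG]].
X = np.stack([nox[MAX_LAG - k - 1 : len(nox) - k - 1] for k in range(MAX_LAG)],
             axis=1)
y = nox[MAX_LAG:]

tree = DecisionTreeRegressor(max_depth=5, random_state=0).fit(X, y)
important = np.nonzero(tree.feature_importances_ > 0)[0] + 1   # lags in use
window = important.max()
print("informative lags:", important, "-> LSTM time window:", window)
```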

Methodology for Variable Optimization in Injection Molding Process (사출 성형 공정에서의 변수 최적화 방법론)

  • Jung, Young Jin;Kang, Tae Ho;Park, Jeong In;Cho, Joong Yeon;Hong, Ji Soo;Kang, Sung Woo
    • Journal of Korean Society for Quality Management
    • /
    • v.52 no.1
    • /
    • pp.43-56
    • /
    • 2024
  • Purpose: The injection molding process, crucial for plastic shaping, encounters difficulties in sustaining product quality when injection machines are replaced. Variations in machine types and outputs between production lines or factories increase the risk of quality deterioration. In response, this study aims to develop a system that optimally adjusts process conditions when the injection machine linked to a mold is replaced. Methods: Utilizing a dataset of 12 injection process variables and 52 corresponding sensor variables, predictive models are built with Decision Tree, Random Forest, and XGBoost, evaluated on an 80% training and 20% test split. The dependent variable, classified into five characteristics based on temperature and pressure, guides the prediction model. Bayesian optimization, integrated with the selected model, determines optimal values for the process variables when injection machines are replaced; the iterative convergence of the sensor prediction values to the optimum range is visually confirmed against the target range, and experimental results validate the proposed approach. Results: Post-experiment analysis indicates the superiority of the XGBoost model across all five characteristics, achieving a combined performance of 0.81 and a mean absolute error (MAE) of 0.77. The study introduces a method for optimizing initial injection process conditions during machine replacement using Bayesian optimization; this streamlined approach reduces both time and cost, thereby enhancing process efficiency. Conclusion: This research contributes practical insights to the optimization literature, offering valuable guidance for industries seeking streamlined and cost-effective methods for machine replacement in injection molding.
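
A minimal sketch of the Methods step under stated assumptions: an XGBoost surrogate predicts one sensor characteristic from the 12 process variables, and Bayesian optimization (here via scikit-optimize's gp_minimize) searches for settings whose prediction hits a target value. The file, column names, bounds, and target are all hypothetical.

```python
# Hypothetical sketch: XGBoost surrogate + Bayesian optimization of
# injection process variables toward a target sensor value.
import numpy as np
import pandas as pd
from skopt import gp_minimize
from skopt.space import Real
from xgboost import XGBRegressor

df = pd.read_csv("molding_runs.csv")                # assumed historical runs
process_vars = [f"proc_{i}" for i in range(12)]     # 12 process variables
surrogate = XGBRegressor(n_estimators=300, random_state=42)
surrogate.fit(df[process_vars], df["sensor_pressure_peak"])  # one of five

TARGET = 85.0                                  # assumed target sensor value
space = [Real(df[v].min(), df[v].max(), name=v) for v in process_vars]

def objective(x):
    pred = surrogate.predict(np.asarray(x).reshape(1, -1))[0]
    return abs(pred - TARGET)                  # distance from the target

result = gp_minimize(objective, space, n_calls=50, random_state=42)
print("suggested settings:", dict(zip(process_vars, result.x)))
```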