• Title/Summary/Keyword: decision trees


Comparison among Algorithms for Decision Tree based on Sasang Constitutional Clinical Data (사상체질 임상자료 기반 의사결정나무 생성 알고리즘 비교)

  • Jin, Hee-Jeong; Lee, Su-Kyung; Lee, Si-Woo
    • Korean Journal of Oriental Medicine, v.17 no.2, pp.121-127, 2011
  • Objectives: In the clinical field, it is important to understand the factors that affect a certain disease or symptom, and many researchers therefore apply data mining methods to the clinical data they collect. One efficient data mining method is decision tree induction. Many researchers have sought the best split criterion for decision trees, but various split criteria coexist. Methods: In this paper, we applied several split criteria (Information Gain, Gini Index, Chi-Square) to Sasang constitutional clinical data and compared the resulting decision trees in order to find the optimal criterion. Results & Conclusion: By analyzing the trees produced with the different split measures, we found that BMI and body-measurement factors are important to Sasang constitution. The tree built with information gain had the highest accuracy; however, which criterion yields the most accurate tree changes with the data, so researchers must choose a proper split criterion by understanding the attributes of their data.
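
The three split criteria the abstract compares can be sketched in a few lines; the class counts below are hypothetical, not the paper's clinical data:

```python
from math import log2

def entropy(counts):
    """Shannon entropy of class counts (the basis of information gain)."""
    n = sum(counts)
    return -sum(c / n * log2(c / n) for c in counts if c)

def gini(counts):
    """Gini index: 1 minus the sum of squared class proportions."""
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

def weighted(children, impurity):
    """Size-weighted impurity of the child nodes of a split."""
    n = sum(sum(c) for c in children)
    return sum(sum(c) / n * impurity(c) for c in children)

def chi_square(children):
    """Chi-square statistic of the split's class-count table."""
    n = sum(sum(c) for c in children)
    col = [sum(ch[j] for ch in children) for j in range(len(children[0]))]
    return sum((c[j] - sum(c) * col[j] / n) ** 2 / (sum(c) * col[j] / n)
               for c in children for j in range(len(c)))

# Hypothetical split of 100 samples into two children (per-class counts).
parent, left, right = [35, 65], [30, 10], [5, 55]
info_gain = entropy(parent) - weighted([left, right], entropy)
gini_gain = gini(parent) - weighted([left, right], gini)
```

A larger information gain, Gini decrease, or chi-square value all indicate a stronger split; as the abstract notes, which criterion ranks splits best depends on the data.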

Splitting Decision Tree Nodes with Multiple Target Variables (의사결정나무에서 다중 목표변수를 고려한)

  • 김성준
    • Proceedings of the Korean Institute of Intelligent Systems Conference, 2003.05a, pp.243-246, 2003
  • Data mining is the process of discovering useful patterns for decision making from large amounts of data, and it has recently received much attention in a wide range of business and engineering fields. Classifying a group into subgroups is one of the most important tasks in data mining, and tree-based methods, known as decision trees, provide an efficient way of finding classification models. The primary concern in tree learning is to minimize node impurity, which is evaluated using a target variable in the data set. However, there are situations where multiple target variables should be taken into account, for example in manufacturing process monitoring, marketing science, and clinical and health analysis. The purpose of this article is to present several methods for measuring node impurity that are applicable to data sets with multiple target variables. Numerical examples are given for illustration, with discussion.
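
A simple instance of the general idea (one option only; the abstract does not spell out the paper's exact measures) is to average per-target impurities:

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a single target's labels at a node."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def multi_target_impurity(rows, weights=None):
    """Node impurity for rows carrying several target labels:
    a weighted average of the per-target Gini impurities."""
    k = len(rows[0])
    weights = weights or [1.0 / k] * k
    return sum(w * gini([row[j] for row in rows])
               for j, w in enumerate(weights))

# Each row holds two target labels, e.g. (defect_type, shift_outcome).
node = [("a", "x"), ("a", "y"), ("b", "x"), ("a", "x")]
```

A node that is pure in every target scores zero, so the usual tree-growing machinery applies unchanged once this impurity is plugged in.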

Rule Selection Method in Decision Tree Models (의사결정나무 모델에서의 중요 룰 선택기법)

  • Son, Jieun; Kim, Seoung Bum
    • Journal of Korean Institute of Industrial Engineers, v.40 no.4, pp.375-381, 2014
  • Data mining is the process of discovering useful patterns or information in large amounts of data. The decision tree is a data mining algorithm that can be used for both classification and prediction and has been widely applied because of its flexibility and interpretability. A classification tree generally generates a number of rules, each belonging to one of the predefined categories, and several rules may belong to the same category. In this case, it is necessary to determine the significance of each rule so that users can be given the rules' priority. The purpose of this paper is to propose a rule selection method for classification tree models that accommodates the number of observations, accuracy, and effectiveness of each rule. Our experiments demonstrate that the proposed method produces better performance than existing rule selection methods.
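
A minimal sketch of ranking rules by support and accuracy; the `alpha` trade-off and the rule statistics are hypothetical, since the abstract does not give the paper's exact effectiveness measure:

```python
def rule_score(n_covered, n_correct, n_total, alpha=0.5):
    """Composite rule score mixing support (coverage) with accuracy."""
    support = n_covered / n_total          # fraction of data the rule covers
    accuracy = n_correct / n_covered       # fraction it classifies correctly
    return alpha * support + (1 - alpha) * accuracy

# Hypothetical rules: rule -> (observations covered, correctly classified).
rules = {"r1": (80, 60), "r2": (10, 10), "r3": (40, 36)}
ranked = sorted(rules, key=lambda r: rule_score(*rules[r], n_total=130),
                reverse=True)
```

Note how `r2` is perfectly accurate yet ranks last: with so little support it carries little practical priority, which is the kind of trade-off a composite score captures.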

A methodology for Internet Customer segmentation using Decision Trees

  • Cho, Y.B.; Kim, S.H.
    • Proceedings of the Korea Intelligent Information System Society Conference, 2003.05a, pp.206-213, 2003
  • Applying existing decision tree algorithms to Internet retail customer classification tends to produce a bushy tree because the source data are imprecise. Excessively detailed analysis may not improve business effectiveness even when the results are derived from fully detailed segments, so it is necessary to determine an appropriate number of segments at a certain level of abstraction. In this study, we developed a stopping rule that considers the total amount of information gained while growing the rule tree. In addition to forward growth from the root to intermediate nodes at a certain level of abstraction, the decision tree is refined by backtracking pruning using misclassification loss information.
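
One hypothetical reading of such a stopping rule, assuming the tree records the information gain of each split it makes (the 95% threshold is illustrative, not the paper's value):

```python
def should_stop(split_gains, total_entropy, frac=0.95):
    """Stop growing when the information gain accumulated by the splits
    made so far reaches `frac` of the root node's entropy, rather than
    splitting until every leaf is pure."""
    return sum(split_gains) >= frac * total_entropy

# Root entropy of 1.0 bit; gains recorded after each split so far.
assert not should_stop([0.40, 0.30], total_entropy=1.0)   # keep growing
assert should_stop([0.40, 0.30, 0.27], total_entropy=1.0) # enough segments
```

The effect is to cap the number of segments once further splits add little information, which is the abstraction level the abstract argues for.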

Multivariate Decision Tree for High-dimensional Response Vector with Its Application

  • Lee, Seong-Keon
    • Communications for Statistical Applications and Methods, v.11 no.3, pp.539-551, 2004
  • Multiple responses are often observed in many application fields, for example customers' time-of-day patterns of Internet use. Decision trees for multiple responses have been constructed by many researchers. However, if the response is a high-dimensional vector that can be thought of as a discretized function, fitting a multivariate decision tree may be unsuccessful. Yu and Lambert (1999) suggested the spline tree and the principal component tree, which analyze a high-dimensional response vector using dimension-reduction techniques. In this paper, we propose the factor tree, which is more interpretable and competitive. Furthermore, using data from a Korean Internet company, we analyze users' time-of-day patterns.

Investment, Export, and Exchange Rate on Prediction of Employment with Decision Tree, Random Forest, and Gradient Boosting Machine Learning Models (투자와 수출 및 환율의 고용에 대한 의사결정 나무, 랜덤 포레스트와 그래디언트 부스팅 머신러닝 모형 예측)

  • Chae-Deug Yi
    • Korea Trade Review, v.46 no.2, pp.281-299, 2021
  • This paper analyzes the feasibility of using machine learning methods to forecast employment. Machine learning methods such as the decision tree, the artificial neural network, and ensemble models such as the random forest and the gradient boosting regression tree were used to forecast employment in the Busan regional economy. The comparison of their predictive abilities yielded the following main findings. First, machine learning methods can forecast employment well. Second, the employment forecasts of decision tree models varied somewhat with the depth of the trees. Third, the artificial neural network model did not show high predictive power. Fourth, the ensemble models, the random forest and the gradient boosting regression tree, showed higher predictive power. Since machine learning methods can predict employment accurately, their use should be extended to improve the accuracy of employment forecasting.
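
The second finding, that forecasts vary with tree depth, can be reproduced with a toy regression tree; this pure-Python sketch runs on made-up numbers, not the paper's Busan data:

```python
def fit_tree(xs, ys, depth):
    """Tiny 1-D regression tree: at each level, split at the threshold
    that most reduces squared error, up to `depth` levels deep."""
    if depth == 0 or len(set(xs)) == 1:
        return sum(ys) / len(ys)                  # leaf: mean response
    best = None
    for t in sorted(set(xs))[1:]:                 # candidate thresholds
        l = [y for x, y in zip(xs, ys) if x < t]
        r = [y for x, y in zip(xs, ys) if x >= t]
        sse = (sum((y - sum(l) / len(l)) ** 2 for y in l)
               + sum((y - sum(r) / len(r)) ** 2 for y in r))
        if best is None or sse < best[0]:
            best = (sse, t)
    t = best[1]
    lx = [(x, y) for x, y in zip(xs, ys) if x < t]
    rx = [(x, y) for x, y in zip(xs, ys) if x >= t]
    return (t, fit_tree(*zip(*lx), depth - 1), fit_tree(*zip(*rx), depth - 1))

def predict(node, x):
    if not isinstance(node, tuple):
        return node
    t, left, right = node
    return predict(left if x < t else right, x)

# Toy series: a predictor (say, investment) against employment.
xs = [1, 2, 3, 4, 5, 6, 7, 8]
ys = [10, 11, 10, 12, 20, 21, 20, 22]
shallow, deep = fit_tree(xs, ys, 1), fit_tree(xs, ys, 3)
```

Querying both trees at the same point, e.g. `predict(shallow, 4)` versus `predict(deep, 4)`, returns different forecasts, which is exactly the depth sensitivity the abstract reports.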

Using CART to Evaluate Performance of Tree Model (CART를 이용한 Tree Model의 성능평가)

  • Jung, Yong Gyu; Kwon, Na Yeon; Lee, Young Ho
    • Journal of Service Research and Studies, v.3 no.1, pp.9-16, 2013
  • Classification is a universal data analysis technique that requires considerable effort but produces results that are easy to understand. The decision tree, developed by Breiman, is one of the most representative methods. Decision trees have two core components: repeatedly partitioning the space of the independent variables, and pruning using evaluation data. In classification problems the response variables are categorical, and the variable space is repeatedly split into non-overlapping multidimensional rectangles; the predictors may be continuous, binary, or ordinal. In this paper, we obtain the precision, recall, and accuracy of the classification tree in order to classify new cases and evaluate its performance through experiments.
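
The three evaluation coefficients reduce to simple ratios over a binary confusion matrix; the counts below are hypothetical:

```python
def metrics(tp, fp, fn, tn):
    """Precision, recall, and accuracy from binary confusion counts:
    true/false positives (tp, fp) and false/true negatives (fn, tn)."""
    precision = tp / (tp + fp)               # of predicted positives, correct
    recall = tp / (tp + fn)                  # of actual positives, found
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, accuracy

# Hypothetical test-set confusion counts for a classification tree.
p, r, a = metrics(tp=40, fp=10, fn=5, tn=45)
```

Reporting all three together, as the paper does, guards against a tree that scores well on accuracy alone because one class dominates.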

Interesting Node Finding Criteria for Regression Trees (회귀의사결정나무에서의 관심노드 찾는 분류 기준법)

  • 이영섭
    • The Korean Journal of Applied Statistics, v.16 no.1, pp.45-53, 2003
  • Regression trees are decision trees used to predict a continuous response. The general splitting criteria in tree growing are based on a compromise between the impurities of the left and right child nodes. By picking only the more interesting subset and ignoring the other, the new splitting criteria proposed in this paper no longer split based on a compromise between child nodes. The tree structure produced by the new criteria may be unbalanced but plausible: it can find an interesting subset as early as possible and express it with a simple clause. As a result, the tree is highly interpretable at the cost of a small loss in accuracy.
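
The contrast between the classic compromise criterion and a one-sided criterion can be shown schematically (a reading of the abstract with variance as node impurity; the paper's exact criteria are not given here):

```python
def variance(ys):
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys) / len(ys)

def compromise_score(left, right):
    """Classic criterion: size-weighted impurity over both children."""
    n = len(left) + len(right)
    return (len(left) * variance(left) + len(right) * variance(right)) / n

def one_sided_score(left, right):
    """'Interesting node' criterion: judge a split only by its purest
    child, so a small homogeneous subset can be isolated early."""
    return min(variance(left), variance(right))

# Two candidate splits of the same node's responses (lower score wins).
split_a = ([1, 1, 1], [9, 5, 13])    # isolates a pure subset immediately
split_b = ([1, 1, 1, 5], [9, 13])    # better balanced, neither side pure
```

The compromise criterion prefers `split_b`, while the one-sided criterion prefers `split_a`, whose pure child can be described by a single simple clause, which is the interpretability gain the abstract describes.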

Building a Model for Estimate the Soil Organic Carbon Using Decision Tree Algorithm (의사결정나무를 이용한 토양유기탄소 추정 모델 제작)

  • Yoo, Su-Hong; Heo, Joon; Jung, Jae-Hoon; Han, Su-Hee
    • Journal of Korean Society for Geospatial Information Science, v.18 no.3, pp.29-35, 2010
  • Soil organic carbon (SOC), which aids forest formation and the control of carbon dioxide in the air, is an important factor influencing global warming. Excavating samples over an entire area is a very inefficient way to discover the distribution of SOC, so a suitable model for estimating the relative amount of SOC is needed. In the present study, a model based on a decision tree algorithm is introduced to estimate the amount of SOC while assessing influencing factors such as altitude, aspect, slope, and tree type. The model was applied to a real site and validated by 10-fold cross-validation using two software packages, See5 and Weka. According to the See5 results, the amount of SOC in surface layers is highly related to tree type, while in middle-depth layers it is dominated by both tree type and altitude; the estimation accuracy was 70.8% in surface layers and 64.7% in middle-depth layers. Weka gave a similar result for surface layers, but in middle-depth layers aspect was also found to be meaningful along with tree type and altitude; the estimation accuracies were 68.87% and 60.65% in surface and middle-depth layers, respectively. Based on these tests, the introduced model is considered useful for estimating the amount of SOC and for producing SOC maps over wide areas.
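
The 10-fold cross-validation used here follows the standard pattern; a generic sketch (the majority-class toy model below stands in for the See5/Weka classifiers, which are not reproduced):

```python
def k_fold_indices(n, k=10):
    """Partition sample indices 0..n-1 into k near-equal folds; each fold
    serves once as the validation set and k-1 times as training data."""
    folds = [list(range(i, n, k)) for i in range(k)]
    return [(sorted(set(range(n)) - set(f)), f) for f in folds]

def cross_validated_accuracy(fit, score, data, k=10):
    """Average validation accuracy over the k folds."""
    accs = []
    for train_idx, val_idx in k_fold_indices(len(data), k):
        model = fit([data[i] for i in train_idx])
        accs.append(score(model, [data[i] for i in val_idx]))
    return sum(accs) / k

# Toy check with a majority-class "model" on 20 labeled samples.
data = [0] * 12 + [1] * 8                      # hypothetical labels only
fit = lambda train: max(set(train), key=train.count)
score = lambda m, val: sum(y == m for y in val) / len(val)
```

Because every sample is validated exactly once, the averaged accuracy is a far less optimistic estimate than scoring the tree on its own training data.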

An Efficient Pedestrian Detection Approach Using a Novel Split Function of Hough Forests

  • Do, Trung Dung; Vu, Thi Ly; Nguyen, Van Huan; Kim, Hakil; Lee, Chongho
    • Journal of Computing Science and Engineering, v.8 no.4, pp.207-214, 2014
  • In pedestrian detection applications, one of the most popular frameworks, which has received extensive attention in recent years, is the Hough forest (HF). To improve detection accuracy, this paper proposes a novel split function that exploits the statistical information of the training set stored in each node during the construction of the forest. The proposed split function makes the trees in the forest more robust to noise and illumination changes. Moreover, the errors at each stage of training the forest are minimized using a global loss function that helps the trees track harder training samples. After the forest is trained, the standard HF detector searches for and localizes instances in the image. Experimental results showed that the detection performance of the proposed framework improved significantly with respect to the standard HF and the alternating decision forest (ADF) on several public datasets.