• Title/Summary/Keyword: decision tree algorithm (의사결정나무알고리즘)

Search Results: 106

A Technology for Representing Decision Tree Structure Information Transferred in a Distributed Environment (분산 환경에서 전송되는 의사결정나무 구조 정보 표현 기술)

  • Kim, Choong-Gon;Baik, Sung-Wook
    • Proceedings of the Korean Information Science Society Conference / 2006.10c / pp.195-198 / 2006
  • Distributed data mining uses decision tree algorithms. To mine distributed data with a decision tree algorithm, an agent that lacks the decision tree structure information must receive it from an agent that has it. Because network transmission speed is generally limited and varies by environment, the amount of transmitted decision tree structure information must be kept as small as possible for distributed data mining to run more efficiently than non-distributed data mining. This paper presents a method for representing decision tree structure information and an implementation that transmits it more efficiently. (This research was supported by the Seoul Metropolitan Government's New Technology R&D Support Program.)

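The abstract above concerns encoding a decision tree's structure compactly for transmission between agents. A minimal sketch of one common approach, a pre-order traversal flattened into a token list (the node layout is hypothetical, not the paper's actual encoding):

```python
# Minimal sketch of serializing a decision tree for network transfer.
# The dict-based node layout here is hypothetical, not the paper's format.

def serialize(node):
    """Pre-order encoding: internal nodes emit (feature, threshold),
    leaves emit a class label; "#" marks a missing child."""
    if node is None:
        return ["#"]
    if "label" in node:                      # leaf node
        return ["L", node["label"]]
    return (["N", node["feature"], node["threshold"]]
            + serialize(node["left"]) + serialize(node["right"]))

def deserialize(tokens):
    """Rebuild the tree by consuming the flat token list in order."""
    tok = tokens.pop(0)
    if tok == "#":
        return None
    if tok == "L":
        return {"label": tokens.pop(0)}
    feature, threshold = tokens.pop(0), tokens.pop(0)
    return {"feature": feature, "threshold": threshold,
            "left": deserialize(tokens), "right": deserialize(tokens)}

tree = {"feature": 0, "threshold": 2.5,
        "left": {"label": "A"}, "right": {"label": "B"}}
flat = serialize(tree)                       # compact list, easy to transmit
assert deserialize(list(flat)) == tree       # round-trips losslessly
```

A flat token list like this is far smaller than a naive object dump, which is the kind of saving the paper's transmission-volume argument depends on.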

Customer Churning Forecasting and Strategic Implication in Online Auto Insurance using Decision Tree Algorithms (의사결정나무를 이용한 온라인 자동차 보험 고객 이탈 예측과 전략적 시사점)

  • Lim, Se-Hun;Hur, Yeon
    • Information Systems Review / v.8 no.3 / pp.125-134 / 2006
  • This article adopts a decision tree algorithm (C5.0) to predict customer churn in an online auto insurance environment. Using a sample of online auto insurance contracts sold between 2003 and 2004, we test how well the decision tree-based model (C5.0) predicts customer churn. We compare the results of C5.0 with those of a logistic regression model (LRM) and a multivariate discriminant analysis (MDA) model. The results show that C5.0 outperforms the other models in predictability. Based on these results, this study suggests ways of setting marketing strategy and of developing the online auto insurance business.

A study for improving data mining methods for continuous response variables (연속형 반응변수를 위한 데이터마이닝 방법 성능 향상 연구)

  • Choi, Jin-Soo;Lee, Seok-Hyung;Cho, Hyung-Jun
    • Journal of the Korean Data and Information Science Society / v.21 no.5 / pp.917-926 / 2010
  • It is known that bagging and boosting techniques improve performance in classification problems. A number of researchers have demonstrated the high performance of bagging and boosting through experiments for categorical responses, but not for continuous responses. We study whether bagging and boosting improve data mining methods for continuous responses, such as linear regression, decision trees, and neural networks. The analysis of eight real data sets empirically demonstrates the high performance of bagging and boosting.
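The idea studied above, bagging and boosting applied to a continuous response, can be sketched with scikit-learn's regression ensembles on synthetic data (the data and model settings here are illustrative, not the paper's eight data sets):

```python
# Sketch: bagging and boosting for a continuous response, compared
# against a single regression tree on synthetic data via cross-validation.
import numpy as np
from sklearn.ensemble import BaggingRegressor, GradientBoostingRegressor
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(400, 3))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + rng.normal(scale=0.3, size=400)

models = {
    "single tree": DecisionTreeRegressor(random_state=0),
    "bagged trees": BaggingRegressor(DecisionTreeRegressor(),
                                     n_estimators=50, random_state=0),
    "boosted trees": GradientBoostingRegressor(random_state=0),
}
scores = {name: cross_val_score(model, X, y, cv=5, scoring="r2").mean()
          for name, model in models.items()}
for name, r2 in scores.items():
    print(f"{name}: mean R^2 = {r2:.3f}")
```

Bagging averages many trees fit on bootstrap resamples (variance reduction), while boosting fits trees sequentially on residuals, which is why both typically beat a single tree on noisy regression targets.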

S-QUEST and the Development of an Early Diagnosis System for Intrauterine Growth Restriction (IUGR) (S-QUEST와 태아발육제한증 (IUGR) 조기진단시스템 개발)

  • Cha, Gyeong-Jun;Park, Mun-Il;Choe, Hang-Seok;Sin, Yeong-Jae
    • Proceedings of the Korean Statistical Society Conference / 2003.05a / pp.171-176 / 2003
  • Data mining is the process of discovering information needed for decision-making from massive amounts of data. In this study, as part of bioinformatics, we implement QUEST, one of the decision tree algorithms that provide statistical decision-making systems in the medical field, in S-PLUS (hereafter S-QUEST) and analyze intrauterine growth restriction (IUGR) data.

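QUEST's distinguishing step is that it selects the split variable with an unbiased statistical test (an ANOVA F-test for numeric predictors) before searching for a split point. A sketch of that selection step in Python, on synthetic stand-ins for the IUGR variables (the original implementation is in S-PLUS):

```python
# Sketch of QUEST-style split variable selection: score each numeric
# predictor with an ANOVA F-test against the class label and pick the
# most significant one. The variables below are synthetic stand-ins,
# not the paper's IUGR measurements.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(2)
n = 200
label = rng.integers(0, 2, size=n)                    # e.g. IUGR vs. normal
informative = label + rng.normal(scale=0.5, size=n)   # shifts with the class
noise = rng.normal(size=n)                            # unrelated predictor

pvalues = {}
for name, x in [("informative", informative), ("noise", noise)]:
    stat, p = f_oneway(x[label == 0], x[label == 1])  # one-way ANOVA F-test
    pvalues[name] = p

best = min(pvalues, key=pvalues.get)   # QUEST would split on this variable
print(best, pvalues)
```

Because every candidate variable is scored with the same test rather than an exhaustive split search, this selection is unbiased toward variables with many distinct values, which is QUEST's main advantage over greedy methods.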

Measuring Pattern Recognition from Decision Tree and Geometric Data Analysis of Industrial CR Images (산업용 CR영상의 기하학적 데이터 분석과 의사결정나무에 의한 측정 패턴인식)

  • Hwang, Jung-Won;Hwang, Jae-Ho
    • Journal of the Institute of Electronics Engineers of Korea CI / v.45 no.5 / pp.56-62 / 2008
  • This paper proposes the use of decision tree classification for measurement pattern recognition from industrial Computed Radiography (CR) images used in the nondestructive evaluation (NDE) of steel tubes. NDE problems naturally call for machine learning techniques to identify patterns and classify them. The attributes of the decision tree are taken from the NDE test procedure. Geometric features, such as radiative angle, gradient, and distance, are estimated from the analysis of the input image data. These features make it easy and accurate to classify an input object into one of the pre-specified classes on the decision tree. The algorithm simplifies the characterization of NDE results and facilitates the determination of features. The experimental results verify the usefulness of the proposed algorithm.
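The classification step described above, geometric features feeding a decision tree that assigns pre-specified classes, can be sketched as follows. The feature values and the labeling rule are synthetic, not measurements from real CR images:

```python
# Sketch: classify objects by geometric features (radiative angle,
# gradient, distance) with a decision tree, as in the NDE setting above.
# All values below are synthetic; the defect rule is hypothetical.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(3)
n = 300
angle = rng.uniform(0, 90, size=n)       # radiative angle (degrees)
gradient = rng.uniform(0, 1, size=n)     # edge gradient
distance = rng.uniform(0, 10, size=n)    # distance feature

# hypothetical rule: "defect" when the angle is steep and the edge sharp
label = ((angle > 45) & (gradient > 0.5)).astype(int)

X = np.column_stack([angle, gradient, distance])
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, label)
print(export_text(clf, feature_names=["angle", "gradient", "distance"]))
```

The printed rule tree is the appeal of this approach for NDE: the learned thresholds on angle and gradient can be read and checked directly against the test procedure.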

A Study on Ordinal Split Variable Selection in Decision Trees (의사결정나무에서 순서형 분리 변수 선택에 관한 연구)

  • 김현중;송주미
    • Proceedings of the Korean Statistical Society Conference / 2004.11a / pp.283-288 / 2004
  • Much research has addressed split variable selection in decision trees, but most of it has been limited to continuous and nominal variables. This study focuses on ordinal variables and compares the variable selection power of existing algorithms such as CART, QUEST, and CRUISE with the nonparametric approaches proposed here: the Kolmogorov-Smirnov (K-S) test and the Cramer-von Mises test. The proposed Cramer-von Mises test method showed better variable selection power and stability than the other algorithms.


A study on decision tree creation using intervening variable (매개 변수를 이용한 의사결정나무 생성에 관한 연구)

  • Cho, Kwang-Hyun;Park, Hee-Chang
    • Journal of the Korean Data and Information Science Society / v.22 no.4 / pp.671-678 / 2011
  • Data mining searches for interesting relationships among items in a given database. The methods of data mining include decision trees, association rules, clustering, neural networks, and so on. The decision tree approach is most useful in classification problems, dividing the search space into rectangular regions. Decision tree algorithms are used extensively for data mining in many domains such as retail target marketing and customer classification. When a decision tree model is created, it can become complicated depending on the model-creation criteria and the number of input variables. In particular, model creation and analysis are difficult when there are many input variables. In this study, we examine decision tree creation using intervening variables. We apply the method to actual data to remove unnecessary input variables from the created model and examine its efficiency.
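The goal stated above is to drop input variables that do not contribute to the fitted tree. One simple proxy for this idea (not the paper's intervening-variable method) is to discard features whose impurity-based importance in the fitted tree is near zero:

```python
# Sketch: prune uninformative input variables after fitting a tree.
# This uses impurity-based feature importances as a stand-in for the
# paper's intervening-variable method; data is synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + X[:, 2] > 0).astype(int)   # only features 0 and 2 matter

clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
keep = [i for i, imp in enumerate(clf.feature_importances_) if imp > 0.05]
print("informative features:", keep)      # expected to include 0 and 2
```

Refitting on only the retained columns then yields a smaller, easier-to-read tree, which is the efficiency gain the abstract describes.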

A study on decision tree creation using marginally conditional variables (주변조건부 변수를 이용한 의사결정나무모형 생성에 관한 연구)

  • Cho, Kwang-Hyun;Park, Hee-Chang
    • Journal of the Korean Data and Information Science Society / v.23 no.2 / pp.299-307 / 2012
  • Data mining is a method of searching for interesting relationships among items in a given database. The decision tree is a typical data mining algorithm that classifies or predicts a group as a set of subgroups. In general, when researchers create a decision tree model, the generated model can be complicated by the model-creation criteria and the number of input variables. In particular, decision trees with a large number of input variables can be complex and difficult to analyze. When creating a decision tree model, marginally conditional variables (intervening variables, external variables) among the inputs are not directly relevant to the target. In this study, we suggest a method of creating a decision tree using marginally conditional variables and apply it to actual data to examine its efficiency.
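The underlying idea, that two inputs can look associated only because an external variable drives both, can be illustrated with a conditioning check. This is only a toy demonstration of the phenomenon, not the paper's procedure:

```python
# Sketch: a spurious association between two inputs that vanishes once
# the external (marginally conditional) variable is accounted for.
# Illustrative only; not the paper's method.
import numpy as np

rng = np.random.default_rng(5)
z = rng.normal(size=2000)                  # external / intervening variable
a = z + rng.normal(scale=0.5, size=2000)   # both inputs depend on z
b = z + rng.normal(scale=0.5, size=2000)

marginal = np.corrcoef(a, b)[0, 1]         # looks strongly associated
resid_a = a - z                            # remove z's contribution
resid_b = b - z
conditional = np.corrcoef(resid_a, resid_b)[0, 1]   # association vanishes
print(f"marginal r = {marginal:.2f}, conditional r = {conditional:.2f}")
```

A variable like `a` or `b` that only tracks the target through `z` adds tree depth without adding information, which is why the paper argues for handling such variables separately.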

A study on removal of unnecessary input variables using multiple external association rule (다중외적연관성규칙을 이용한 불필요한 입력변수 제거에 관한 연구)

  • Cho, Kwang-Hyun;Park, Hee-Chang
    • Journal of the Korean Data and Information Science Society / v.22 no.5 / pp.877-884 / 2011
  • The decision tree is a representative algorithm of data mining, used in many domains such as retail target marketing, fraud detection, data reduction, variable screening, and category merging. The method is most useful in classification problems, making predictions for a target group after dividing it into several small groups. When we create a decision tree model with a large number of input variables, the resulting complex trees make exploration and analysis difficult. We can also often find that an association appears to exist between input variables because of external variables, despite there being no intrinsic association. In this paper, we study a method of removing unnecessary input variables using multiple external association rules, and then apply it to actual data to examine its efficiency.
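Association rules are scored by support, confidence, and lift; lift near 1 indicates two variables co-occur no more than independence predicts. A minimal computation of those quantities for two binary inputs (illustrative only, not the paper's multiple external rules):

```python
# Sketch: support, confidence, and lift for a rule x -> y between two
# binary input variables. Lift near 1 suggests no intrinsic association.
# Illustrative computation, not the paper's multiple-external-rule method.
import numpy as np

rng = np.random.default_rng(6)
x = rng.integers(0, 2, size=1000).astype(bool)
y = rng.integers(0, 2, size=1000).astype(bool)   # independent of x

support_xy = np.mean(x & y)                # P(x and y)
confidence = support_xy / np.mean(x)       # P(y | x)
lift = confidence / np.mean(y)             # ~1 when x, y are independent
print(f"support={support_xy:.3f} confidence={confidence:.3f} lift={lift:.3f}")
```

In the paper's setting, rules like this computed against external variables flag input pairs whose apparent link is not intrinsic, marking candidates for removal.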

Ordinal Variable Selection in Decision Trees (의사결정나무에서 순서형 분리변수 선택에 관한 연구)

  • Kim Hyun-Joong
    • The Korean Journal of Applied Statistics / v.19 no.1 / pp.149-161 / 2006
  • The most important component of a decision tree algorithm is the rule for split variable selection. Many earlier algorithms, such as CART and C4.5, use a greedy search for variable selection. Recently, many methods were developed to cope with the weaknesses of greedy search. Most algorithms have different selection criteria depending on the type of variable: continuous or nominal. However, ordinal variables are usually treated as continuous ones. This approach causes no trouble for the methods that use greedy search, but it may cause problems for the newer algorithms because they use statistical methods valid only for continuous or nominal types. In this paper, we propose an ordinal variable selection method that uses the Cramer-von Mises testing procedure. We performed comparisons among CART, C4.5, QUEST, CRUISE, and the new method, and showed that the new method has good variable selection power for ordinal variables.
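The selection rule above scores each predictor by how strongly its distribution differs between classes using a Cramer-von Mises statistic. A sketch with SciPy's two-sample version on a synthetic ordinal predictor (the data and the competing noise variable are illustrative, not the paper's simulation design):

```python
# Sketch: Cramer-von Mises two-sample test as an ordinal split variable
# selection score, as proposed above. Data is synthetic and illustrative.
import numpy as np
from scipy.stats import cramervonmises_2samp

rng = np.random.default_rng(7)
n = 300
label = rng.integers(0, 2, size=n)
# ordinal predictor with levels 1..5, shifted upward for class 1
ordinal = np.clip(rng.integers(1, 5, size=n) + label, 1, 5)
noise = rng.integers(1, 6, size=n)        # unrelated ordinal predictor

pvals = {}
for name, x in [("ordinal", ordinal), ("noise", noise)]:
    res = cramervonmises_2samp(x[label == 0], x[label == 1])
    pvals[name] = res.pvalue
print(pvals)   # the informative predictor should score far smaller
```

Because the test compares whole empirical distribution functions rather than means, it respects the ordering of the levels without pretending the spacing between them is meaningful, which is the point of treating ordinal variables separately from continuous ones.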