• Title/Summary/Keyword: Decision Tree Analysis (결정나무분석)

Analysis of Traffic Accident Types Using a Decision Tree Model (의사결정나무모형을 이용한 교통사고 유형 분석)

  • 김유진;최종후;이의용
    • Proceedings of the Korean Statistical Society Conference / 2000.11a / pp.257-260 / 2000
  • In this study, we attempt to analyze traffic accident types using a decision tree model. The data used in the analysis are detailed traffic accident investigation records collected by the Road Traffic Safety Authority (도로교통안전관리공단). The target variable is 'accident outcome', and the explanatory variables relate to 'human factors', 'vehicle factors', and 'road-environment factors'. The main explanatory variables contributing to the target variable were identified, and traffic accidents were categorized on the basis of the resulting decision tree model.

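The abstract above describes a classification tree with a categorical target (accident type) and human, vehicle, and road-environment explanatory variables. A minimal sketch of that kind of analysis, assuming a hypothetical file accidents.csv and made-up column names (the original survey fields are not listed here):

```python
# Minimal sketch: classification tree for accident types.
# The file "accidents.csv" and all column names are hypothetical stand-ins
# for the survey fields described in the abstract.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("accidents.csv")          # hypothetical data file
target = "accident_type"                   # stands in for the '사고내용' target
features = ["driver_age", "vehicle_type", "road_surface", "weather"]

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df[target], test_size=0.3, random_state=0)

categorical = ["vehicle_type", "road_surface", "weather"]
model = Pipeline([
    ("encode", ColumnTransformer(
        [("cat", OneHotEncoder(handle_unknown="ignore"), categorical)],
        remainder="passthrough")),
    ("tree", DecisionTreeClassifier(max_depth=4, random_state=0)),
])
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```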

A Study of Cyber Shopping Malls Using Decision Trees and Correspondence Analysis (의사결정나무와 대응분석을 이용한 사이버 쇼핑몰의 연구)

  • Go, Bong-Seong;Kim, Yeon-Hyeong
    • Korean Data and Information Science Society: Conference Proceedings / 2001.10a / pp.12-12 / 2001
  • Driven by information technology, the scale of electronic commerce is growing rapidly. This study examines the characteristics of visitors and purchasing customers of a cyber shopping mall operated as a general-merchandise mall, classifies churned customers using a decision tree, and conducts a correspondence analysis between the product groups registered in the mall and demographic variables in order to improve understanding of the shopping mall.

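The second part of the study pairs the decision tree with a correspondence analysis between registered product groups and demographic variables. A compact sketch of classical correspondence analysis on a contingency table, written directly with NumPy; the counts below are invented purely for illustration:

```python
# Correspondence analysis of a (product group x demographic group)
# contingency table via the SVD of the standardized residuals.
import numpy as np

N = np.array([[120,  80,  40],    # rows: product groups (invented counts)
              [ 60, 150,  90],    # cols: demographic groups
              [ 30,  70, 160]], dtype=float)

P = N / N.sum()                    # correspondence matrix
r = P.sum(axis=1)                  # row masses
c = P.sum(axis=0)                  # column masses
S = np.diag(r**-0.5) @ (P - np.outer(r, c)) @ np.diag(c**-0.5)

U, sv, Vt = np.linalg.svd(S, full_matrices=False)
row_coords = np.diag(r**-0.5) @ U * sv      # principal row coordinates
col_coords = np.diag(c**-0.5) @ Vt.T * sv   # principal column coordinates

print("share of inertia:", (sv**2 / (sv**2).sum()).round(3))
print("row coordinates:\n", row_coords.round(3))
print("column coordinates:\n", col_coords.round(3))
```

Plotting the first two columns of the row and column coordinates together gives the usual correspondence-analysis biplot of product groups against demographic groups.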

Customer Churning Forecasting and Strategic Implication in Online Auto Insurance using Decision Tree Algorithms (의사결정나무를 이용한 온라인 자동차 보험 고객 이탈 예측과 전략적 시사점)

  • Lim, Se-Hun;Hur, Yeon
    • Information Systems Review / v.8 no.3 / pp.125-134 / 2006
  • This article adopts a decision tree algorithm (C5.0) to predict customer churn in an online auto insurance environment. Using a sample of online auto insurance contracts sold between 2003 and 2004, we test how the decision tree-based model (C5.0) performs in predicting customer churn. We compare the results of C5.0 with those of a logistic regression model (LRM) and a multivariate discriminant analysis (MDA) model. The results show that C5.0 outperforms the other models in predictive ability. Based on these results, the study suggests ways of setting marketing strategy and developing the online auto insurance business.
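
The comparison described above (C5.0 vs. LRM vs. MDA) can be sketched with open-source stand-ins: scikit-learn does not ship C5.0, so a CART-style DecisionTreeClassifier takes its place, with LogisticRegression and LinearDiscriminantAnalysis standing in for the other two models. The churn data here are synthetic placeholders:

```python
# Sketch of the model comparison in the abstract, with CART standing in
# for C5.0 (which is not available in scikit-learn). Data are synthetic.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=10,
                           weights=[0.8, 0.2], random_state=0)  # churn ~ 20%
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

models = {
    "decision tree (CART, C5.0 stand-in)": DecisionTreeClassifier(
        max_depth=5, random_state=0),
    "logistic regression (LRM)": LogisticRegression(max_iter=1000),
    "discriminant analysis (MDA stand-in)": LinearDiscriminantAnalysis(),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: accuracy = {acc:.3f}")
```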

Dynamic Decision Tree for Data Mining (데이터마이닝을 위한 동적 결정나무)

  • Choi, Byong-Su;Cha, Woon-Ock
    • Communications for Statistical Applications and Methods / v.16 no.6 / pp.959-969 / 2009
  • The decision tree is a typical tool for data classification, and it is implemented in DAVIS (Huh and Song, 2002). All of the visualization and statistical clustering tools implemented in DAVIS can communicate with the decision tree. This paper presents methods for applying data visualization techniques to the decision tree using a real data set.
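
DAVIS itself is not shown here, but the general idea of coupling a fitted tree with a visual display can be sketched with scikit-learn's plot_tree, using a standard data set as a stand-in for the real data in the paper:

```python
# Minimal sketch: fit a decision tree and visualize its structure.
# The iris data stand in for the real data set used in the paper.
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, plot_tree

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(iris.data, iris.target)

plt.figure(figsize=(10, 6))
plot_tree(tree, feature_names=iris.feature_names,
          class_names=list(iris.target_names), filled=True)
plt.show()
```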

An Analysis of Choice Behavior for Tour Type of Commercial Vehicle using Decision Tree (의사결정나무를 이용한 화물자동차 투어유형 선택행태 분석)

  • Kim, Han-Su;Park, Dong-Ju;Kim, Chan-Seong;Choe, Chang-Ho;Kim, Gyeong-Su
    • Journal of Korean Society of Transportation / v.28 no.6 / pp.43-54 / 2010
  • In recent years there have been studies on tour-based approaches to freight travel demand modelling. The purpose of this paper is to analyze the tour type choice behavior of commercial vehicles, divided into round trips and chained tours. The methods of the study are based on a decision tree and a logit model. The results indicate that the explanatory variables for classifying commercial vehicle tour types are loading factor, average goods quantity, and total goods quantity. The results of the decision tree method are similar to those of the logit model. In addition, the explanatory variables for the tour type classification of small trucks do not differ from those for medium trucks, implying that the most important factor in vehicle tour planning is how goods are loaded, such as shipment size and total quantity.
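
A hedged sketch of comparing which variables drive a binary tour-type choice (round trip vs. chained tour) under a tree and under a logit model; the column names merely echo the abstract (loading factor, average and total goods quantity) and the data are synthetic:

```python
# Sketch: compare variable importance in a decision tree with logit
# coefficients for a binary tour-type choice. Synthetic data only.
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 1000
X = pd.DataFrame({
    "loading_factor": rng.uniform(0, 1, n),
    "avg_goods_quantity": rng.gamma(2.0, 2.0, n),
    "total_goods_quantity": rng.gamma(3.0, 3.0, n),
})
# Synthetic rule: heavily loaded vehicles tend toward round trips.
y = (X["loading_factor"] + 0.05 * X["total_goods_quantity"]
     + rng.normal(0, 0.3, n) > 1.0).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
logit = LogisticRegression().fit(StandardScaler().fit_transform(X), y)

print("tree importances :", dict(zip(X.columns, tree.feature_importances_.round(3))))
print("logit coefficients:", dict(zip(X.columns, logit.coef_[0].round(3))))
```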

A study on the comparison of descriptive variables reduction methods in decision tree induction: A case of prediction models of pension insurance in life insurance company (생명보험사의 개인연금 보험예측 사례를 통해서 본 의사결정나무 분석의 설명변수 축소에 관한 비교 연구)

  • Lee, Yong-Goo;Hur, Joon
    • Journal of the Korean Data and Information Science Society / v.20 no.1 / pp.179-190 / 2009
  • In the financial industry, the decision tree algorithm has been widely used for classification analysis. One of the major difficulties in this setting is that there are many explanatory variables to be considered for modeling, so an effective method is needed for reducing the number of explanatory variables without seriously affecting the modeling results. In this research, we compare various variable reduction methods and identify the best method based on the modeling accuracy of the tree algorithm. We applied the methods to the pension insurance data of a life insurance company to obtain empirical results. As a result, we found that selecting variables using the sensitivity analysis of a neural network is the most effective method for reducing the number of variables while maintaining accuracy.

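The winning approach in the study, variable selection via the sensitivity analysis of a neural network, can be approximated in open-source terms by permutation importance on an MLP followed by refitting the tree on the retained variables. This is a rough analogue, not the paper's exact procedure, and the data are synthetic:

```python
# Sketch: rank variables by permutation importance of a neural network
# (a rough stand-in for sensitivity analysis), keep the top ones, and
# refit a decision tree on the reduced variable set. Synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=30,
                           n_informative=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                    random_state=0).fit(X_train, y_train)
imp = permutation_importance(mlp, X_test, y_test,
                             n_repeats=5, random_state=0)
keep = np.argsort(imp.importances_mean)[::-1][:8]   # retain the top 8 variables

full = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_train, y_train)
reduced = DecisionTreeClassifier(max_depth=5, random_state=0).fit(
    X_train[:, keep], y_train)
print("full tree accuracy   :", full.score(X_test, y_test))
print("reduced tree accuracy:", reduced.score(X_test[:, keep], y_test))
```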

Classification and Recognition of Movement Behavior of Animal based on Decision Tree (의사결정나무를 이용한 생물의 행동 패턴 구분과 인식)

  • Lee, Seng-Tai;Kim, Sung-Shin
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2005.11a / pp.225-228 / 2005
  • In this paper, four features were extracted from two-dimensional images of organisms, and a decision tree was applied to recognize and classify the organisms' behavioral response patterns to a chemical treatment. From the physical characteristics representing the behavior patterns, namely speed, turning angle, and movement distance, features were extracted using the proportion of above-median speed, FFT (Fast Fourier Transformation), two-dimensional histogram area, fractal measures, and center of gravity. A decision tree model was built from these four extracted feature variables, and the organisms' responses to the chemical treatment were analyzed. The results show that, compared with the conventional methods previously used to distinguish behavior patterns, the decision tree applied in this study better captures the physical components of the behavior patterns, which facilitates the analysis of movement behavior in a specific environment.

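The feature-extraction step described above (speed, turning angle, and distance summarized via FFT, histogram area, fractal, and center-of-mass measures) can be hinted at with a much simpler sketch: derive a few summary statistics from an (x, y) trajectory and feed them to a tree. The features below are simplified stand-ins, not the paper's exact descriptors, and the trajectories are simulated:

```python
# Sketch: turn a 2-D movement trajectory into a handful of summary
# features (speed and turning-angle statistics, dominant FFT magnitude)
# and classify with a decision tree. Trajectories here are simulated.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def trajectory_features(xy):
    """xy: array of shape (T, 2) holding positions over time."""
    steps = np.diff(xy, axis=0)
    speed = np.linalg.norm(steps, axis=1)
    heading = np.arctan2(steps[:, 1], steps[:, 0])
    turn = np.abs(np.diff(np.unwrap(heading)))
    spectrum = np.abs(np.fft.rfft(speed - speed.mean()))
    return np.array([speed.mean(), speed.std(),
                     turn.mean(), spectrum[1:].max()])

rng = np.random.default_rng(0)
def simulate(active, T=200):
    scale = 1.0 if active else 0.3          # "treated" animals move less
    return np.cumsum(rng.normal(0, scale, size=(T, 2)), axis=0)

X = np.array([trajectory_features(simulate(i % 2 == 0)) for i in range(200)])
y = np.array([i % 2 for i in range(200)])   # 0 = untreated, 1 = treated

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print("training accuracy:", tree.score(X, y))
```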

Comparative Analysis of Predictors of Depression for Residents in a Metropolitan City using Logistic Regression and Decision Making Tree (로지스틱 회귀분석과 의사결정나무 분석을 이용한 일 대도시 주민의 우울 예측요인 비교 연구)

  • Kim, Soo-Jin;Kim, Bo-Young
    • The Journal of the Korea Contents Association / v.13 no.12 / pp.829-839 / 2013
  • This study is descriptive research aimed at predicting and comparing the factors of depression affecting residents of a metropolitan city by using logistic regression analysis and decision tree analysis. The subjects were 462 residents ($20 \leq \text{age} < 65$) of a metropolitan city. Data were collected between October 7, 2011 and October 21, 2011 and analyzed with frequency analysis, percentages, means and standard deviations, the $\chi^2$-test, the t-test, logistic regression analysis, the ROC curve, and a decision tree using the SPSS 18.0 program. The common predictors of depression among community residents were social dysfunction, perceived physical symptoms, and family support. The specificity and sensitivity of the logistic regression were 93.8% and 42.5%, respectively. The receiver operating characteristic (ROC) curve was used to determine an optimal model; the AUC (area under the curve) was .84, and the ROC curve was statistically significant (p < .001). The specificity and sensitivity of the decision tree analysis were 98.3% and 20.8%, respectively. In terms of overall classification accuracy, the logistic regression achieved 82.0% and the decision tree analysis 80.5%. These results suggest that the logistic regression analysis, which showed higher sensitivity and classification accuracy, may provide a useful basis for establishing a depression prediction model for community residents.
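
The reported comparison rests on sensitivity, specificity, overall classification accuracy, and the ROC AUC; a small sketch of computing those quantities for a logistic model, on synthetic data rather than the study's survey, follows:

```python
# Sketch: logistic regression with ROC AUC, sensitivity, specificity,
# and overall accuracy, as used in the comparison above. Synthetic data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, confusion_matrix, accuracy_score

X, y = make_classification(n_samples=1000, n_features=8,
                           weights=[0.75, 0.25], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                           stratify=y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
prob = model.predict_proba(X_te)[:, 1]
pred = (prob >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
print("sensitivity:", round(tp / (tp + fn), 3))
print("specificity:", round(tn / (tn + fp), 3))
print("accuracy   :", round(accuracy_score(y_te, pred), 3))
print("AUC        :", round(roc_auc_score(y_te, prob), 3))
```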

Analysis of Personal Communication Service Churners Using Decision Trees (의사결정나무를 이용한 개인휴대통신 해지자 분석)

  • 최종후;서두성
    • Proceedings of the Korean Operations and Management Science Society Conference / 1998.10a / pp.377-380 / 1998
  • In this paper, decision tree analysis, which has recently been actively adopted as a data mining tool, is used to analyze churners of a personal communication service (PCS). In addition, a logistic regression model is used to score the churn likelihood of subscribed customers.

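The scoring step mentioned in the abstract, turning a logistic model's churn probabilities into a customer score, might look roughly like the sketch below; the data and the 0-1000 score scale are arbitrary illustrative choices:

```python
# Sketch: score subscribers by churn probability from a logistic model.
# Data and the 0-1000 score scale are illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=6,
                           weights=[0.85, 0.15], random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

churn_prob = model.predict_proba(X)[:, 1]
score = np.round(1000 * churn_prob).astype(int)   # 0 (safe) .. 1000 (likely churn)

ranking = np.argsort(score)[::-1][:5]
print("five highest-risk subscribers:", list(zip(ranking, score[ranking])))
```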

A study on decision tree creation using intervening variable (매개 변수를 이용한 의사결정나무 생성에 관한 연구)

  • Cho, Kwang-Hyun;Park, Hee-Chang
    • Journal of the Korean Data and Information Science Society / v.22 no.4 / pp.671-678 / 2011
  • Data mining searches for interesting relationships among items in a given database. Data mining methods include decision trees, association rules, clustering, neural networks, and so on. The decision tree approach is most useful in classification problems and divides the search space into rectangular regions. Decision tree algorithms are used extensively for data mining in many domains such as retail target marketing and customer classification. When a decision tree model is created, it can become complicated depending on the model-building criteria and the number of input variables; in particular, model creation and analysis become difficult when there are many input variables. In this study, we investigate decision trees built with an intervening variable. We apply the approach to real data to suggest a method that removes unnecessary input variables from the created model and to examine its efficiency.
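
The paper's exact construction of the intervening variable is not reproduced here; purely as a generic illustration of the idea, the sketch below replaces several correlated raw inputs with a single derived variable and compares the size of the resulting trees on synthetic data:

```python
# Sketch of the general idea only: replace several correlated raw inputs
# with one derived (intervening) variable and compare tree complexity.
# This is NOT the paper's specific method; data are synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 1500
latent = rng.normal(size=n)                       # shared underlying factor
raw = np.column_stack([latent + rng.normal(0, 0.5, n) for _ in range(6)])
noise = rng.normal(size=(n, 4))                   # unrelated inputs
y = (latent + rng.normal(0, 0.5, n) > 0).astype(int)

X_raw = np.hstack([raw, noise])                                    # many raw inputs
X_mediated = np.hstack([raw.mean(axis=1, keepdims=True), noise])   # one derived input

for name, X in [("raw inputs", X_raw), ("intervening variable", X_mediated)]:
    tree = DecisionTreeClassifier(min_samples_leaf=20, random_state=0).fit(X, y)
    print(f"{name}: nodes = {tree.tree_.node_count}, "
          f"train accuracy = {tree.score(X, y):.3f}")
```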