• Title/Summary/Keyword: Classification and regression tree analysis

분류와 회귀나무분석에 관한 소고 (Note on classification and regression tree analysis)

  • 임용빈;오만숙
    • 품질경영학회지 / Vol.30 No.1 / pp.152-161 / 2002
  • The analysis of large data sets with hundreds of thousands of observations and thousands of independent variables is a formidable computational task. A less parametric method, capable of identifying important independent variables and their interactions, is a tree-structured approach to regression and classification. It gives a graphical and often illuminating way of looking at data in classification and regression problems. In this paper, we have reviewed and summarized the methodology used to construct a tree, multiple trees, and the sequential strategy for identifying active compounds in large chemical databases.
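As a rough illustration of the tree-structured approach to classification described in this abstract, the sketch below fits a shallow CART-style tree with scikit-learn and prints its rules. The data, feature names, and parameters are hypothetical placeholders, not material from the paper.

```python
# Minimal CART-style sketch (synthetic data, not from the paper above).
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))                               # 5 hypothetical independent variables
y = (X[:, 0] + 0.5 * X[:, 1] * X[:, 2] > 0).astype(int)     # target with a variable interaction

# A shallow tree keeps the structure readable, one of CART's main attractions.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=[f"x{i}" for i in range(5)]))
```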

A Comparative Study of Medical Data Classification Methods Based on Decision Tree and System Reconstruction Analysis

  • Tang, Tzung-I;Zheng, Gang;Huang, Yalou;Shu, Guangfu;Wang, Pengtao
    • Industrial Engineering and Management Systems / Vol.4 No.1 / pp.102-108 / 2005
  • This paper studies medical data classification methods, comparing decision tree and system reconstruction analysis as applied to heart disease medical data mining. The data we study are collected from patients with coronary heart disease and comprise 1,723 records of 71 attributes each. We use the system-reconstruction method to weight the data. We use decision tree algorithms such as induction of decision trees (ID3), C4.5, classification and regression tree (CART), Chi-square automatic interaction detector (CHAID), and exhaustive CHAID. We use the results to compare the correction rate, leaf number, and tree depth of the different decision-tree algorithms. According to the experiments, weighting the data can improve the correction rate on coronary heart disease data but has little effect on the tree depth and leaf number.
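The following sketch shows the kind of comparison reported above: correction rate (accuracy), leaf number, and tree depth for different tree configurations. scikit-learn only implements CART, so gini versus entropy splitting stands in for the ID3/C4.5/CART family here, and the data are synthetic rather than the coronary heart disease records.

```python
# Compare accuracy ("correction rate"), leaf count, and depth of tree variants.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1723, n_features=71, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for criterion in ("gini", "entropy"):
    clf = DecisionTreeClassifier(criterion=criterion, random_state=0).fit(X_tr, y_tr)
    print(criterion,
          "accuracy=%.3f" % clf.score(X_te, y_te),
          "leaves=%d" % clf.get_n_leaves(),
          "depth=%d" % clf.get_depth())
```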

CART의 예측 성능:은행 및 보험 회사 데이터 사용 (The Prediction Performance of the CART Using Bank and Insurance Company Data)

  • 박정선
    • 한국정보처리학회논문지 / Vol.3 No.6 / pp.1468-1472 / 1996
  • In this study, the predictive performance of CART (Classification and Regression Tree) is compared with that of discriminant analysis, a statistical technique. With the bank data, discriminant analysis showed better performance, whereas with the insurance company data CART performed better. This seemingly contradictory result is interpreted by analyzing the characteristics of the data. For both models, performance differed according to the parameters used: prior probabilities, the data, Type I/II error costs, and the validation method.
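A hedged sketch of this kind of CART versus discriminant analysis comparison: both models are fit to the same synthetic data and scored by cross-validated accuracy. The bank and insurance datasets of the paper are not available here, and the parameter choices are illustrative only.

```python
# CART vs. linear discriminant analysis on a synthetic stand-in dataset.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=1)

# Prior probabilities and misclassification costs (parameters discussed above)
# could be set via LinearDiscriminantAnalysis(priors=...) and
# DecisionTreeClassifier(class_weight=...); defaults are used here.
for name, model in [("CART", DecisionTreeClassifier(max_depth=4, random_state=0)),
                    ("LDA", LinearDiscriminantAnalysis())]:
    scores = cross_val_score(model, X, y, cv=5)   # validation method as a parameter
    print(name, "mean accuracy = %.3f" % scores.mean())
```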

A Comparative Study of Predictive Factors for Hypertension using Logistic Regression Analysis and Decision Tree Analysis

  • SoHyun Kim;SungHyoun Cho
    • Physical Therapy Rehabilitation Science / Vol.12 No.2 / pp.80-91 / 2023
  • Objective: The purpose of this study is to identify factors that affect the incidence of hypertension using logistic regression and decision tree analysis, and to build and compare predictive models. Design: Secondary data analysis study. Methods: We analyzed 9,859 subjects from the 2019 Korea Health Panel annual data provided by the Korea Institute for Health and Social Affairs and the National Health Insurance Service. Frequency analysis, chi-square test, binary logistic regression, and decision tree analysis were performed on the data. Results: In the logistic regression analysis, those who were 60 years of age or older (odds ratio, OR=68.801, p<0.001), those who were divorced/widowed/separated (OR=1.377, p<0.001), those with a middle school education or below (OR=1, reference), those who did not walk at all (OR=1, reference), those who were obese (OR=5.109, p<0.001), and those who had poor subjective health status (OR=2.163, p<0.001) were more likely to develop hypertension. In the decision tree, those over 60 years of age, overweight or obese, and with a middle school education or below had the highest probability of developing hypertension, at 83.3%. Logistic regression analysis showed a specificity of 85.3% and a sensitivity of 47.9%, while decision tree analysis showed a specificity of 81.9% and a sensitivity of 52.9%. In classification accuracy, logistic regression and decision tree analysis showed predictions of 73.6% and 72.6%, respectively. Conclusions: Both logistic regression and decision tree analysis were adequate to explain the predictive model. Both analysis methods can serve as useful data for constructing a predictive model for hypertension.
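The specificity, sensitivity, and classification accuracy figures quoted above can all be read off a confusion matrix. The sketch below computes them for a logistic regression and a decision tree on synthetic data, not the Korea Health Panel records.

```python
# Specificity, sensitivity, and accuracy from the confusion matrix of each model.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, weights=[0.7, 0.3], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, model in [("logistic", LogisticRegression(max_iter=1000)),
                    ("tree", DecisionTreeClassifier(max_depth=4, random_state=0))]:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
    print(name,
          "specificity=%.3f" % (tn / (tn + fp)),
          "sensitivity=%.3f" % (tp / (tp + fn)),
          "accuracy=%.3f" % ((tp + tn) / (tp + tn + fp + fn)))
```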

A Comparative Study of Predictive Factors for Passing the National Physical Therapy Examination using Logistic Regression Analysis and Decision Tree Analysis

  • Kim, So Hyun;Cho, Sung Hyoun
    • Physical Therapy Rehabilitation Science / Vol.11 No.3 / pp.285-295 / 2022
  • Objective: The purpose of this study is to use logistic regression and decision tree analysis to identify the factors that affect success or failure in the national physical therapy examination, and to build and compare predictive models. Design: Secondary data analysis study. Methods: We analyzed 76,727 subjects from the physical therapy national examination data provided by the Korea Health Personnel Licensing Examination Institute. The target variable was pass or fail, and the input variables were gender, age, graduation status, and examination area. Frequency analysis, chi-square test, binary logistic regression, and decision tree analysis were performed on the data. Results: In the logistic regression analysis, subjects in their 20s (odds ratio, OR=1, reference), those expected to graduate (OR=13.616, p<0.001), and those from the Jeju-do examination area (OR=3.135, p<0.001) had a high probability of passing. In the decision tree, the predictive factors for passing had the greatest influence in the order of graduation status (χ2=12366.843, p<0.001) and examination area (χ2=312.446, p<0.001). Logistic regression analysis showed a specificity of 39.6% and a sensitivity of 95.5%, while decision tree analysis showed a specificity of 45.8% and a sensitivity of 94.7%. In classification accuracy, logistic regression and decision tree analysis showed predictions of 87.6% and 88.0%, respectively. Conclusions: Both logistic regression and decision tree analysis were adequate to explain the predictive model. Additionally, whether actual test takers would pass the national physical therapy examination could be determined by applying the constructed prediction model and prediction rate.
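The odds ratios (OR) reported above are typically obtained by exponentiating the coefficients of a fitted logistic regression, as in this minimal sketch. The column names and data are hypothetical placeholders, not the actual examination variables.

```python
# OR = exp(beta) for each input variable of a fitted logistic regression.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = pd.DataFrame({"expected_to_graduate": rng.integers(0, 2, 1000),
                  "age_20s": rng.integers(0, 2, 1000)})   # placeholder indicators
y = rng.integers(0, 2, 1000)                              # pass/fail placeholder

model = LogisticRegression().fit(X, y)
odds_ratios = np.exp(model.coef_[0])                      # odds ratio per variable
print(dict(zip(X.columns, odds_ratios.round(3))))
```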

텍스트 분류 기법의 발전 (Enhancement of Text Classification Method)

  • 신광성;신성윤
    • 한국정보통신학회:학술대회논문집 / 한국정보통신학회 2019 Spring Conference / pp.155-156 / 2019
  • Existing machine learning based sentiment analysis methods, such as Classification and Regression Tree (CART), Support Vector Machine (SVM), and k-nearest neighbor classification (kNN), have shown poor accuracy. In this paper, an improved kNN classification method is proposed. Improved accuracy is achieved through the improved method together with data normalization. The three classification algorithms and the improved algorithm were then compared on experimental data.
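The improved kNN of the paper is not reproduced here, but the sketch below illustrates the data normalization point: the same kNN classifier is evaluated with and without feature scaling on synthetic data with one badly scaled feature.

```python
# kNN with and without normalization (synthetic data, illustrative only).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=800, n_features=20, random_state=2)
X[:, 0] *= 100                                   # one feature on a much larger scale
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

plain = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
scaled = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)).fit(X_tr, y_tr)
print("kNN without normalization:", plain.score(X_te, y_te))
print("kNN with normalization:   ", scaled.score(X_te, y_te))
```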

Classification and Regression Tree Analysis for Molecular Descriptor Selection and Binding Affinities Prediction of Imidazobenzodiazepines in Quantitative Structure-Activity Relationship Studies

  • Atabati, Morteza;Zarei, Kobra;Abdinasab, Esmaeil
    • Bulletin of the Korean Chemical Society / Vol.30 No.11 / pp.2717-2722 / 2009
  • The use of the classification and regression tree (CART) methodology was studied in a quantitative structure-activity relationship (QSAR) context on a data set consisting of the binding affinities of 39 imidazobenzodiazepines for the α1 benzodiazepine receptor. The 3-D structures of these compounds were optimized using HyperChem software with the semiempirical AM1 optimization method. After optimization, a set of 1481 zero- to three-dimensional descriptors was calculated for each molecule in the data set. The response (dependent variable) in the tree model consisted of the binding affinities of the drugs. Three descriptors (two topological and one 3D-MoRSE descriptor) were applied in the final tree structure to describe the binding affinities. The mean relative error percent for the data set is 3.20%, compared with a previous model with a mean relative error percent of 6.63%. To evaluate the predictive power of CART, a cross-validation method was also performed.
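A minimal sketch of a CART regression with cross-validation and the mean relative error percent used above as the figure of merit. The descriptor values and binding affinities are random placeholders, not the imidazobenzodiazepine data set.

```python
# Regression tree with 5-fold cross-validation and mean relative error (%).
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(39, 3))                                     # three selected descriptors (placeholder)
y = 10 + X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.2, size=39)  # placeholder binding affinities

tree = DecisionTreeRegressor(max_depth=3, random_state=0)
y_cv = cross_val_predict(tree, X, y, cv=5)        # cross-validated predictions
mre = 100 * np.mean(np.abs((y - y_cv) / y))       # mean relative error percent
print("mean relative error = %.2f%%" % mre)
```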

Comparison of machine learning algorithms for regression and classification of ultimate load-carrying capacity of steel frames

  • Kim, Seung-Eock;Vu, Quang-Viet;Papazafeiropoulos, George;Kong, Zhengyi;Truong, Viet-Hung
    • Steel and Composite Structures / Vol.37 No.2 / pp.193-209 / 2020
  • In this paper, the efficiency of five Machine Learning (ML) methods, consisting of Deep Learning (DL), Support Vector Machine (SVM), Random Forest (RF), Decision Tree (DT), and Gradient Tree Boosting (GTB), for regression and classification of the Ultimate Load Factor (ULF) of nonlinear inelastic steel frames is compared. For this purpose, a two-story, a six-story, and a twenty-story space frame are considered. An advanced nonlinear inelastic analysis is carried out for the steel frames to generate datasets for training the considered ML methods. In each dataset, the input variables are the geometric features of W-sections and the output variable is the ULF of the frame. The comparison between the five ML methods is made in terms of the mean squared error (MSE) for the regression models and the accuracy for the classification models, respectively. Moreover, the ULF distribution curve is calculated for each frame and the strength failure probability is estimated. It is found that the GTB method has the best efficiency in both regression and classification of the ULF, regardless of the number of training samples and the space frames considered.
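The sketch below mirrors the regression part of this comparison for two of the five methods (decision tree and gradient tree boosting), scored by mean squared error on a synthetic stand-in for the W-section geometry / ultimate load factor data.

```python
# MSE comparison of a decision tree and gradient tree boosting regressor.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=1000, n_features=8, noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, model in [("DT", DecisionTreeRegressor(random_state=0)),
                    ("GTB", GradientBoostingRegressor(random_state=0))]:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(name, "MSE = %.2f" % mean_squared_error(y_te, pred))
```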

A Combinatorial Optimization for Influential Factor Analysis: a Case Study of Political Preference in Korea

  • Yun, Sung Bum;Yoon, Sanghyun;Heo, Joon
    • 한국측량학회지 / Vol.35 No.5 / pp.415-422 / 2017
  • Finding influential factors from a given clustering result is a typical data science problem. A Genetic Algorithm based method is proposed to derive influential factors, and its performance is compared with two conventional methods, Classification and Regression Tree (CART) and Chi-Squared Automatic Interaction Detection (CHAID), using the Dunn's index measure. To extract the influential factors of preference towards political parties in South Korea, the vote result of the 18th presidential election and 'Demographic', 'Health and Welfare', 'Economic' and 'Business' related data were used. Based on the analysis, reverse engineering was implemented. Implementing this reverse engineering based approach for influential factor analysis can provide a new set of influential variables and offer new insight for the data mining field.
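The GA-based factor search itself is not reproduced here, but the Dunn's index used above to score a partition can be computed as the ratio of the smallest between-cluster distance to the largest within-cluster diameter (higher is better). A minimal sketch on toy data:

```python
# Dunn index: min inter-cluster distance / max intra-cluster diameter.
import numpy as np
from scipy.spatial.distance import cdist, pdist

def dunn_index(X, labels):
    clusters = [X[labels == k] for k in np.unique(labels)]
    max_diam = max(pdist(c).max() for c in clusters if len(c) > 1)   # largest within-cluster diameter
    min_sep = min(cdist(a, b).min()                                  # smallest between-cluster distance
                  for i, a in enumerate(clusters)
                  for b in clusters[i + 1:])
    return min_sep / max_diam

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (30, 2)), rng.normal(5, 0.5, (30, 2))])
labels = np.array([0] * 30 + [1] * 30)
print("Dunn index = %.3f" % dunn_index(X, labels))
```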

로지스틱 회귀분석과 의사결정나무 분석을 이용한 일 대도시 주민의 우울 예측요인 비교 연구 (Comparative Analysis of Predictors of Depression for Residents in a Metropolitan City using Logistic Regression and Decision Making Tree)

  • 김수진;김보영
    • 한국콘텐츠학회논문지 / Vol.13 No.12 / pp.829-839 / 2013
  • This study is a descriptive survey conducted to predict and compare the factors affecting depression among residents of a metropolitan city using logistic regression analysis and decision tree analysis. The subjects were 462 residents of a metropolitan city aged 20 to under 65. Data were collected from October 7 to October 21, 2011, and analyzed with the SPSS 18.0 program using frequency, percentage, mean and standard deviation, χ2-test, t-test, logistic regression analysis, ROC curve, and decision tree analysis. As a result, the predictors of depression identified in common by logistic regression and decision tree analysis were social maladjustment, subjective physical symptoms, and family support. Logistic regression showed a specificity of 93.8% and a sensitivity of 42.5%, and the goodness of fit of the model verified with the ROC curve was AUC=.84, so the model can be considered adequate (p<.001). In the classification accuracy of the decision tree analysis for predicting depression, the specificity was 98.3% and the sensitivity 20.8%; the overall classification accuracy was 82.0% for logistic regression and 80.5% for decision tree analysis. These results suggest that the logistic regression method, which showed higher sensitivity and classification accuracy, can serve as more useful material for constructing a depression prediction model for local residents.
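The ROC-curve check mentioned above is typically done by computing the AUC from predicted probabilities on held-out data, as in this sketch. Synthetic data stand in for the depression survey variables.

```python
# AUC of a logistic regression model from held-out predicted probabilities.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=462, n_features=6, random_state=3)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

proba = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
print("AUC = %.2f" % roc_auc_score(y_te, proba))
```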