• Title/Abstract/Keywords: decision tree induction

38 search results

An Application of Decision Tree Method for Fault Diagnosis of Induction Motors

  • Tran, Van Tung; Yang, Bo-Suk; Oh, Myung-Suck
    • 한국해양공학회:학술대회논문집 / 한국해양공학회 2006년 창립20주년기념 정기학술대회 및 국제워크샵 / pp.54-59 / 2006
  • The decision tree is one of the most effective and widely used methods for building classification models. Researchers from various disciplines such as statistics, machine learning, pattern recognition, and data mining have considered the decision tree method an effective solution to problems in their fields. In this paper, an application of the decision tree method to classifying the faults of induction motors is proposed. The original experimental data are processed by feature calculation to extract useful information as attributes. These data are then assigned class labels, based on our experience, before becoming inputs to the decision tree; a total of 9 classes are defined. An implementation of the decision tree written in Matlab is used for these data.

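
As a rough illustration of the workflow this abstract describes (feature calculation, class assignment, then tree induction), the following Python sketch uses scikit-learn's DecisionTreeClassifier as a stand-in for the Matlab implementation mentioned above; the feature count, the synthetic data, and the split settings are illustrative assumptions, not the paper's setup.

```python
# Minimal sketch of a fault-classification workflow with a decision tree.
# scikit-learn stands in for the Matlab implementation; data are synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical feature table: each row is one measurement, each column a
# statistical feature computed from raw motor signals (e.g. RMS, kurtosis),
# and y holds one of the 9 fault classes mentioned in the abstract.
X = rng.normal(size=(270, 6))
y = rng.integers(0, 9, size=270)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

clf = DecisionTreeClassifier(criterion="entropy", max_depth=5, random_state=0)
clf.fit(X_train, y_train)

print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```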

Applying Decision Tree Algorithms for Analyzing HS-VOSTS Questionnaire Results

  • Kang, Dae-Ki
    • 공학교육연구 / Vol.15 No.4 / pp.41-47 / 2012
  • Data mining and knowledge discovery techniques have been shown to be effective at finding hidden underlying rules inside large databases in an automated fashion. At the same time, analyzing, assessing, and applying student survey data are very important in science and engineering education for various reasons, such as quality improvement, the engineering design process, and innovative education. Among such surveys, analyzing students' views on science-technology-society can be helpful to engineering education: although most research in the philosophy of science has shown that science is one of the most difficult concepts to define precisely, it is still important to keep an eye on science, pseudo-science, and scientific misconduct. In this paper, we report the experimental results of applying decision tree induction algorithms to analyzing the questionnaire results of high school students' views on science-technology-society (HS-VOSTS). Empirical results with various settings of decision tree induction on HS-VOSTS responses from students at one South Korean university indicate that decision tree induction algorithms can be successfully and effectively applied to automated knowledge discovery from student survey data.
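
A minimal Python sketch of the kind of knowledge discovery the abstract reports: categorical questionnaire answers are encoded, a tree is induced, and the induced tree is printed as if-then rules. The question names, answer codes, and target grouping are hypothetical stand-ins, not the actual HS-VOSTS items.

```python
# Induce a decision tree from categorical survey answers and print its rules.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical survey responses; columns stand in for HS-VOSTS items.
survey = pd.DataFrame({
    "q1_science_definition": ["A", "B", "A", "C", "B", "A"],
    "q2_science_tech_relation": ["B", "B", "A", "A", "C", "C"],
    "view_group": ["realist", "instrumentalist", "realist",
                   "realist", "instrumentalist", "instrumentalist"],
})

# One-hot encode the categorical answers.
X = pd.get_dummies(survey[["q1_science_definition", "q2_science_tech_relation"]])
y = survey["view_group"]

tree = DecisionTreeClassifier(criterion="entropy", random_state=0).fit(X, y)

# export_text turns the induced tree into human-readable if-then rules.
print(export_text(tree, feature_names=list(X.columns)))
```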

결정목을 이용한 유도전동기 결함진단 (Fault Diagnosis of Induction Motors using Decision Trees)

  • Tran Van Tung; Yang Bo-Suk; Oh Myung-Suck
    • 한국소음진동공학회:학술대회논문집 / 한국소음진동공학회 2006년도 추계학술대회논문집 / pp.407-410 / 2006
  • The decision tree is one of the most effective and widely used methods for building classification models. Researchers from various disciplines such as statistics, machine learning, pattern recognition, and data mining have considered the decision tree method an effective solution to problems in their fields. In this paper, an application of the decision tree method to classifying the faults of induction motors is proposed. The original experimental data are processed by feature calculation to extract useful information as attributes. These data are then assigned class labels, based on our experience, before becoming inputs to the decision tree; a total of 9 classes are defined. An implementation of the decision tree written in Matlab is used for four data sets, with good performance results.


순차적으로 선택된 특성과 유전 프로그래밍을 이용한 결정나무 (A Decision Tree Induction using Genetic Programming with Sequentially Selected Features)

  • 김효중; 박종선
    • 경영과학 / Vol.23 No.1 / pp.63-74 / 2006
  • The decision tree induction algorithm is one of the most widely used methods for classification problems. However, when a tree algorithm uses a top-down search, it can be trapped in a local minimum with no reasonable means of escaping. Further, if irrelevant or redundant features are included in the data set, tree algorithms produce trees that are less accurate than those built from a data set containing only relevant features. We propose a hybrid algorithm that generates decision trees using genetic programming with sequentially selected features. The Correlation-based Feature Selection (CFS) method is adopted to find relevant features, which are fed to genetic programming sequentially to find optimal trees at each iteration. The proposed algorithm produces simpler and more understandable decision trees than other decision trees, and it is also effective in producing similar or better trees with a relatively smaller set of features in terms of cross-validation accuracy.
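
The genetic programming component is beyond a short sketch, but the flavor of "sequentially selected features" can be illustrated in Python: rank features by a simple correlation measure (a crude proxy for CFS merit, which additionally penalizes feature-feature correlation), grow the feature subset one feature at a time, and keep the tree with the best cross-validated accuracy. This is a simplified stand-in, not the authors' CFS plus GP algorithm.

```python
# Feed correlation-ranked features to a tree learner one at a time and keep
# the subset with the best cross-validated accuracy (simplified stand-in).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# Rank features by absolute correlation with the class label.
corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
order = np.argsort(corr)[::-1]

best_score, best_subset = 0.0, []
for k in range(1, len(order) + 1):
    subset = order[:k]
    score = cross_val_score(DecisionTreeClassifier(random_state=0),
                            X[:, subset], y, cv=5).mean()
    if score > best_score:
        best_score, best_subset = score, list(subset)

print(f"best subset size: {len(best_subset)}, CV accuracy: {best_score:.3f}")
```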

Fuzzy Classification Rule Learning by Decision Tree Induction

  • Lee, Keon-Myung; Kim, Hak-Joon
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol.3 No.1 / pp.44-51 / 2003
  • Knowledge acquisition is a bottleneck in the implementation of knowledge-based systems. Decision tree induction is a useful machine learning approach for extracting classification knowledge from a set of training examples. Many real-world data contain fuzziness due to observation error, uncertainty, subjective judgement, and so on. To cope with this problem, there has been some work on fuzzy classification rule learning. This paper surveys the kinds of fuzzy classification rules. In addition, it presents a fuzzy classification rule learning method based on decision tree induction and shows some experimental results for the method.
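
For readers unfamiliar with the rule form being surveyed, here is a toy Python illustration of fuzzy classification rules: triangular membership functions on each attribute, rule firing strength computed with the min operator, and the class of the strongest rule chosen. The attributes, fuzzy sets, and rules are made up for illustration and are not taken from the paper.

```python
# Toy fuzzy classification rules with triangular membership functions.
def tri(x, a, b, c):
    """Triangular membership: support [a, c], peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Fuzzy sets for two illustrative attributes.
low_temp = lambda t: tri(t, 0, 20, 40)
high_temp = lambda t: tri(t, 30, 60, 90)
low_vib = lambda v: tri(v, 0.0, 0.2, 0.5)
high_vib = lambda v: tri(v, 0.3, 0.7, 1.0)

# IF temp is high AND vibration is high THEN "faulty", etc.
rules = [
    (lambda t, v: min(high_temp(t), high_vib(v)), "faulty"),
    (lambda t, v: min(low_temp(t), low_vib(v)), "normal"),
]

def classify(temp, vib):
    # Fire every rule and return (strength, label) of the strongest one.
    return max((strength(temp, vib), label) for strength, label in rules)

print(classify(70, 0.8))   # strong support for "faulty"
print(classify(15, 0.1))   # strong support for "normal"
```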

특징공간을 사선 분할하는 퍼지 결정트리 유도 (Fuzzy Decision Tree Induction for Obliquely Partitioning a Feature Space)

  • 이우향; 이건명
    • 한국정보과학회논문지:소프트웨어및응용 / Vol.29 No.3 / pp.156-166 / 2002
  • Decision tree induction is one of the useful machine learning methods for extracting classification rules from examples described by feature values. Decision trees are broadly divided into univariate and multivariate decision trees according to how they partition the feature space. Data obtained in real-world settings often contain errors in the feature values themselves due to observation error, uncertainty, subjective judgement, and so on. To build decision trees that are robust to such errors, methods that incorporate fuzzy techniques into decision tree induction have been studied. To date, most research on fuzzy decision trees has applied fuzzy techniques to univariate decision trees, and applications to multivariate decision trees are hard to find. This paper proposes a method for building a fuzzy decision tree, called a fuzzy oblique decision tree, by applying fuzzy techniques to multivariate decision trees. Experimental results are also presented to show the characteristics of the proposed tree-construction method.
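
As a minimal Python illustration of what an oblique (multivariate) split looks like once it is fuzzified, the sketch below replaces the crisp test w·x + b ≤ 0 with a soft membership that decays with distance from the hyperplane. The weights and the softness parameter are invented for illustration; this is not the paper's exact membership function.

```python
# Soft membership for an oblique (multivariate) split w.x + b <= 0.
import numpy as np

def fuzzy_oblique_split(x, w, b, softness=1.0):
    """Return (membership of left child, membership of right child)."""
    signed_dist = (np.dot(w, x) + b) / np.linalg.norm(w)
    left = 1.0 / (1.0 + np.exp(signed_dist / softness))  # sigmoid membership
    return left, 1.0 - left

w, b = np.array([0.8, -0.6]), 0.1
print(fuzzy_oblique_split(np.array([0.2, 0.9]), w, b))   # near the hyperplane
print(fuzzy_oblique_split(np.array([3.0, -2.0]), w, b))  # far on one side
```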

A Comparative Study of Medical Data Classification Methods Based on Decision Tree and System Reconstruction Analysis

  • Tang, Tzung-I; Zheng, Gang; Huang, Yalou; Shu, Guangfu; Wang, Pengtao
    • Industrial Engineering and Management Systems / Vol.4 No.1 / pp.102-108 / 2005
  • This paper studies medical data classification methods, comparing decision tree and system reconstruction analysis as applied to heart disease medical data mining. The data we study were collected from patients with coronary heart disease; they comprise 1,723 records of 71 attributes each. We use the system-reconstruction method to weight the data. We apply decision tree algorithms such as ID3, C4.5, classification and regression tree (CART), chi-square automatic interaction detector (CHAID), and exhaustive CHAID, and use the results to compare the correct classification rate, leaf number, and tree depth of the different decision-tree algorithms. According to the experiments, weighting the data can improve the correct classification rate on the coronary heart disease data but has little effect on tree depth and leaf number.
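
A Python sketch of the kind of comparison the abstract reports: fit trees with different split criteria and compare accuracy, leaf count, and depth. scikit-learn's CART-style trees with the "gini" and "entropy" criteria stand in for the ID3/C4.5/CART/CHAID family, and a bundled dataset replaces the (non-public) coronary heart disease records.

```python
# Compare split criteria by accuracy, number of leaves, and tree depth.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for criterion in ("gini", "entropy"):
    clf = DecisionTreeClassifier(criterion=criterion, random_state=0)
    clf.fit(X_tr, y_tr)
    print(f"{criterion:8s} accuracy={clf.score(X_te, y_te):.3f} "
          f"leaves={clf.get_n_leaves()} depth={clf.get_depth()}")
```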

웹 마이닝과 의사결정나무 기법을 활용한 개인별 상품추천 방법 (A personalized recommendation methodology using web usage mining and decision tree induction)

  • 조윤호; 김재경
    • 한국지능정보시스템학회:학술대회논문집 / 한국지능정보시스템학회 2002년도 춘계학술대회 논문집 / pp.342-351 / 2002
  • Personalized product recommendation is an enabling mechanism for overcoming the information overload that occurs when shopping in an Internet marketplace. Collaborative filtering is known to be one of the most successful recommendation methods, but its application to e-commerce has exposed well-known limitations such as sparsity and scalability, which can lead to poor recommendations. This paper suggests a personalized recommendation methodology that achieves greater effectiveness and quality of recommendations when applied to an Internet shopping mall. The suggested methodology is based on a variety of data mining techniques, such as web usage mining, decision tree induction, association rule mining, and the product taxonomy. To evaluate the methodology, we implement a recommender system using intelligent agent and data warehousing technologies.


사상체질 임상자료 기반 의사결정나무 생성 알고리즘 비교 (Comparison among Algorithms for Decision Tree based on Sasang Constitutional Clinical Data)

  • 진희정; 이수경; 이시우
    • 한국한의학연구원논문집 / Vol.17 No.2 / pp.121-127 / 2011
  • Objectives: In the clinical field, it is important to understand the factors that affect a certain disease or symptom. For this, many researchers apply data mining methods to the clinical data they have collected, and decision tree induction is one of the efficient methods for data mining. Many researchers have studied how to find the best split criterion for decision trees; however, various split criteria coexist. Methods: In this paper, we applied several split criteria (information gain, Gini index, chi-square) to Sasang constitutional clinical information and compared the resulting decision trees in order to find the optimal split criterion. Results & Conclusion: By analyzing the decision trees produced with the different split measures, we found that BMI and body measurement factors are important factors for Sasang constitution, and the decision tree using information gain had the highest accuracy. However, the decision tree that produces the highest accuracy changes depending on the given data, so researchers should try to find the proper split criterion for their data by understanding the attributes of the given data.
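
The two analyses the abstract describes, comparing split criteria and identifying influential attributes, can be sketched in Python as below. The clinical data are not available here, so the feature names, the number of constitution classes, and the synthetic data are hypothetical stand-ins.

```python
# Compare split criteria by cross-validated accuracy, then inspect which
# attributes the tree relies on via feature_importances_.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
feature_names = ["BMI", "chest_circumference", "waist_circumference", "height"]
X = rng.normal(size=(200, len(feature_names)))   # synthetic stand-in data
y = rng.integers(0, 3, size=200)                 # hypothetical constitution labels

for criterion in ("gini", "entropy"):            # 'entropy' ~ information gain
    acc = cross_val_score(DecisionTreeClassifier(criterion=criterion,
                                                 random_state=0), X, y, cv=5)
    print(f"{criterion}: mean CV accuracy = {acc.mean():.3f}")

clf = DecisionTreeClassifier(criterion="entropy", random_state=0).fit(X, y)
for name, importance in zip(feature_names, clf.feature_importances_):
    print(f"{name}: importance {importance:.3f}")
```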

Optimization of Decision Tree for Classification Using a Particle Swarm

  • Cho, Yun-Ju; Lee, Hye-Seon; Jun, Chi-Hyuck
    • Industrial Engineering and Management Systems / Vol.10 No.4 / pp.272-278 / 2011
  • The decision tree as a classification tool is being used successfully in many areas, such as medical diagnosis, customer churn prediction, and signal detection. The main advantage of decision tree classifiers is their capability to break a complex structure down into a collection of simpler structures, thus providing a solution that is easy to interpret. Since decision tree induction is a top-down algorithm using a divide-and-conquer process, there is a risk of reaching a locally optimal solution. This paper proposes a procedure for optimally determining the thresholds of the chosen variables for a decision tree using adaptive particle swarm optimization (APSO). The proposed algorithm consists of two phases: first, we construct a decision tree and choose the relevant variables; second, we find the optimal thresholds simultaneously using APSO for those selected variables. To validate the proposed algorithm, several artificial and real datasets are used. We compare our results with the original CART results and show that the proposed algorithm is promising for improving prediction accuracy.
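
A toy Python version of the core idea, searching a split threshold with a particle swarm instead of keeping the greedy value, is sketched below. It tunes a single threshold of a one-split stump on synthetic one-dimensional data with a basic (non-adaptive) PSO; the adaptive PSO and the multi-threshold, multi-variable case of the paper are not reproduced.

```python
# Basic PSO search for the split threshold of a one-feature decision stump.
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(2.5, 1.0, 100)])
y = np.array([0] * 100 + [1] * 100)

def accuracy(threshold):
    # Stump rule: predict class 1 when x exceeds the threshold.
    return np.mean((x > threshold).astype(int) == y)

n_particles, iters = 20, 50
pos = rng.uniform(x.min(), x.max(), n_particles)   # candidate thresholds
vel = np.zeros(n_particles)
pbest = pos.copy()
pbest_val = np.array([accuracy(p) for p in pos])
gbest = pbest[pbest_val.argmax()]

for _ in range(iters):
    r1, r2 = rng.random(n_particles), rng.random(n_particles)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([accuracy(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()]

print(f"best threshold {gbest:.3f}, training accuracy {accuracy(gbest):.3f}")
```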