• Title/Summary/Keyword: decision tree

Search Results: 1,658

Fuzzy Classification Rule Learning by Decision Tree Induction

  • Lee, Keon-Myung;Kim, Hak-Joon
    • International Journal of Fuzzy Logic and Intelligent Systems / v.3 no.1 / pp.44-51 / 2003
  • Knowledge acquisition is a bottleneck in knowledge-based system implementation. Decision tree induction is a useful machine learning approach for extracting classification knowledge from a set of training examples. Much real-world data contains fuzziness due to observation error, uncertainty, subjective judgement, and so on. To cope with this problem, there has been some work on fuzzy classification rule learning. This paper surveys the kinds of fuzzy classification rules. In addition, it presents a fuzzy classification rule learning method based on decision tree induction and presents experimental results for the method.
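
The rule-extraction step the paper builds on can be illustrated with a crisp (non-fuzzy) decision tree. The sketch below uses scikit-learn on the Iris data as a stand-in; it is not the paper's fuzzy method or data.

```python
# Minimal sketch of crisp rule extraction from an induced decision tree
# (illustrative only; the paper fuzzifies rules of this form).
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Every root-to-leaf path printed below corresponds to one IF-THEN classification rule.
print(export_text(tree, feature_names=["sepal_len", "sepal_wid",
                                       "petal_len", "petal_wid"]))
```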

A Study on the Design of Binary Decision Tree using FCM algorithm (FCM 알고리즘을 이용한 이진 결정 트리의 구성에 관한 연구)

  • 정순원;박중조;김경민;박귀태
    • Journal of the Korean Institute of Telematics and Electronics B / v.32B no.11 / pp.1536-1544 / 1995
  • We propose a design scheme for a binary decision tree and apply it to the tire tread pattern recognition problem. In this scheme, a binary decision tree is constructed using the fuzzy C-means (FCM) algorithm. All available features are used during clustering. At each node, the best feature or feature subset among the available features is selected based on a proposed similarity measure. The resulting decision tree can be used to classify unknown patterns. The design procedure, including feature extraction, is described, and experimental results on the tire tread pattern recognition problem are given to show the usefulness of the scheme.

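The clustering step at each node is fuzzy C-means; a compact NumPy sketch of the standard FCM iteration is given below. This is illustrative only; the paper's similarity measure and tree construction are not reproduced.

```python
# Standard fuzzy C-means iteration (illustrative; not the paper's full tree-building scheme).
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, eps=1e-5, seed=0):
    """Return cluster centers and the fuzzy membership matrix U."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                  # memberships of each point sum to 1
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]   # weighted cluster centers
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = dist ** (-2.0 / (m - 1))               # closer centers get higher membership
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < eps:
            return centers, U_new
        U = U_new
    return centers, U

X = np.vstack([np.random.default_rng(1).normal(0, 1, (50, 2)),
               np.random.default_rng(2).normal(4, 1, (50, 2))])
centers, U = fuzzy_c_means(X, c=2)
print(centers)                                         # roughly (0, 0) and (4, 4)
```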

Customer Relationship Management System using Decision Tree (Decision Tree를 이용한 고객 취향 관리 시스템)

  • Choi, Jong-Hoon;Lee, Eun;Kong, Eun-Bae
    • Proceedings of the Korean Information Science Society Conference / 2000.10b / pp.60-62 / 2000
  • With the spread of the Internet, many people now use it, and Internet services have proliferated accordingly; services using the Internet for commercial purposes are also increasing. However, many Internet services provide only uniform, one-size-fits-all service to their customers. Each customer needs differentiated service according to their tastes and interests. To provide one-to-one differentiated service, we first identify each customer and observe their behavior on the Internet to determine their tastes and interests. For customer management, customers can also be grouped as needed, and customer information can be gathered through direct contact. To store and analyze the collected customer information efficiently, a decision tree is used for learning. Because of the nature of customer behavior, an incremental learning algorithm is used, and the decision tree incorporates customer preferences. By providing one-to-one service based on the learned results, customers gain convenience, and familiarity with the service and customer interest can be fostered.

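A rough sketch of the idea follows. It is hedged: the abstract does not specify the incremental decision-tree algorithm, so incremental updating is approximated here by refitting a scikit-learn tree on the accumulated behaviour log, and the features and labels are hypothetical.

```python
# Rough sketch only: preference learning from an accumulating behaviour log.
from sklearn.tree import DecisionTreeClassifier

log_X, log_y = [], []                    # accumulated behaviour records and interest labels
model = DecisionTreeClassifier(max_depth=4)

def observe(features, interest):
    """Record one customer action (hypothetical feature vector) and refresh the model."""
    log_X.append(features)
    log_y.append(interest)
    if len(set(log_y)) > 1:              # need at least two classes before fitting
        model.fit(log_X, log_y)

observe([1, 0, 3], "sports")             # e.g. page-category counts per session
observe([0, 2, 1], "fashion")
observe([2, 0, 4], "sports")
print(model.predict([[1, 0, 2]]))        # predicted interest for a new session
```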

Rule Selection Method in Decision Tree Models (의사결정나무 모델에서의 중요 룰 선택기법)

  • Son, Jieun;Kim, Seoung Bum
    • Journal of Korean Institute of Industrial Engineers / v.40 no.4 / pp.375-381 / 2014
  • Data mining is the process of discovering useful patterns or information from large amounts of data. The decision tree is one of the data mining algorithms that can be used for both classification and prediction, and it has been widely applied because of its flexibility and interpretability. Decision trees for classification generally generate a number of rules, each of which belongs to one of the predefined categories, and several rules may belong to the same category. In this case, it is necessary to determine the significance of each rule so that users can be given the priority of each rule. The purpose of this paper is to propose a rule selection method for classification tree models that accommodates the number of observations, accuracy, and effectiveness of each rule. Our experiments demonstrate that the proposed method produces better performance than other existing rule selection methods.
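
The kind of rule scoring described can be sketched as a weighted combination of coverage (number of observations) and accuracy. The weights and rules below are hypothetical, not the paper's exact measure.

```python
# Illustrative rule ranking by coverage and accuracy (not the paper's exact scoring).
def score_rule(n_covered, n_correct, n_total, w_support=0.5, w_accuracy=0.5):
    support = n_covered / n_total            # share of the data the rule covers
    accuracy = n_correct / n_covered         # share of covered cases it classifies correctly
    return w_support * support + w_accuracy * accuracy

rules = {                                    # hypothetical (covered, correct) counts per rule
    "petal_len <= 2.45 -> setosa": (50, 50),
    "petal_len > 2.45 & petal_wid <= 1.75 -> versicolor": (54, 49),
    "petal_len > 2.45 & petal_wid > 1.75 -> virginica": (46, 45),
}
ranked = sorted(rules, key=lambda r: score_rule(*rules[r], n_total=150), reverse=True)
for r in ranked:
    print(r)
```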

A Case Study on segmentation of Department Store using Decision Tree Analysis (의사결정나무 기법을 활용한 백화점의 고객세분화 사례연구)

  • Chae, Kyung-Hee;Kim, Sang-Cheol
    • Journal of Distribution Science / v.8 no.1 / pp.13-19 / 2010
  • Segmentation, targeting, and positioning are marketing tools a company uses to gain competitive advantage in the market. For accurate segmentation, various statistical models or data mining techniques are used. In particular, data mining techniques were introduced in the early 1980s and have solved several marketing problems effectively. In this paper, we review data mining techniques for segmentation and analyze the customer transaction data of a department store using decision tree analysis, one of those techniques. We then discuss the effects and advantages of segmentation using decision trees.

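One common way to turn a fitted tree into customer segments is to treat each leaf as a segment. The hedged sketch below uses synthetic stand-in features (visits, spend), since the department-store data and variables from the case study are not public.

```python
# Hypothetical sketch of tree-based segmentation: leaves of a tree trained on a
# response variable (e.g. repurchase) become customer segments.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = np.column_stack([rng.integers(1, 60, 500),        # visits per year (assumed feature)
                     rng.uniform(10, 500, 500)])      # average spend (assumed feature)
y = (X[:, 0] * X[:, 1] + rng.normal(0, 500, 500) > 3000).astype(int)  # synthetic response

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
segments = tree.apply(X)                              # leaf index serves as the segment id
for leaf in np.unique(segments):
    mask = segments == leaf
    print(f"segment {leaf}: {mask.sum()} customers, response rate {y[mask].mean():.2f}")
```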

Color Information Based Psychology Analysis Using Decision Tree (의사 결정 트리를 이용한 색채 정보 기반 심리 분석)

  • Nam, Ji-Hyo;Lee, Min-Jung;Oh, Heung-Min;Kim, Kwang Baek
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2016.10a / pp.514-516 / 2016
  • Individuals differ in their preferred colors, so color can be used to analyze personal tendencies. In general, warm colors are bright and warm and convey vitality and positivity, while cool colors convey coldness, composure, and calm. The meaning a color carries varies with an individual's environment, disposition, gender, and age. Color preference generally refers to the degree to which an individual likes a color; it is a highly personal color shaped by the individual's disposition, situation, and experience. In this paper, the CRR psychological test, which analyzes color preference, and the Flood Fill algorithm are applied: the user fills a picture with colors, and the dominant and secondary colors are each fed into a Decision Tree. Based on the decision tree result and a linked database, we propose a method for analyzing an individual's psychological state.

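The Flood Fill step can be illustrated with a minimal 4-connected fill on a small grid. This is a generic sketch; the paper's image handling and CRR color mapping are not reproduced.

```python
# Minimal 4-connected flood fill sketch (illustrative; not the paper's pipeline).
from collections import deque

def flood_fill(grid, start, new_color):
    rows, cols = len(grid), len(grid[0])
    r0, c0 = start
    old_color = grid[r0][c0]
    if old_color == new_color:
        return grid
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if 0 <= r < rows and 0 <= c < cols and grid[r][c] == old_color:
            grid[r][c] = new_color                       # recolor this cell
            queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return grid

picture = [[0, 0, 1],
           [0, 1, 1],
           [1, 1, 0]]
print(flood_fill(picture, (0, 0), 7))                    # the chosen region gets color 7
```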

Improved Decision Tree Classification (IDT) Algorithm For Social Media Data

  • Anu Sharma;M.K Sharma;R.K Dwivedi
    • International Journal of Computer Science & Network Security / v.24 no.6 / pp.83-88 / 2024
  • In this paper we apply classification algorithms to social networking data. We propose a new classification algorithm called the Improved Decision Tree (IDT). Our model provides better classification accuracy than existing systems for classifying social network data. We compare the accuracy of several familiar classification algorithms with that of the proposed algorithm, using Support Vector Machines, Naïve Bayes, k-Nearest Neighbors, and decision trees on a social media dataset. MATLAB was used for the experiments. The results show that the proposed algorithm achieves the best performance, with an accuracy of 84.66%.
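
The baseline comparison described (SVM, Naïve Bayes, k-NN, decision tree) was run in MATLAB; a hedged scikit-learn equivalent on a placeholder dataset looks roughly like this. The IDT algorithm itself is not reproduced here.

```python
# Sketch of the kind of baseline accuracy comparison described (scikit-learn stand-in
# for the paper's MATLAB experiments; the dataset is a placeholder).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
models = {
    "SVM": SVC(),
    "Naive Bayes": GaussianNB(),
    "k-NN": KNeighborsClassifier(),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: {acc:.3f}")          # 5-fold cross-validated accuracy per classifier
```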

A Study on the Improvement of Injection Molding Process Using CAE and Decision-tree (CAE와 Decision-tree를 이용한 사출성형 공정개선에 관한 연구)

  • Hwang, Soonhwan;Han, Seong-Ryeol;Lee, Hoojin
    • Journal of the Korea Academia-Industrial cooperation Society / v.22 no.4 / pp.580-586 / 2021
  • The CAT methodology is a numerical analysis technique using CAE. Recently, methodologies that apply artificial intelligence techniques to simulation have been studied. A previous study compared deformation results across injection molding process settings using a machine learning technique. Although an MLP has excellent prediction performance, it offers no explanation of its decision process and acts like a black box. In this study, data was generated using Autodesk Moldflow 2018, an injection molding analysis software. Several machine learning models were developed using RapidMiner version 9.5, a machine learning platform, and their root mean square errors were compared. The decision tree showed better prediction performance (lower RMSE) than the other machine learning techniques. The number of classification criteria can be increased through the Maximal Depth parameter, which determines the size of the decision tree, but the complexity increases as well. The simulation showed that selecting an intermediate value satisfying the constraint based on the changed position yielded a 7.7% improvement over the previous simulation.
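
The Maximal Depth / complexity trade-off mentioned can be illustrated with a small regression-tree sweep on synthetic data. This is hedged: the Moldflow data and the RapidMiner models are not reproduced.

```python
# Illustrative sketch of the Maximal Depth vs. error trade-off (synthetic data;
# not the Moldflow/RapidMiner experiment from the paper).
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (400, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.2, 400)          # stand-in response variable
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for depth in (2, 4, 6, 8, 12):
    model = DecisionTreeRegressor(max_depth=depth, random_state=0).fit(X_tr, y_tr)
    rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
    print(f"max_depth={depth}: RMSE={rmse:.3f}")       # deeper trees give more, finer rules
```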

Interpretability Comparison of Popular Decision Tree Algorithms (대표적인 의사결정나무 알고리즘의 해석력 비교)

  • Hong, Jung-Sik;Hwang, Geun-Seong
    • Journal of Korean Society of Industrial and Systems Engineering / v.44 no.2 / pp.15-23 / 2021
  • Most open-source decision tree algorithms are based on three splitting criteria (Entropy, Gini Index, and Gain Ratio), so the advantages and disadvantages of these three popular algorithms need to be studied more thoroughly. Previous comparisons of the three algorithms have mainly focused on predictive performance. In this work, we conducted a comparative experiment on the splitting criteria of three decision trees, focusing on their interpretability. Depth, homogeneity, coverage, lift, and stability were used as indicators of interpretability. To measure the stability of decision trees, we present measures of the stability of the root node and of the dominating rules, based on a measure of tree similarity. Using 10 datasets collected from UCI and Kaggle, we compare the interpretability of DT (Decision Tree) algorithms based on the three splitting criteria. The results show that the GR (Gain Ratio) branch-based DT algorithm performs well in terms of lift and homogeneity, while the GINI (Gini Index) and ENT (Entropy) branch-based DT algorithms perform well in terms of coverage. With respect to stability, considering both the similarity of the dominating rules and the similarity of the root node, the DT algorithm based on the ENT splitting criterion shows the best results.
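
For reference, the three splitting criteria compared can be computed directly from their standard textbook definitions (this is not code from the paper):

```python
# Standard definitions of the three splitting criteria compared in the paper.
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def gini(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(1.0 - (p ** 2).sum())

def gain_ratio(parent, children):
    n = len(parent)
    gain = entropy(parent) - sum(len(c) / n * entropy(c) for c in children)   # information gain
    split_info = -sum(len(c) / n * np.log2(len(c) / n) for c in children if len(c))
    return gain / split_info if split_info else 0.0

parent = np.array([0, 0, 0, 1, 1, 1, 1, 0])
children = [parent[:5], parent[5:]]            # a hypothetical binary split
print(entropy(parent), gini(parent), gain_ratio(parent, children))
```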

Case Study of CRM Application Using Improvement Method of Fuzzy Decision Tree Analysis (퍼지의사결정나무 개선방법을 이용한 CRM 적용 사례)

  • Yang, Seung-Jeong;Rhee, Jong-Tae
    • The Journal of the Korea Contents Association / v.7 no.8 / pp.13-20 / 2007
  • The decision tree is one of the most useful analysis methods for various data mining tasks, including prediction and classification, on massive data. A decision tree grows by splitting nodes, during which purity increases. Splitting should stop when purity no longer increases effectively or when new leaves do not contain a meaningful number of records. Pruning is performed when a branch does not show a certain level of performance. Pruning changes the structure of the decision tree, which implies that the earlier splitting of the parent node was not effective; it also implies that the splitting of the ancestor nodes was not effective and that the choices of attributes and criteria used to split them were unsuccessful. Note that new attributes or criteria might be selected to re-split such nodes with better results. In this paper, we propose an integrated procedure that modifies the decision tree using fuzzy theory and re-splitting.
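
A generic sketch of the stop-splitting check described, based on purity gain and a minimum leaf size, follows. It is illustrative only; the paper's fuzzy modification procedure is not reproduced.

```python
# Generic sketch of the stopping logic described: stop splitting when the purity
# gain is negligible or a child would hold too few records.
def should_split(parent_labels, left_labels, right_labels,
                 min_gain=0.01, min_leaf=5):
    """Split only if the Gini purity gain is meaningful and both children are large enough."""
    def gini(labels):
        n = len(labels)
        return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

    n = len(parent_labels)
    weighted = (len(left_labels) / n * gini(left_labels)
                + len(right_labels) / n * gini(right_labels))
    gain = gini(parent_labels) - weighted              # purity improvement from the split
    return gain >= min_gain and min(len(left_labels), len(right_labels)) >= min_leaf

parent = ["a"] * 6 + ["b"] * 6
print(should_split(parent, ["a"] * 6, ["b"] * 6))      # clean split -> True
print(should_split(parent, parent[:11], parent[11:]))  # one tiny child -> False
```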