• Title/Summary/Keyword: naive bayes


An Efficient Algorithm for NaiveBayes with Matrix Transposition (행렬 전치를 이용한 효율적인 NaiveBayes 알고리즘)

  • Lee, Jae-Moon
    • The KIPS Transactions:PartB / v.11B no.1 / pp.117-124 / 2004
  • This paper proposes an efficient NaiveBayes algorithm without loss of accuracy. The proposed method uses the transposition of category vectors to minimize the probability computations of NaiveBayes. It was implemented on an existing text categorization framework, AI::Categorizer, and compared with the conventional NaiveBayes on the well-known Reuters-21578 data. The comparison shows that the proposed method runs about twice as fast as the conventional NaiveBayes.
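
The speed-up described above comes from turning the per-document, per-category probability loop into bulk matrix arithmetic. The following sketch (mine, not the paper's AI::Categorizer implementation; all names are illustrative) shows how multinomial naive Bayes scoring reduces to one product between the document-term matrix and the transposed class/term log-probability matrix.

```python
import numpy as np

def train_multinomial_nb(X, y, alpha=1.0):
    """Estimate class log-priors and per-class term log-probabilities.
    X: (n_docs, n_terms) term-count matrix, y: (n_docs,) integer labels."""
    classes = np.unique(y)
    log_prior = np.empty(len(classes))
    log_prob = np.empty((len(classes), X.shape[1]))   # rows = classes
    for i, c in enumerate(classes):
        Xc = X[y == c]
        log_prior[i] = np.log(Xc.shape[0] / X.shape[0])
        counts = Xc.sum(axis=0) + alpha               # Laplace smoothing
        log_prob[i] = np.log(counts / counts.sum())
    return classes, log_prior, log_prob

def predict(X, classes, log_prior, log_prob):
    # One matrix product against the transposed class/term matrix scores
    # every document against every class at once.
    scores = X @ log_prob.T + log_prior
    return classes[np.argmax(scores, axis=1)]

# toy usage
X = np.array([[2, 0, 1], [0, 3, 0], [1, 1, 0], [0, 2, 1]])
y = np.array([0, 1, 0, 1])
print(predict(X, *train_multinomial_nb(X, y)))
```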

A Study on Incremental Learning Model for Naive Bayes Text Classifier (Naive Bayes 문서 분류기를 위한 점진적 학습 모델 연구)

  • 김제욱;김한준;이상구
    • Proceedings of the Korea Database Society Conference / 2001.06a / pp.331-341 / 2001
  • This paper proposes a new learning model for the Naive Bayes text classifier. In this model, the classifier is retrained with a small number of training documents selected from a pool of unlabeled documents. We show that, with this learning approach, the accuracy of the classifier can be improved substantially at a small cost. Having an algorithm pick the most informative documents from the unlabeled pool and then having an expert label them to form the training set is called selective sampling. This paper applies selective sampling to the Naive Bayes text classifier. The proposed method uses the Mean Absolute Deviation and an entropy measure as the criteria for selecting retraining documents from the unlabeled pool. Experiments show that the proposed method improves the performance of the Naive Bayes classifier more than the existing approach based on a confidence measure.
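
As a rough illustration of the selective-sampling loop described in the abstract, the sketch below queries the unlabeled pool with an entropy criterion; the paper's Mean Absolute Deviation measure and experimental setup are not reproduced, and the data here is synthetic.

```python
import numpy as np
from sklearn.naive_bayes import MultinomialNB

def select_by_entropy(clf, X_unlabeled, k=2):
    """Pick the k unlabeled documents whose predicted class distribution has
    the highest entropy, i.e. where the current classifier is least certain."""
    proba = clf.predict_proba(X_unlabeled)
    entropy = -np.sum(proba * np.log(proba + 1e-12), axis=1)
    return np.argsort(entropy)[-k:]      # indices to hand to an expert for labeling

# toy usage: fit on a small labeled set, then query the unlabeled pool
rng = np.random.default_rng(0)
X_labeled, y_labeled = rng.integers(0, 4, (20, 6)), rng.integers(0, 2, 20)
X_pool = rng.integers(0, 4, (50, 6))

clf = MultinomialNB().fit(X_labeled, y_labeled)
query_idx = select_by_entropy(clf, X_pool, k=2)
# an expert would label X_pool[query_idx]; append to the labeled set and refit
print(query_idx)
```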


A Novel Posterior Probability Estimation Method for Multi-label Naive Bayes Classification

  • Kim, Hae-Cheon;Lee, Jaesung
    • Journal of the Korea Society of Computer and Information / v.23 no.6 / pp.1-7 / 2018
  • Multi-label classification finds the multiple labels associated with an input pattern. It can be achieved by extending conventional single-label classification; common extension techniques are known as binary relevance, label powerset, and classifier chains. However, most extended multi-label naive Bayes classifiers cannot estimate posterior probabilities accurately because they do not reflect label dependency, and the remaining extended classifiers estimate posterior probabilities unstably, depending on the label selection order. To estimate posterior probabilities well, we propose a new estimation method that efficiently reflects the relationships among all labels. The proposed method reflects the correlation between labels, and experiments confirm that an extended multi-label naive Bayes classifier using the proposed method achieves higher accuracy than the existing multi-label naive Bayes classifiers.
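
For context, the binary relevance extension mentioned in the abstract can be sketched as one independent naive Bayes model per label; this is the baseline that ignores label dependency, not the paper's proposed posterior estimation method. Data and shapes below are illustrative.

```python
import numpy as np
from sklearn.naive_bayes import MultinomialNB
from sklearn.multioutput import MultiOutputClassifier

# X: (n_samples, n_features) term counts; Y: (n_samples, n_labels) binary indicators
rng = np.random.default_rng(0)
X = rng.integers(0, 5, (100, 20))
Y = rng.integers(0, 2, (100, 3))

# Binary relevance: one independent naive Bayes model per label,
# ignoring any dependency between the labels.
br = MultiOutputClassifier(MultinomialNB()).fit(X, Y)
print(br.predict(X[:5]))
```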

Naive Bayes Learning Algorithm based on Map-Reduce Programming Model (Map-Reduce 프로그래밍 모델 기반의 나이브 베이스 학습 알고리즘)

  • Kang, Dae-Ki
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2011.10a / pp.208-209 / 2011
  • In this paper, we introduce a Naive Bayes learning algorithm for learning and reasoning in a Map-Reduce based environment. For this purpose, we use Apache Mahout to run Distributed Naive Bayes on University of California, Irvine (UCI) benchmark data sets. The experimental results show that Apache Mahout's Distributed Naive Bayes algorithm is comparable to WEKA's Naive Bayes algorithm in terms of performance. These results indicate that, in future Big Data environments, Map-Reduce based systems such as Apache Mahout can be promising for machine learning.
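
The reason naive Bayes fits the Map-Reduce model so well is that its sufficient statistics are simple counts that add up across data shards. The pure-Python map/reduce sketch below illustrates only that decomposition; it is not Apache Mahout's Distributed Naive Bayes.

```python
from collections import Counter
from functools import reduce

def map_counts(shard):
    """Map step: per-shard (class, term) counts and per-class document counts."""
    term_counts, class_counts = Counter(), Counter()
    for label, tokens in shard:
        class_counts[label] += 1
        for t in tokens:
            term_counts[(label, t)] += 1
    return term_counts, class_counts

def reduce_counts(a, b):
    """Reduce step: counts from different shards simply add."""
    return a[0] + b[0], a[1] + b[1]

shards = [
    [("spam", ["buy", "now"]), ("ham", ["meeting", "today"])],
    [("spam", ["buy", "cheap"]), ("ham", ["lunch", "today"])],
]
term_counts, class_counts = reduce(reduce_counts, map(map_counts, shards))
print(class_counts)   # e.g. Counter({'spam': 2, 'ham': 2})
```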


Naive Bayes classifiers boosted by sufficient dimension reduction: applications to top-k classification

  • Yang, Su Hyeong;Shin, Seung Jun;Sung, Wooseok;Lee, Choon Won
    • Communications for Statistical Applications and Methods / v.29 no.5 / pp.603-614 / 2022
  • The naive Bayes classifier is one of the most straightforward classification tools and directly estimates the class probability. However, because it relies on the independence assumption for the predictors, which is rarely satisfied in real-world problems, its application is limited in practice. In this article, we propose employing sufficient dimension reduction (SDR) to substantially improve the performance of the naive Bayes classifier, which often deteriorates when the number of predictors is not restrictively small. This is not surprising, as SDR reduces the predictor dimension without sacrificing classification information, and predictors in the reduced space are constructed to be uncorrelated; therefore, SDR leads the naive Bayes classifier to no longer be naive. We applied the proposed naive Bayes classifier after SDR to build a recommendation system for eyewear frames based on customers' face shapes, demonstrating its utility in the top-k classification problem.
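
A rough way to see the effect of a supervised reduction step before naive Bayes is sketched below; LinearDiscriminantAnalysis is used only as a stand-in for the reduction, not the SDR estimator of the paper, and the wine data is just a convenient public dataset.

```python
from sklearn.datasets import load_wine
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

X, y = load_wine(return_X_y=True)

# Plain naive Bayes on the full predictor space
plain = GaussianNB()
# Reduce to a low-dimensional supervised subspace first, then apply naive Bayes;
# the reduced predictors are closer to uncorrelated, easing the NB assumption.
reduced = make_pipeline(LinearDiscriminantAnalysis(n_components=2), GaussianNB())

print("plain NB :", cross_val_score(plain, X, y, cv=5).mean())
print("reduced NB:", cross_val_score(reduced, X, y, cv=5).mean())
```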

Accurate Intrusion Detection using n-Gram Augmented Naive Bayes (N-Gram 증강 나이브 베이스를 이용한 정확한 침입 탐지)

  • Kang, Dae-Ki
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2008.10a / pp.285-288 / 2008
  • In many intrusion detection applications, the n-gram approach has been widely applied. However, it has shown a few problems, including double counting of features. To address these problems, we applied n-gram augmented Naive Bayes directly to classify intrusive sequences and compared its performance with that of Naive Bayes and Support Vector Machines (SVM) with n-gram features in experiments on host-based intrusion detection benchmark data sets. Experimental results on the University of New Mexico (UNM) benchmark data sets show that the n-gram augmented method, which solves the independence violation that arises when n-gram features are fed directly to Naive Bayes (i.e., Naive Bayes with n-gram features), yields intrusion detectors with higher accuracy than Naive Bayes with n-gram features and comparable accuracy to SVM with n-gram features.
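
For reference, the comparison baseline "Naive Bayes with n-gram features" can be sketched as below, with overlapping word 2-grams fed to a multinomial model; the paper's n-gram augmented variant, which avoids the resulting double counting, is not reproduced, and the traces are toy data.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy system-call traces written as space-separated tokens (illustrative only).
traces = [
    "open read write close", "open read read write close",
    "open mmap exec close", "open mmap exec exec close",
]
labels = [0, 0, 1, 1]   # 0 = normal, 1 = intrusive

# Baseline: word-level 2-grams fed to multinomial naive Bayes. Overlapping
# n-grams violate the independence assumption the augmented model addresses.
model = make_pipeline(
    CountVectorizer(analyzer="word", ngram_range=(2, 2)),
    MultinomialNB(),
).fit(traces, labels)

print(model.predict(["open read write close", "open mmap exec close"]))
```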


Mutual Information in Naive Bayes with Kernel Density Estimation (나이브 베이스에서의 커널 밀도 측정과 상호 정보량)

  • Xiang, Zhongliang;Yu, Xiangru;Kang, Dae-Ki
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2014.05a / pp.86-88 / 2014
  • The Naive Bayes (NB) independence assumption can harm classification on real-world data. To relax this assumption, we propose an approach called Naive Bayes Mutual Information Attribute Weighting with Smooth Kernel Density Estimation (NBMIKDE), which combines smooth kernel density estimation for attributes with an attribute weighting method based on a mutual information measure.
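
The two ingredients named in the abstract, per-class kernel density estimates for an attribute and mutual-information attribute weights, can be sketched as follows on synthetic data; the way they are combined here (weighting the per-attribute log-likelihoods) is my reading, not the paper's NBMIKDE formula.

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3)) + np.repeat([[0, 0, 0], [1.5, 0, 0]], 100, axis=0)
y = np.repeat([0, 1], 100)

# Mutual information between each attribute and the class, used as attribute weights.
weights = mutual_info_classif(X, y, random_state=0)

# Per-class kernel density estimate of attribute 0, replacing the usual
# parametric assumption for P(x_0 | class).
kde_by_class = {c: gaussian_kde(X[y == c, 0]) for c in (0, 1)}

x0 = 1.0
log_lik = {c: np.log(kde_by_class[c](x0)[0]) for c in (0, 1)}
# A weighted NB would add weights[j] * log P(x_j | c) to each class score.
print(weights, log_lik)
```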


Improving Multinomial Naive Bayes Text Classifier (다항시행접근 단순 베이지안 문서분류기의 개선)

  • 김상범;임해창
    • Journal of KIISE:Software and Applications / v.30 no.3_4 / pp.259-267 / 2003
  • Though naive Bayes text classifiers are widely used because of their simplicity, techniques for improving their performance have rarely been studied. In this paper, we propose and evaluate some general and effective techniques for improving the performance of the naive Bayes text classifier. We suggest document-model-based parameter estimation and document length normalization to alleviate the problems of the traditional multinomial approach to text classification. In addition, a mutual-information-weighted naive Bayes text classifier is proposed to increase the effect of highly informative words. Our techniques are evaluated on the Reuters-21578 and 20 Newsgroups collections, and significant improvements are obtained over the existing multinomial naive Bayes approach.
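
One of the listed techniques, document length normalization, can be sketched by rescaling each document's term counts to a common length before fitting a multinomial naive Bayes model; the paper's document-model-based estimation and MI weighting are not shown, and the corpus below is toy data.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

docs = ["price oil price market", "game team win game game",
        "oil market stocks", "team game score"]
labels = [0, 1, 0, 1]

counts = CountVectorizer().fit_transform(docs).toarray().astype(float)

# Length normalization: scale every document's counts to a common length so
# that long documents do not dominate the multinomial parameter estimates.
target_len = 10.0
normalized = counts / counts.sum(axis=1, keepdims=True) * target_len

clf = MultinomialNB().fit(normalized, labels)
print(clf.predict(normalized))
```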

Breast Cancer Diagnosis using Naive Bayes Analysis Techniques (Naive Bayes 분석기법을 이용한 유방암 진단)

  • Park, Na-Young;Kim, Jang-Il;Jung, Yong-Gyu
    • Journal of Service Research and Studies / v.3 no.1 / pp.87-93 / 2013
  • Breast cancer is known as a disease that occurs mostly in developed countries, but in recent years its incidence among Korean women has been rising steadily. Breast cancer usually occurs in women over 50; in Korea, however, the incidence among younger women in their 40s is rising more steadily than in the West. Therefore, building guidelines for the accurate diagnosis of breast cancer in adult Korean women is an urgent task. In this paper, we show how data mining techniques can be used to predict breast cancer. Data mining refers to the process of finding regular patterns or relationships among variables within a database; through sophisticated analysis with such models, useful information that is not easily revealed can be found. In this paper, Decision Tree and Naive Bayes analysis techniques were compared experimentally for diagnosing breast cancer. The Decision Tree was built with the C4.5 algorithm and its classification accuracy was fairly good, but the Naive Bayes method showed better accuracy than the Decision Tree method.
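
A comparable experiment can be sketched with scikit-learn's Wisconsin breast cancer dataset as a stand-in for the paper's data; note the decision tree here is CART rather than C4.5, so the numbers are only indicative.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# 10-fold cross-validated accuracy for both classifiers
for name, clf in [("Naive Bayes", GaussianNB()),
                  ("Decision Tree", DecisionTreeClassifier(random_state=0))]:
    acc = cross_val_score(clf, X, y, cv=10).mean()
    print(f"{name}: {acc:.3f}")
```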


Scalable and Accurate Intrusion Detection using n-Gram Augmented Naive Bayes and Generalized k-Truncated Suffix Tree (N-그램 증강 나이브 베이스 알고리즘과 일반화된 k-절단 서픽스트리를 이용한 확장가능하고 정확한 침입 탐지 기법)

  • Kang, Dae-Ki;Hwang, Gi-Hyun
    • Journal of the Korea Institute of Information and Communication Engineering / v.13 no.4 / pp.805-812 / 2009
  • In many intrusion detection applications, the n-gram approach has been widely applied. However, it has shown a few problems, including lack of scalability and double counting of features. To address these problems, we applied n-gram augmented Naive Bayes with a k-truncated suffix tree (k-TST) storage mechanism directly to classify intrusive sequences and compared its performance with that of Naive Bayes and Support Vector Machines (SVM) with n-gram features in experiments on host-based intrusion detection benchmark data sets. Experimental results on the University of New Mexico (UNM) benchmark data sets show that the n-gram augmented method, which solves the independence violation that arises when n-gram features are fed directly to Naive Bayes (i.e., Naive Bayes with n-gram features), yields intrusion detectors with higher accuracy than Naive Bayes with n-gram features and comparable accuracy to SVM with n-gram features. For scalable and efficient counting of n-gram features, we use the k-truncated suffix tree mechanism to store n-gram features. With this storage mechanism, we tested the performance of the classifiers up to 20-grams, which illustrates the scalability and accuracy of n-gram augmented Naive Bayes with k-truncated suffix tree storage.
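
The storage idea can be approximated with a depth-limited trie that counts every n-gram of length at most k, as sketched below; this is a simple stand-in for the generalized k-truncated suffix tree, not the paper's data structure.

```python
def count_ngrams_truncated(sequence, k):
    """Count every n-gram of length 1..k by inserting each suffix of the
    sequence into a trie truncated at depth k."""
    trie = {}
    for start in range(len(sequence)):
        node = trie
        for symbol in sequence[start:start + k]:   # truncate each suffix at depth k
            entry = node.setdefault(symbol, {"_count": 0, "_children": {}})
            entry["_count"] += 1
            node = entry["_children"]
    return trie

def collect(trie, prefix=()):
    """Flatten the trie into an {n-gram: count} mapping for inspection."""
    out = {}
    for sym, entry in trie.items():
        gram = prefix + (sym,)
        out[gram] = entry["_count"]
        out.update(collect(entry["_children"], gram))
    return out

# toy system-call trace
calls = ["open", "read", "read", "write", "close"]
print(collect(count_ngrams_truncated(calls, k=3)))
```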