• Title/Summary/Keyword: 나이브베이스 (naive Bayes)

18 search results

Enhancing Red Tides Prediction using Fuzzy Reasoning and Naive Bayes Classifier (나이브베이스 분류자와 퍼지 추론을 이용한 적조 발생 예측의 성능향상)

  • Park, Sun;Lee, Seong-Ro
    • Journal of the Korea Institute of Information and Communication Engineering / v.15 no.9 / pp.1881-1888 / 2011
  • Red tide is a natural phenomenon in which harmful algae bloom and fish and shellfish die en masse. Damage to sea farming from red tides occurs every year, and it can be minimized by predicting red tide blooms. Red tide prediction using a naive Bayes classifier achieves good prediction results; however, the naive Bayes method only determines whether a red tide bloom will occur and cannot estimate how the density of red tide algae will increase. In this paper, we propose a red tide bloom prediction method that uses fuzzy reasoning together with a naive Bayes classifier. The proposed method improves the precision of red tide prediction and can forecast the increase in red tide algae density.
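
For a rough sense of how a naive Bayes bloom classifier might be combined with fuzzy reasoning, the sketch below trains a GaussianNB model on made-up environmental features (water temperature, salinity, and a nutrient index are assumptions, not the paper's inputs) and grades the posterior bloom probability with simple triangular membership functions; the paper's actual fuzzy rule base is not reproduced here.

```python
# Minimal, hypothetical sketch: naive Bayes bloom classifier + fuzzy grading of its posterior.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
# Synthetic observations: water temperature (degC), salinity (psu), nutrient index.
X = rng.normal(loc=[24.0, 32.0, 1.0], scale=[3.0, 1.5, 0.5], size=(200, 3))
y = (X[:, 0] > 25.0).astype(int)          # toy label: bloom when the water is warm

nb = GaussianNB().fit(X, y)
p_bloom = nb.predict_proba([[27.0, 31.5, 1.4]])[0, 1]   # posterior P(bloom | x)

def triangular(x, a, b, c):
    """Triangular fuzzy membership function supported on (a, c) and peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Fuzzy sets over the posterior, grading the expected increase in algal density.
memberships = {
    "low increase":    triangular(p_bloom, -0.5, 0.0, 0.5),
    "medium increase": triangular(p_bloom, 0.0, 0.5, 1.0),
    "high increase":   triangular(p_bloom, 0.5, 1.0, 1.5),
}
print(round(p_bloom, 3), max(memberships, key=memberships.get))
```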

Accurate Intrusion Detection using n-Gram Augmented Naive Bayes (N-Gram 증강 나이브 베이스를 이용한 정확한 침입 탐지)

  • Kang, Dae-Ki
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2008.10a / pp.285-288 / 2008
  • The n-gram approach has been widely applied in many intrusion detection applications. However, it suffers from a few problems, including double counting of features. To address these problems, we applied n-gram augmented Naive Bayes directly to classify intrusive sequences and compared its performance with that of Naive Bayes and Support Vector Machines (SVM) with n-gram features in experiments on host-based intrusion detection benchmark data sets. Experimental results on the University of New Mexico (UNM) benchmark data sets show that the n-gram augmented method, which solves the independence violation that occurs when n-gram features are applied directly to Naive Bayes (i.e., Naive Bayes with n-gram features), yields intrusion detectors with higher accuracy than Naive Bayes with n-gram features and accuracy comparable to SVM with n-gram features.

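As a rough illustration of the "Naive Bayes with n-gram features" baseline that the abstract compares against (not the augmented method itself), the sketch below turns toy system-call traces into word-level n-gram counts and fits a multinomial naive Bayes model; the traces and labels are invented, not UNM data.

```python
# Hypothetical baseline: n-gram counts over call traces fed to multinomial naive Bayes.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

traces = [
    "open read read write close",      # toy normal trace
    "open read write close",
    "open mmap exec socket connect",   # toy intrusive trace
    "socket connect exec write",
]
labels = [0, 0, 1, 1]                  # 0 = normal, 1 = intrusion

# Word-level n-grams (here 1- to 3-grams) over each system-call sequence.
model = make_pipeline(
    CountVectorizer(analyzer="word", ngram_range=(1, 3)),
    MultinomialNB(),
)
model.fit(traces, labels)
print(model.predict(["open read socket connect exec"]))
```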

Naive Bayes Learning Algorithm based on Map-Reduce Programming Model (Map-Reduce 프로그래밍 모델 기반의 나이브 베이스 학습 알고리즘)

  • Kang, Dae-Ki
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2011.10a / pp.208-209 / 2011
  • In this paper, we introduce a Naive Bayes learning algorithm for learning and reasoning in a Map-Reduce model based environment. For this purpose, we use Apache Mahout to execute Distributed Naive Bayes on University of California, Irvine (UCI) benchmark data sets. The experimental results show that Apache Mahout's Distributed Naive Bayes algorithm is comparable to WEKA's Naive Bayes algorithm in terms of performance. These results indicate that in the future Big Data environment, Map-Reduce model based systems such as Apache Mahout can be promising for machine learning.

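As a conceptual sketch of why naive Bayes decomposes so naturally into the Map-Reduce programming model, the pure-Python snippet below expresses training as a map step that emits per-record count tables and a reduce step that merges them. It only illustrates the decomposition; it is not Apache Mahout's implementation, and the toy records are invented.

```python
# Conceptual map/reduce decomposition of naive Bayes count collection (not Mahout).
from collections import Counter
from functools import reduce

records = [
    ({"outlook": "sunny", "wind": "weak"}, "no"),     # toy UCI-style records
    ({"outlook": "sunny", "wind": "strong"}, "no"),
    ({"outlook": "rain", "wind": "weak"}, "yes"),
    ({"outlook": "overcast", "wind": "weak"}, "yes"),
]

def map_counts(record):
    """Map step: emit a class-count key and (class, attribute, value) count keys for one record."""
    features, label = record
    counts = Counter({("class", label): 1})
    for attr, value in features.items():
        counts[(label, attr, value)] += 1
    return counts

def reduce_counts(a, b):
    """Reduce step: merge partial count tables."""
    a.update(b)          # Counter.update adds counts key-wise
    return a

# In a real Map-Reduce system the map calls run in parallel across the cluster;
# here they are simply applied in sequence and folded together.
totals = reduce(reduce_counts, map(map_counts, records), Counter())
print(totals[("class", "yes")], totals[("yes", "outlook", "rain")])
```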

Scalable and Accurate Intrusion Detection using n-Gram Augmented Naive Bayes and Generalized k-Truncated Suffix Tree (N-그램 증강 나이브 베이스 알고리즘과 일반화된 k-절단 서픽스트리를 이용한 확장가능하고 정확한 침입 탐지 기법)

  • Kang, Dae-Ki;Hwang, Gi-Hyun
    • Journal of the Korea Institute of Information and Communication Engineering / v.13 no.4 / pp.805-812 / 2009
  • The n-gram approach has been widely applied in many intrusion detection applications. However, it suffers from a few problems, including unscalability and double counting of features. To address these problems, we applied n-gram augmented Naive Bayes with a k-truncated suffix tree (k-TST) storage mechanism directly to classify intrusive sequences and compared its performance with that of Naive Bayes and Support Vector Machines (SVM) with n-gram features in experiments on host-based intrusion detection benchmark data sets. Experimental results on the University of New Mexico (UNM) benchmark data sets show that the n-gram augmented method, which solves the independence violation that occurs when n-gram features are applied directly to Naive Bayes (i.e., Naive Bayes with n-gram features), yields intrusion detectors with higher accuracy than Naive Bayes with n-gram features and accuracy comparable to SVM with n-gram features. For scalable and efficient counting of n-gram features, we use the k-truncated suffix tree mechanism to store them. With this storage mechanism, we tested the performance of the classifiers up to 20-grams, which illustrates the scalability and accuracy of n-gram augmented Naive Bayes with k-truncated suffix tree storage.
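
The sketch below illustrates the storage idea with a depth-limited counting trie, a simplified stand-in for the generalized k-truncated suffix tree: every suffix of a call sequence is inserted up to depth k, so counts for all n-grams with n ≤ k are available without materializing an explicit feature vector. The sequence and k are toy values, and this is not a reimplementation of the paper's k-TST.

```python
# Simplified depth-limited counting trie as a stand-in for a k-truncated suffix tree.
from collections import defaultdict

def trie_node():
    return {"count": 0, "children": defaultdict(trie_node)}

def insert_suffixes(root, sequence, k):
    """Insert every suffix of `sequence`, truncated to depth k, bumping n-gram counts."""
    for start in range(len(sequence)):
        node = root
        for symbol in sequence[start:start + k]:
            node = node["children"][symbol]
            node["count"] += 1          # the path root -> node spells an n-gram; count it

def ngram_count(root, ngram):
    """Look up the count of one n-gram by walking the trie."""
    node = root
    for symbol in ngram:
        if symbol not in node["children"]:
            return 0
        node = node["children"][symbol]
    return node["count"]

root = trie_node()
insert_suffixes(root, ["open", "read", "read", "write", "close"], k=3)
print(ngram_count(root, ["read", "read", "write"]))   # -> 1
print(ngram_count(root, ["read"]))                    # -> 2
```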

Naive Bayes Approach in Kernel Density Estimation (커널 밀도 측정에서의 나이브 베이스 접근 방법)

  • Xiang, Zhongliang;Yu, Xiangru;Al-Absi, Ahmed Abdulhakim;Kang, Dae-Ki
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2014.05a / pp.76-78 / 2014
  • Naive Bayes (NB) learning is a popular, fast, and effective supervised learning method for labeled datasets, and it performs well even when the data contain some noise. However, the conditional independence assumption of NB learning restricts its ability to handle real-world data. Researchers have proposed many methods to relax the NB assumption, including attribute weighting and kernel density estimation. In this paper, we propose a novel approach called Naive Bayes Based on Attribute Weighting in Kernel Density Estimation (NBAWKDE) that improves the classification ability of NB learning by combining kernel density estimation and attribute weighting.

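A bare-bones sketch of the two ingredients the abstract combines is given below: per-class, per-attribute Gaussian kernel density estimates replace the usual parametric likelihoods, and per-attribute weights scale each attribute's log-likelihood. The data are synthetic and the weights are fixed by hand; learning the weights, as NBAWKDE does, is not attempted here.

```python
# Kernel-density naive Bayes with hand-picked attribute weights (illustrative only).
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(3.0, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
weights = np.array([1.0, 0.5])          # hand-picked attribute weights (assumption)

# One univariate KDE per (class, attribute) pair, keeping the NB independence form.
kdes = {c: [gaussian_kde(X[y == c, j]) for j in range(X.shape[1])] for c in (0, 1)}
priors = {c: np.mean(y == c) for c in (0, 1)}

def predict(x):
    scores = {}
    for c in (0, 1):
        log_lik = sum(w * np.log(kdes[c][j](x[j])[0] + 1e-12)
                      for j, w in enumerate(weights))
        scores[c] = np.log(priors[c]) + log_lik
    return max(scores, key=scores.get)

print(predict(np.array([2.8, 3.1])))    # expected to fall in class 1's region
```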

Analysis of high school students' views on science-technology-society (HS-VOSTS) questionnaire results (고등학생을 위한 과학-기술-사회에 대한 시각 (HS-VOST) 설문조사 결과 분석)

  • Kang, Dae-Ki
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2011.10a / pp.201-203 / 2011
  • We report an experimental result of applying a data mining algorithm to analyze the questionnaire results of high school students' views on science-technology-society (HS-VOSTS). The preliminary empirical result of a Naive Bayes classifier on HS-VOSTS questionnaires from students at one South Korean university indicates that data mining algorithms can be effectively applied to automated knowledge discovery from students' survey data.

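As a hypothetical illustration of the kind of analysis described, the snippet below shows one common pattern: categorical questionnaire responses are integer-encoded and a categorical naive Bayes classifier is fit to them. The items, response codes, and target are invented placeholders, not HS-VOSTS data.

```python
# Hypothetical survey-analysis pattern: ordinal-encode responses, fit categorical NB.
from sklearn.naive_bayes import CategoricalNB
from sklearn.preprocessing import OrdinalEncoder

answers = [["agree", "disagree", "neutral"],
           ["agree", "agree", "agree"],
           ["disagree", "neutral", "disagree"],
           ["disagree", "disagree", "neutral"]]
group = [1, 1, 0, 0]                           # invented target variable

X = OrdinalEncoder().fit_transform(answers)    # map response strings to integer codes
clf = CategoricalNB().fit(X, group)
print(clf.predict(X))
```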

Naive Bayes Learner for Propositionalized Attribute Taxonomy (명제화된 어트리뷰트 택소노미를 이용하는 나이브 베이스 학습 알고리즘)

  • Kang, Dae-Ki
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2008.10a / pp.406-409 / 2008
  • We consider the problem of exploiting a taxonomy of propositionalized attributes in order to learn compact and robust classifiers. We introduce Propositionalized Attribute Taxonomy guided Naive Bayes Learner (PAT-NBL), an inductive learning algorithm that exploits a taxonomy of propositionalized attributes as prior knowledge to generate compact and accurate classifiers. PAT-NBL uses top-down and bottom-up search over the propositionalized attribute taxonomy and the data to find a locally optimal cut that corresponds to the instance space. Our experimental results on University of California-Irvine (UCI) repository data sets show that the proposed algorithm can generate classifiers that are sometimes comparable in compactness and accuracy to those produced by standard Naive Bayes learners.

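The toy sketch below shows only the key data transformation in this setting: given a cut through a taxonomy over propositionalized (binary) attributes, attributes under the same taxonomy node are merged (here by logical OR) before a naive Bayes model is trained. The taxonomy, the cut, and the data are hard-coded assumptions; PAT-NBL's top-down/bottom-up search for a locally optimal cut is not reproduced.

```python
# Toy illustration: merge propositionalized attributes under a taxonomy cut, then fit NB.
import numpy as np
from sklearn.naive_bayes import BernoulliNB

# Columns: four propositionalized attributes, e.g. color=red, color=blue, size=small, size=large.
X = np.array([[1, 0, 1, 0],
              [0, 1, 1, 0],
              [1, 0, 0, 1],
              [0, 1, 0, 1]])
y = np.array([1, 1, 0, 0])

# A hand-chosen cut that keeps the two "size" attributes but abstracts both
# "color" attributes into their parent taxonomy node.
cut = [[0, 1], [2], [3]]
X_cut = np.column_stack([X[:, group].max(axis=1) for group in cut])   # OR within each group

clf = BernoulliNB().fit(X_cut, y)
print(X_cut)
print(clf.predict(X_cut))
```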

Propositionalized Attribute Taxonomy Guided Naive Bayes Learning Algorithm (명제화된 어트리뷰트 택소노미를 이용하는 나이브 베이스 학습 알고리즘)

  • Kang, Dae-Ki;Cha, Kyung-Hwan
    • Journal of the Korea Institute of Information and Communication Engineering / v.12 no.12 / pp.2357-2364 / 2008
  • In this paper, we consider the problem of exploiting a taxonomy of propositionalized attributes in order to generate compact and robust classifiers. We introduce Propositionalized Attribute Taxonomy guided Naive Bayes Learner (PAT-NBL), an inductive learning algorithm that exploits a taxonomy of propositionalized attributes as prior knowledge to generate compact and accurate classifiers. PAT-NBL uses top-down and bottom-up search over the propositionalized attribute taxonomy and the data to find a locally optimal cut that corresponds to the instance space. Our experimental results on University of California-Irvine (UCI) repository data sets show that the proposed algorithm can generate classifiers that are sometimes comparable in compactness and accuracy to those produced by standard Naive Bayes learners.

Mutual Information in Naive Bayes with Kernel Density Estimation (나이브 베이스에서의 커널 밀도 측정과 상호 정보량)

  • Xiang, Zhongliang;Yu, Xiangru;Kang, Dae-Ki
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2014.05a / pp.86-88 / 2014
  • The Naive Bayes (NB) assumption has some harmful effects when classifying real-world data. To relax this assumption, we propose an approach called Naive Bayes Mutual Information Attribute Weighting with Smooth Kernel Density Estimation (NBMIKDE), which combines smooth kernel density estimation for attributes with an attribute weighting method based on a mutual information measure.

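As a small illustration of the attribute-weighting half of the idea, the sketch below estimates each attribute's mutual information with the class and normalizes it into a weight that could scale that attribute's kernel-density log-likelihood in the NB score. scikit-learn's mutual_info_classif and the Iris data are convenient stand-ins, not the paper's MI measure or datasets.

```python
# Mutual-information attribute weights (stand-in estimator and data, for illustration).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.feature_selection import mutual_info_classif

X, y = load_iris(return_X_y=True)
mi = mutual_info_classif(X, y, random_state=0)
weights = mi / mi.sum()                 # normalized mutual-information attribute weights
print(np.round(weights, 3))
```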

An Information-theoretic Approach for Value-Based Weighting in Naive Bayesian Learning (나이브 베이시안 학습에서 정보이론 기반의 속성값 가중치 계산방법)

  • Lee, Chang-Hwan
    • Journal of KIISE: Databases / v.37 no.6 / pp.285-291 / 2010
  • In this paper, we propose a new paradigm of weighting methods for naive Bayesian learning. We propose a more fine-grained weighting method, called value weighting, in the context of naive Bayesian learning. While current weighting methods assign a weight to each attribute, we assign a weight to each attribute value. We develop new methods, based on the Kullback-Leibler function, for both value weighting and feature weighting in the context of naive Bayesian learning. The performance of the proposed methods has been compared with an attribute weighting method and general naive Bayesian learning, and the proposed method shows better performance in most cases.
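
To make the value-level (rather than attribute-level) granularity concrete, the sketch below assigns each attribute value a weight computed as the Kullback-Leibler divergence between P(C | attribute = value) and the class prior P(C), on a tiny invented dataset; the paper's exact weight definition and its use inside the classifier may differ from this simplification.

```python
# Illustrative value weighting: one KL-based weight per (attribute, value) pair.
import numpy as np

X = np.array([["sunny", "weak"], ["sunny", "strong"],
              ["rain", "weak"], ["overcast", "weak"]])
y = np.array(["no", "no", "yes", "yes"])

classes, counts = np.unique(y, return_counts=True)
prior = counts / counts.sum()                         # class prior P(C)

def kl(p, q):
    """Kullback-Leibler divergence KL(p || q) with clipping for zero probabilities."""
    p = np.clip(p, 1e-12, 1.0)
    q = np.clip(q, 1e-12, 1.0)
    return float(np.sum(p * np.log(p / q)))

value_weights = {}
for j in range(X.shape[1]):
    for v in np.unique(X[:, j]):
        mask = X[:, j] == v
        cond = np.array([np.mean(y[mask] == c) for c in classes])   # P(C | A_j = v)
        value_weights[(j, v)] = kl(cond, prior)

print(value_weights)   # e.g. weights for (attribute 0, "sunny"), (attribute 1, "weak"), ...
```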