• Title/Summary/Keyword: kNN classifier

The Study on the Effective Automatic Classification of Internet Document Using the Machine Learning (기계학습을 기반으로 한 인터넷 학술문서의 효과적 자동분류에 관한 연구)

  • 노영희
    • Journal of Korean Library and Information Science Society / v.32 no.3 / pp.307-330 / 2001
  • This study evaluated the performance of categorization methods using the kNN classifier. Most example-based automatic text categorization techniques, such as the kNN classifier, reduce the feature set of the training documents. We sought to find out which percentage reduction of the feature set yields high performance. In addition, the kNN classifier must find the k training documents most similar to each test document, so we also sought to identify the most appropriate value of k through experiments (see the sketch below).
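
A minimal sketch of the two experimental variables this abstract examines: the percentage of the feature set that is retained, and the value of k. The corpus, the chi-square selector, and both parameter grids below are illustrative assumptions, not the paper's actual setup.

```python
# Sketch: vary the retained feature percentage and k for a kNN text
# classifier. The corpus, grids, and scoring are illustrative assumptions.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectPercentile, chi2
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline

train = fetch_20newsgroups(subset="train", categories=["sci.space", "rec.autos"])

for percentile in (10, 30, 50, 100):        # % of features kept
    for k in (1, 5, 15, 30):                # number of nearest neighbours
        pipe = Pipeline([
            ("tfidf", TfidfVectorizer(stop_words="english")),
            ("select", SelectPercentile(chi2, percentile=percentile)),
            ("knn", KNeighborsClassifier(n_neighbors=k, metric="cosine")),
        ])
        score = cross_val_score(pipe, train.data, train.target, cv=3).mean()
        print(f"features={percentile:3d}%  k={k:2d}  accuracy={score:.3f}")
```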

A Study on the Storage Requirement and Incremental Learning of the k-NN Classifier (K_NN 분류기의 메모리 사용과 점진적 학습에 대한 연구)

  • 이형일; 윤충화
    • The Journal of Information Technology / v.1 no.1 / pp.65-84 / 1998
  • Memory Based Reasoning (MBR) is a supervised learning method that classifies an input pattern using its distances to the stored training patterns, and is also called a distance-based learning algorithm. MBR is based on the k-NN classifier, in which learning is performed by simply storing training patterns in memory without any further processing. This paper proposes a new learning algorithm that is more efficient than the traditional k-NN classifier and has incremental learning capability. Furthermore, the proposed algorithm is insensitive to noisy patterns and guarantees more efficient memory usage (see the sketch below).
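
The abstract does not reproduce the authors' algorithm, so the sketch below uses Hart's classic condensed-nearest-neighbour heuristic as a stand-in: it likewise learns incrementally and stores a pattern only when the current memory misclassifies it, illustrating reduced storage rather than the paper's actual method.

```python
# Sketch of incremental, storage-reducing nearest-neighbour learning.
# This is Hart's condensed-NN heuristic, shown only to illustrate the
# idea of keeping fewer patterns in memory; it is NOT the algorithm
# proposed in the paper above.
import numpy as np

class IncrementalCondensedNN:
    def __init__(self):
        self.X, self.y = [], []          # retained patterns and labels

    def _predict_one(self, x):
        dists = [np.linalg.norm(x - p) for p in self.X]
        return self.y[int(np.argmin(dists))]

    def learn_one(self, x, label):
        # Store the pattern only if the current memory misclassifies it.
        if not self.X or self._predict_one(x) != label:
            self.X.append(np.asarray(x, dtype=float))
            self.y.append(label)

rng = np.random.default_rng(0)
model = IncrementalCondensedNN()
for _ in range(500):
    x = rng.normal(size=2)
    model.learn_one(x, int(x[0] + x[1] > 0))   # toy labelling rule
print(f"stored {len(model.X)} of 500 patterns")
```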

Academic Registration Text Classification Using Machine Learning

  • Alhawas, Mohammed S.; Almurayziq, Tariq S.
    • International Journal of Computer Science & Network Security / v.22 no.1 / pp.93-96 / 2022
  • Natural language processing (NLP) is used to understand natural text. Text analysis systems use natural language algorithms to find the meaning in large amounts of text. Text classification is a basic NLP task with a wide range of applications such as topic labeling, sentiment analysis, spam detection, and intent detection; such algorithms can transform users' unstructured text into more structured data. In this work, a text classifier was developed that takes academic admission and registration texts as input, analyzes their content, and automatically assigns relevant tags such as admission, graduate school, and registration. The well-known support vector machine (SVM) and k-nearest neighbor (kNN) algorithms were used to build the classifier. The results showed that the SVM classifier outperformed the kNN classifier with an overall accuracy of 98.9%. In addition, the mean absolute error of SVM was 0.0064, while it was 0.0098 for the kNN classifier. Based on these results, SVM was used to implement the academic text classification in this work (see the sketch below).
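
A hedged sketch of the SVM-versus-kNN comparison on a placeholder corpus; the admission/registration data, the tag set, and any tuning from the paper are not reproduced here.

```python
# Sketch of an SVM-vs-kNN text classification comparison, reporting the
# same two metrics as the paper (accuracy and MAE) on placeholder data.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score, mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC

data = fetch_20newsgroups(subset="all", categories=["sci.med", "sci.space"])
X = TfidfVectorizer(stop_words="english").fit_transform(data.data)
X_tr, X_te, y_tr, y_te = train_test_split(X, data.target, random_state=0)

for name, clf in [("SVM", LinearSVC()), ("kNN", KNeighborsClassifier(5))]:
    pred = clf.fit(X_tr, y_tr).predict(X_te)
    print(name,
          f"accuracy={accuracy_score(y_te, pred):.3f}",
          f"MAE={mean_absolute_error(y_te, pred):.4f}")
```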

A Study on Feature Selection for kNN Classifier using Document Frequency and Collection Frequency (문헌빈도와 장서빈도를 이용한 kNN 분류기의 자질선정에 관한 연구)

  • Lee, Yong-Gu
    • Journal of Korean Library and Information Science Society / v.44 no.1 / pp.27-47 / 2013
  • This study investigated the classification performance of a kNN classifier using feature selection methods based on document frequency (DF) and collection frequency (CF). The experiments, which used the HKIB-20000 data set, showed the following. First, feature selection that kept high-frequency terms and removed low-frequency terms by the CF criterion achieved better classification performance than selection by the DF criterion. Second, neither the DF nor the CF method performed well when low-frequency terms were selected first. Last, combining the CF and DF criteria did not perform better than using DF or CF alone (see the sketch below).
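
A minimal illustration of the two criteria on a toy corpus: document frequency (DF) counts a term at most once per document, while collection frequency (CF) counts every occurrence in the whole collection.

```python
# Sketch of DF vs CF term counting; the corpus is a toy assumption.
from collections import Counter

docs = [
    "knn classifier text classification",
    "feature selection for text classification",
    "knn knn knn speed",
]

df, cf = Counter(), Counter()
for doc in docs:
    terms = doc.split()
    cf.update(terms)          # every occurrence counts
    df.update(set(terms))     # at most once per document

# Keep high-frequency terms first, the better-performing setting above.
print("top by CF:", [t for t, _ in cf.most_common(3)])  # repeated 'knn' dominates
print("top by DF:", [t for t, _ in df.most_common(3)])
```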

Improving Time Efficiency of kNN Classifier Using Keywords (대표용어를 이용한 kNN 분류기의 처리속도 개선)

  • 이재윤; 유수현
    • Proceedings of the Korean Society for Information Management Conference / 2003.08a / pp.65-72 / 2003
  • The kNN method shows high automatic classification performance but has the drawback of slow processing speed. To overcome this, we proposed kw_kNN, which speeds up automatic classification by selecting w representative terms from the input document and reducing the training set to only those training documents that contain them. Experiments showed that with five representative terms, the number of document-to-document comparisons could be reduced to an average of 18.4% of that of kNN, while performance degradation was minimized: there was no difference in macro-averaged F1, and micro-averaged precision stayed within about 1-2 percentage points of the kNN method (see the sketch below).
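
A small sketch of the kw_kNN idea as described: select w representative terms from the input document and compare it only against training documents containing at least one of them. The tf-idf weighting and toy data below are assumptions.

```python
# Sketch of kw_kNN: restrict the kNN search to training documents that
# share one of the query's w top-weighted terms. Data is made up.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

train_docs = ["knn text classification", "svm kernel method",
              "fast nearest neighbour search", "kernel svm tricks"]
train_labels = ["ir", "ml", "ir", "ml"]
query = "speeding up nearest neighbour text search"

vec = TfidfVectorizer().fit(train_docs)
X = vec.transform(train_docs)
q = vec.transform([query])

w = 2                                             # representative terms
top_terms = np.argsort(q.toarray()[0])[::-1][:w]
mask = np.asarray((X[:, top_terms].sum(axis=1) > 0)).ravel()
candidates = np.flatnonzero(mask)                 # docs sharing a top term

# Plain 1-NN (cosine similarity) over the reduced candidate set only.
sims = (X[candidates] @ q.T).toarray().ravel()
print("predicted:", train_labels[candidates[int(np.argmax(sims))]])
print(f"compared {len(candidates)} of {len(train_docs)} documents")
```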

k-Nearest Neighbor Classifier using Local Values of k (지역적 k값을 사용한 k-Nearest Neighbor Classifier)

  • 이상훈; 오경환
    • Proceedings of the Korean Information Science Society Conference / 2003.10a / pp.193-195 / 2003
  • This paper proposes a new method that optimizes the k-Nearest Neighbor (k-NN) algorithm by using locally different values of k (the number of neighbors to consider). When the distribution of noise in the instance space varies locally, the optimal number of neighbor instances to consider at each point depends on the local noise distribution around that point. Existing methods, however, use a single k for the entire instance space and therefore cannot account for these local characteristics. To address locally varying noise, we propose the Local-k Nearest Neighbor (LkNN) algorithm, which partitions the instance space into several regions and performs kNN with a k value optimized for each region. The set of k values produced by LkNN represents each region of the instance space and determines how many neighbors the instances in that region should consider. The data domains suited to the proposed algorithm and its improved performance were verified through experiments on data from the UCI ML Data Repository (see the sketch below).
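
A sketch of the local-k idea. The abstract does not specify the partitioning scheme, so the k-means clustering and cross-validated per-region k selection below are assumptions.

```python
# Sketch: partition the instance space, pick a k per region, and classify
# new points with the k of their region. Data and partitioner are toys.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))
y = (X[:, 0] + rng.normal(scale=0.5, size=300) > 0).astype(int)  # noisy labels

regions = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
local_k = {}
for r in range(4):
    idx = regions.labels_ == r
    local_k[r] = max((1, 3, 5, 9), key=lambda k: cross_val_score(
        KNeighborsClassifier(n_neighbors=k), X[idx], y[idx], cv=3).mean())
print("k chosen per region:", local_k)

x_new = np.array([[0.2, -1.0]])
r = int(regions.predict(x_new)[0])
pred = KNeighborsClassifier(n_neighbors=local_k[r]).fit(X, y).predict(x_new)
print("region", r, "-> k =", local_k[r], "-> class", pred[0])
```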

Using Text Mining Techniques for Intrusion Detection Problem in Computer Network (텍스트 마이닝 기법을 이용한 컴퓨터 네트워크의 침입 탐지)

  • Oh Seung-Joon; Won Min-Kwon
    • Journal of the Korea Society of Computer and Information / v.10 no.5 s.37 / pp.27-32 / 2005
  • Recently there has been much interest in applying data mining to computer network intrusion detection. A new approach based on the k-Nearest Neighbour (kNN) classifier is used to classify program behaviour as normal or intrusive. Each system call is treated as a word, and the collection of system calls over each program execution as a document. These documents are then classified using the kNN classifier, a popular method in text mining. A simple example illustrates the proposed procedure (see the sketch below).
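
A minimal sketch of the representation the abstract describes: system calls as words and each program execution as a document, classified with a kNN text classifier. The traces and labels below are made up.

```python
# Sketch: treat system-call traces as documents and classify with kNN.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier

traces = [
    "open read read write close",        # normal
    "open read write close",             # normal
    "open exec exec chmod setuid",       # intrusive
    "exec chmod setuid exec",            # intrusive
]
labels = ["normal", "normal", "intrusive", "intrusive"]

vec = TfidfVectorizer()                  # system calls tokenised as words
X = vec.fit_transform(traces)
knn = KNeighborsClassifier(n_neighbors=1, metric="cosine").fit(X, labels)

new_trace = ["open exec setuid chmod exec"]
print(knn.predict(vec.transform(new_trace))[0])   # -> intrusive
```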

Optimal k-Nearest Neighborhood Classifier Using Genetic Algorithm (유전알고리즘을 이용한 최적 k-최근접이웃 분류기)

  • Park, Chong-Sun; Huh, Kyun
    • Communications for Statistical Applications and Methods / v.17 no.1 / pp.17-27 / 2010
  • Feature selection and feature weighting are useful techniques for improving the classification accuracy of the k-Nearest Neighbor (k-NN) classifier. The main purpose of feature selection and feature weighting is to reduce the number of features by eliminating irrelevant and redundant ones, while maintaining or enhancing classification accuracy. In this paper, a novel hybrid approach based on a genetic algorithm is proposed for simultaneous feature selection, feature weighting, and choice of k in the k-NN classifier. The results indicate that the proposed algorithm is comparable with, and often superior to, existing classifiers with or without feature selection and feature weighting capability (see the sketch below).
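
A compact evolutionary sketch of the chromosome the abstract describes: per-feature weights (zero disables a feature) plus a choice of k, with cross-validated kNN accuracy as fitness. This simplified loop keeps the better half of the population and only mutates, omitting crossover, so it is an illustration rather than the paper's GA; the wine data set is a stand-in.

```python
# Simplified evolutionary search over (feature weights, k) for kNN.
import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

X, y = load_wine(return_X_y=True)
X = StandardScaler().fit_transform(X)
rng = np.random.default_rng(0)
k_choices = (1, 3, 5, 7, 9)

def fitness(ch):
    w, k = ch[:-1], k_choices[int(ch[-1])]
    if (w > 0).sum() == 0:
        return 0.0
    knn = KNeighborsClassifier(n_neighbors=k)
    return cross_val_score(knn, X * w, y, cv=3).mean()

# Chromosome: one weight per feature plus an index into k_choices.
pop = [np.append(rng.random(X.shape[1]), rng.integers(len(k_choices)))
       for _ in range(20)]
for gen in range(15):
    parents = sorted(pop, key=fitness, reverse=True)[:10]
    children = []
    for p in parents:
        child = p.copy()
        i = rng.integers(len(child) - 1)
        child[i] = max(0.0, child[i] + rng.normal(scale=0.3))  # mutate weight
        if rng.random() < 0.2:
            child[-1] = rng.integers(len(k_choices))           # mutate k
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print("best accuracy:", round(fitness(best), 3),
      "k =", k_choices[int(best[-1])],
      "features kept:", int((best[:-1] > 0).sum()))
```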

A Study on Data Classification of Raman OIM Hyperspectral Bone Data

  • Jung, Sung-Hwan
    • Journal of Korea Multimedia Society / v.14 no.8 / pp.1010-1019 / 2011
  • This was preliminary research toward understanding the relationship between the internal structure of Osteogenesis Imperfecta Murine (OIM) bone and its fragility. 54 hyperspectral bone data sets were captured from 9 OIM bones using a JASCO 2000 Raman spectrometer at UMKC-CRISP (University of Missouri-Kansas City Center for Research on Interfacial Structure and Properties); each data set consists of 1,091 data points. The captured hyperspectral data were noisy and baseline-distorted, so we removed the noise and corrected the baseline before classification. The high-dimensional Raman hyperspectral data on OIM bones were reduced by Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) and efficiently classified for the first time. We confirmed that OIM bones could be classified as strong, middle, or weak using their PCA or LDA coefficients. Through experiments, we investigated the classification efficiency of the Bayesian classifier and the K-Nearest Neighbor (K-NN) classifier on the reduced OIM bone data. LDA reduction showed higher classification performance than PCA reduction with both classifiers, and the K-NN classifier achieved a better classification rate than the Bayesian classifier, about 92.6% in the LDA case (see the sketch below).
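
A sketch of the reduction-then-classify comparison on a generic three-class data set standing in for the strong/middle/weak OIM classes; the Raman spectra themselves are not available here.

```python
# Sketch: PCA vs LDA reduction followed by Bayesian and K-NN classifiers.
from sklearn.datasets import load_wine
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_wine(return_X_y=True)   # 3 classes as a stand-in data set

for red_name, red in [("PCA", PCA(n_components=2)),
                      ("LDA", LinearDiscriminantAnalysis(n_components=2))]:
    for clf_name, clf in [("Bayes", GaussianNB()),
                          ("K-NN", KNeighborsClassifier(n_neighbors=5))]:
        pipe = make_pipeline(StandardScaler(), red, clf)
        acc = cross_val_score(pipe, X, y, cv=5).mean()
        print(f"{red_name} + {clf_name}: {acc:.3f}")
```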

A Study on Statistical Feature Selection with Supervised Learning for Word Sense Disambiguation (단어 중의성 해소를 위한 지도학습 방법의 통계적 자질선정에 관한 연구)

  • Lee, Yong-Gu
    • Journal of the Korean BIBLIA Society for library and Information Science / v.22 no.2 / pp.5-25 / 2011
  • This study aims to identify the most effective statistical feature selection method and context window size for word sense disambiguation with supervised methods. Features were selected by four different methods: information gain, document frequency, chi-square, and relevancy. The weight comparison showed that selecting the most appropriate features can improve word sense disambiguation performance, and information gain performed best. The SVM classifier was not sensitive to feature selection and performed better with a larger feature set and context size. The Naive Bayes classifier performed best at a feature set size of 10 percent, and the kNN classifier at under 10 percent. When feature selection is applied to word sense disambiguation, either a small feature set with a larger context window, or a large feature set with a small context window, yields the best performance improvements (see the sketch below).
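
A sketch of the information-gain criterion, the best performer above, scored per feature as the mutual information between a binary feature and the sense label, which is the standard information-gain formulation in text categorization; the context features and senses below are toy assumptions.

```python
# Sketch: rank context terms for word sense disambiguation by information
# gain (mutual information between term presence and the sense label).
from sklearn.metrics import mutual_info_score

# toy context features (is the term in the window?) and sense labels
samples = [({"bank", "river"}, "shore"), ({"bank", "money"}, "finance"),
           ({"money", "loan"}, "finance"), ({"river", "water"}, "shore")]
vocab = {"river", "money", "bank", "loan", "water"}

scores = {}
for term in vocab:
    present = [int(term in feats) for feats, _ in samples]
    senses = [sense for _, sense in samples]
    scores[term] = mutual_info_score(present, senses)

for term, ig in sorted(scores.items(), key=lambda t: -t[1]):
    print(f"{term:6s} IG={ig:.3f}")
```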