• Title/Summary/Keyword: k-NN classification


A Study on Feature Selection for kNN Classifier using Document Frequency and Collection Frequency (문헌빈도와 장서빈도를 이용한 kNN 분류기의 자질선정에 관한 연구)

  • Lee, Yong-Gu
    • Journal of Korean Library and Information Science Society / v.44 no.1 / pp.27-47 / 2013
  • This study investigated the classification performance of a kNN classifier using the feature selection methods based on document frequency(DF) and collection frequency(CF). The results of the experiments, which used HKIB-20000 data, were as follows. First, the feature selection methods that used high-frequency terms and removed low-frequency terms by the CF criterion achieved better classification performance than those using the DF criterion. Second, neither DF nor CF methods performed well when low-frequency terms were selected first in the feature selection process. Last, combining CF and DF criteria did not result in better classification performance than using the single feature selection criterion of DF or CF.
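
A minimal sketch of the document-frequency/collection-frequency idea above, assuming a toy corpus and scikit-learn (the paper used HKIB-20000 and its own thresholds): DF counts the documents containing a term, CF counts the term's total occurrences, and the top-ranked terms by CF are kept as features before training a kNN classifier.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neighbors import KNeighborsClassifier

# Toy corpus and labels (illustrative only; the paper used HKIB-20000).
docs = ["market stocks rise", "stocks fall on market news",
        "team wins final match", "match ends in draw"]
labels = ["economy", "economy", "sports", "sports"]

vec = CountVectorizer().fit(docs)
counts = vec.transform(docs).toarray()

df = (counts > 0).sum(axis=0)   # document frequency: number of docs containing each term
cf = counts.sum(axis=0)         # collection frequency: total occurrences of each term

# Keep the top-k terms by CF (high-frequency terms first), the setting the abstract
# reports as performing better than DF-based selection.
k = 5
keep = np.argsort(cf)[::-1][:k]

knn = KNeighborsClassifier(n_neighbors=1).fit(counts[:, keep], labels)
print(knn.predict(vec.transform(["stocks rise again"]).toarray()[:, keep]))
```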

An Improved Text Classification Method for Sentiment Classification

  • Wang, Guangxing;Shin, Seong Yoon
    • Journal of information and communication convergence engineering / v.17 no.1 / pp.41-48 / 2019
  • In recent years, sentiment analysis research has become popular. Its results have achieved remarkable success in practical applications, such as Amazon's book recommendation system and the North American movie box office evaluation system. Analyzing big data based on user preferences and evaluations, and recommending hot-selling books and highly rated movies to users in a targeted manner, greatly improves book sales and movie attendance [1, 2]. However, traditional machine learning-based sentiment analysis methods such as the Classification and Regression Tree (CART), Support Vector Machine (SVM), and k-nearest neighbor (kNN) classification have performed poorly in terms of accuracy. In this paper, an improved kNN classification method is proposed. Through the improved method and normalization of the data, improved accuracy is achieved. Subsequently, the three baseline classification algorithms and the improved algorithm were compared on experimental data. Experiments show that the improved kNN method performs best, with an accuracy rate of 11.5% and a precision rate of 20.3%.
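
The abstract does not spell out the exact modification, so the sketch below only illustrates the normalization step it mentions, with toy sentiment features and scikit-learn names as assumptions: without scaling, the large "review length" feature would dominate the Euclidean distance used by kNN.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import MinMaxScaler

# Toy sentiment features, e.g. [positive-word count, negative-word count, review length].
# Values and labels are invented; the paper's data set and exact modification are not shown here.
X = np.array([[12, 1, 300], [10, 2, 250], [1, 9, 400], [0, 11, 350]], dtype=float)
y = np.array([1, 1, 0, 0])  # 1 = positive, 0 = negative

# Normalizing removes the dominance of the large "length" feature in the distance computation.
scaler = MinMaxScaler().fit(X)
knn = KNeighborsClassifier(n_neighbors=3).fit(scaler.transform(X), y)

print(knn.predict(scaler.transform([[8, 3, 280]])))
```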

An Empirical Study on Improving the Performance of Text Categorization Considering the Relationships between Feature Selection Criteria and Weighting Methods (자질 선정 기준과 가중치 할당 방식간의 관계를 고려한 문서 자동분류의 개선에 대한 연구)

  • Lee Jae-Yun
    • Journal of the Korean Society for Library and Information Science / v.39 no.2 / pp.123-146 / 2005
  • This study aims to find consistent strategies for feature selection and feature weighting methods that can improve the effectiveness and efficiency of a kNN text classifier. Feature selection criteria and feature weighting methods are as important as the classification algorithm for achieving good performance in text categorization systems. Most former studies chose conflicting strategies for feature selection criteria and weighting methods. In this study, the performance of several feature selection criteria is measured while considering the storage space for inverted index records and the classification time. The classification experiments examine the performance of IDF as a feature selection criterion and the performance of conventional feature selection criteria, e.g. mutual information, as feature weighting methods. The results suggest that by using measures which prefer low-frequency features as both the feature selection criterion and the feature weighting method, we can increase classification speed three to five times without losing classification accuracy.
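
A hedged sketch of the "same measure for selection and weighting" strategy on a toy corpus, using IDF via scikit-learn's TfidfVectorizer as an assumed stand-in for the study's measures: documents are represented with TF-IDF weights, and the features kept are the ones the IDF component itself ranks highest.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier

# Toy corpus; the study's collection, thresholds, and exact measures are not reproduced here.
docs = ["loan interest bank rates", "bank announces new loan rates",
        "player scores winning goal", "goal keeper saves penalty"]
labels = ["finance", "finance", "sports", "sports"]

vec = TfidfVectorizer()           # IDF-style weighting prefers low-frequency terms
X = vec.fit_transform(docs)
idf = vec.idf_                    # reuse the weighting measure as the selection criterion

keep = np.argsort(idf)[::-1][:6]  # keep the terms the weighting scheme itself ranks highest
knn = KNeighborsClassifier(n_neighbors=1, metric="cosine")
knn.fit(X[:, keep], labels)

print(knn.predict(vec.transform(["bank cuts loan rates"])[:, keep]))
```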

OHC Algorithm for RPA Memory Based Reasoning (RPA분류기의 성능 향상을 위한 OHC알고리즘)

  • 이형일
    • Journal of Korea Multimedia Society / v.6 no.5 / pp.824-830 / 2003
  • The RPA (Recursive Partition Averaging) method was proposed to improve the storage requirement and classification rate of Memory Based Reasoning. The algorithm works well in many domains; however, its major drawback is its pattern averaging mechanism. We propose an adaptive OHC algorithm which uses the FPD (Feature-based Population Densimeter) to increase the classification rate of RPA. The proposed algorithm requires only about 40% of the memory space needed by a k-NN classifier and shows classification performance superior to RPA. Also, by reducing the number of stored patterns, it shows excellent classification results compared to k-NN.
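
The sketch below is not the RPA or OHC algorithm itself; it only illustrates, under toy assumptions, the general memory-based-reasoning idea of replacing raw stored patterns with class-wise averaged prototypes so that far fewer patterns need to be kept for nearest-neighbor classification.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Toy two-class data; NOT the RPA/OHC method, only the memory-reduction concept.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

def class_prototypes(X, y, per_class=4):
    """Average partitions of each class into a handful of prototype patterns."""
    protos, labels = [], []
    for c in np.unique(y):
        Xc = rng.permutation(X[y == c])
        for chunk in np.array_split(Xc, per_class):
            protos.append(chunk.mean(axis=0))
            labels.append(c)
    return np.array(protos), np.array(labels)

P, Py = class_prototypes(X, y)           # 8 stored patterns instead of 100
knn = KNeighborsClassifier(n_neighbors=1).fit(P, Py)
print(knn.score(X, y))                   # classification rate on the original patterns
```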


Load Fidelity Improvement of Piecewise Integrated Composite Beam by Construction Training Data of k-NN Classification Model (k-NN 분류 모델의 학습 데이터 구성에 따른 PIC 보의 하중 충실도 향상에 관한 연구)

  • Ham, Seok Woo;Cheon, Seong S.
    • Composites Research / v.33 no.3 / pp.108-114 / 2020
  • A Piecewise Integrated Composite (PIC) beam is composed of different stacking sequences against loading types depending upon location. The aim of the current study is to assign robust stacking sequences against external loading to every corresponding part of the PIC beam, based on the value of stress triaxiality at generated reference points, using k-NN (k-Nearest Neighbor) classification, one of the representative machine learning techniques, in order to achieve superior bending characteristics. The stress triaxiality at the reference points is obtained by three-point bending analysis of an Al beam, with the training data categorizing the type of external loading, i.e., tension, compression, or shear. Loading types of each plane of the beam were classified by an independent plane scheme as well as a total beam scheme. Loading fidelities were also calibrated for each case by varying the hyperparameters. The most effective stacking sequences were mapped onto the PIC beam based on the k-NN classification model with the highest loading fidelity. FE analysis results show that the PIC beam has superior external loading resistance and energy absorption compared to a conventional beam.
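
As a rough illustration of the classification step, the sketch below trains a kNN model on synthetic stress triaxiality values (mean stress over equivalent stress, nominally about +1/3 for uniaxial tension, 0 for pure shear, and -1/3 for uniaxial compression). The noise level, sample counts, and k are assumptions; the paper obtained its triaxiality values from FE three-point bending analysis rather than synthetic data.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Nominal stress triaxiality per loading type: about +1/3 for uniaxial tension,
# 0 for pure shear, -1/3 for uniaxial compression. Training values are synthetic.
rng = np.random.default_rng(1)
nominal = {"tension": 1 / 3, "shear": 0.0, "compression": -1 / 3}

X_train, y_train = [], []
for label, eta in nominal.items():
    X_train.extend(eta + rng.normal(0, 0.05, 30))   # scattered reference-point values
    y_train.extend([label] * 30)

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(np.array(X_train).reshape(-1, 1), y_train)

# Classify the triaxiality computed at a few reference points of the beam.
print(knn.predict(np.array([[0.30], [-0.02], [-0.28]])))
```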

Performance Comparison of Classification Algorithms in Music Recognition using Violin and Cello Sound Files (바이올린과 첼로 연주 데이터를 이용한 분류 알고리즘의 성능 비교)

  • Kim Jae Chun;Kwak Kyung sup
    • The Journal of Korean Institute of Communications and Information Sciences / v.30 no.5C / pp.305-312 / 2005
  • Three classification algorithms are tested using musical instrument sounds. Several classification algorithms are introduced, and among them the performances of the Bayes rule, NN, and k-NN are evaluated. ZCR, mean, variance, and average peak level feature vectors are extracted from instrument sample files and used as the data set for the classification system. The musical instruments used are the violin, baroque violin, and baroque cello. The experimental results show that the NN algorithm outperforms the other algorithms in musical instrument classification.
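
A minimal sketch of the feature-extraction-plus-classification pipeline described above, assuming synthetic sine-wave "instrument" frames instead of real violin/cello recordings: ZCR, mean, variance, and average peak level are computed per frame and fed to a kNN classifier (NN being the k = 1 special case).

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def features(signal):
    """ZCR, mean, variance, and average peak level of one audio frame."""
    zcr = np.mean(np.abs(np.diff(np.sign(signal))) > 0)          # fraction of sign changes
    peaks = np.abs(signal)[np.abs(signal) > 0.8 * np.max(np.abs(signal))]
    return [zcr, np.mean(signal), np.var(signal), np.mean(peaks)]

# Synthetic stand-ins for violin/cello frames (the study used real recordings).
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 8000)
frames = ([np.sin(2 * np.pi * 440 * t) + 0.05 * rng.normal(size=t.size) for _ in range(5)]
          + [np.sin(2 * np.pi * 130 * t) + 0.05 * rng.normal(size=t.size) for _ in range(5)])
labels = ["violin"] * 5 + ["cello"] * 5

X = np.array([features(f) for f in frames])
knn = KNeighborsClassifier(n_neighbors=3).fit(X, labels)   # NN is the k = 1 special case
print(knn.predict([features(np.sin(2 * np.pi * 450 * t))]))
```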

Academic Registration Text Classification Using Machine Learning

  • Alhawas, Mohammed S;Almurayziq, Tariq S
    • International Journal of Computer Science & Network Security / v.22 no.1 / pp.93-96 / 2022
  • Natural language processing (NLP) is used to understand natural text. Text analysis systems use natural language algorithms to find the meaning of large amounts of text. Text classification is a basic NLP task with a wide range of applications such as topic labeling, sentiment analysis, spam detection, and intent detection. Such algorithms can transform a user's unstructured thoughts into more structured data. In this work, a text classifier has been developed that uses academic admission and registration texts as input, analyzes their content, and then automatically assigns relevant tags such as admission, graduate school, and registration. The well-known support vector machine (SVM) and k-nearest neighbor (kNN) algorithms are used to develop this classifier. The obtained results showed that the SVM classifier outperformed the kNN classifier with an overall accuracy of 98.9%. In addition, the mean absolute error of SVM was 0.0064, while it was 0.0098 for the kNN classifier. Based on these results, SVM is used to implement the academic text classification in this work.
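
A small sketch of the SVM-versus-kNN comparison on registration-style text, assuming tiny made-up sentences, TF-IDF features, and resubstitution scoring to keep it short; the real data set, preprocessing, and evaluation protocol are not reproduced.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score, mean_absolute_error

# Invented registration-style texts; the paper's data set is not public here.
texts = ["apply for admission to graduate school", "admission requirements and deadlines",
         "register for spring courses", "course registration opens next week",
         "graduate school thesis submission", "drop and add registration period"]
labels = [0, 0, 1, 1, 0, 1]          # 0 = admission/graduate school, 1 = registration

X = TfidfVectorizer().fit_transform(texts)

for name, clf in [("SVM", SVC(kernel="linear")), ("kNN", KNeighborsClassifier(n_neighbors=3))]:
    pred = clf.fit(X, labels).predict(X)   # resubstitution only, to keep the sketch short
    print(name, accuracy_score(labels, pred), mean_absolute_error(labels, pred))
```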

An Improved Text Classification (향상된 텍스트 분류)

  • Wang, Guangxing;Shin, Seong-Yoon;Shin, Kwang-Weong;Lee, Hyun-Chang
    • Proceedings of the Korean Society of Computer Information Conference / 2019.01a / pp.125-126 / 2019
  • In this paper, we propose an improved kNN classification method. Through the improved method and normalization of the data, improved accuracy is achieved. We then compare the three baseline classification algorithms with the improved algorithm on experimental data.


A Study on Book Categorization in Social Sciences Using kNN Classifiers and Table of Contents Text (목차 정보와 kNN 분류기를 이용한 사회과학 분야 도서 자동 분류에 관한 연구)

  • Lee, Yong-Gu
    • Journal of the Korean Society for Information Management / v.37 no.1 / pp.1-21 / 2020
  • This study applied automatic classification using table of contents (TOC) text for 6,253 social science books from a newly arrived list collected by a university library. The k-nearest neighbors (kNN) algorithm was used as the classifier, and the ten divisions on the second level of the DDC's main class 300 given to the books by the library were used as classes (labels). The features used in this study were keywords extracted from the titles and TOCs of the books. The TOCs were obtained through the OpenAPI of an Internet bookstore. As a result, it was found that TOC features improved both classification recall and precision. The TOC was shown to reduce the overfitting problem of imbalanced data thanks to its rich features. Law and education have high topic specificity within the social sciences, so title features alone can deliver good classification performance in these fields.
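
A minimal sketch of the title-plus-TOC feature idea with a kNN classifier and DDC-style labels; the book records below are invented stand-ins, and the study's 6,253 books, OpenAPI harvesting, and evaluation setup are not reproduced.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier

# Invented records: (title, table-of-contents keywords, DDC 300-division label).
books = [
    ("Introduction to Criminal Law", "crimes procedure courts evidence sentencing", "340"),
    ("Principles of Constitutional Law", "rights statutes judicial review courts", "340"),
    ("Curriculum and Instruction", "lesson planning assessment pedagogy classroom", "370"),
    ("Theories of Learning", "teaching motivation classroom assessment", "370"),
]

texts = [f"{title} {toc}" for title, toc, _ in books]   # title and TOC keywords combined
labels = [ddc for _, _, ddc in books]

vec = TfidfVectorizer()
knn = KNeighborsClassifier(n_neighbors=1, metric="cosine")
knn.fit(vec.fit_transform(texts), labels)

# A new arrival with only title and TOC text available.
new_book = "Educational Psychology learning motivation classroom development"
print(knn.predict(vec.transform([new_book])))   # expected to land in 370 (education)
```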

A Study on Data Classification of Raman OIM Hyperspectral Bone Data

  • Jung, Sung-Hwan
    • Journal of Korea Multimedia Society / v.14 no.8 / pp.1010-1019 / 2011
  • This was preliminary research toward understanding the relationship between the internal structure of Osteogenesis Imperfecta Murine (OIM) bone and its fragility. 54 hyperspectral bone data sets were captured using a JASCO 2000 Raman spectrometer at UMKC-CRISP (University of Missouri-Kansas City Center for Research on Interfacial Structure and Properties). Each data set consists of 1,091 data points from 9 OIM bones. The originally captured hyperspectral data sets were noisy and contained baseline drift, so we removed the noise and corrected the baseline for efficient final classification. The high-dimensional Raman hyperspectral data on OIM bones was reduced by Principal Components Analysis (PCA) and Linear Discriminant Analysis (LDA) and efficiently classified for the first time. We confirmed that OIM bones could be classified as strong, middle, or weak using their PCA or LDA coefficients. Through experiments, we investigated the classification efficiency of the Bayesian classifier and the K-Nearest Neighbor (K-NN) classifier on the reduced OIM bone data. In the experimental results, LDA reduction showed higher classification performance than PCA reduction for both classifiers, and the K-NN classifier achieved a better classification rate than the Bayesian classifier. The classification performance of K-NN with LDA was about 92.6%.
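
A hedged sketch of the reduce-then-classify comparison described above, assuming synthetic high-dimensional data in place of the Raman spectra: PCA and LDA each reduce the dimensionality, and a Bayesian classifier (Gaussian naive Bayes here, as an assumption) and a K-NN classifier are scored on the reduced data.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for 54 high-dimensional spectra in 3 classes (strong/middle/weak).
X, y = make_classification(n_samples=54, n_features=200, n_informative=10,
                           n_classes=3, n_clusters_per_class=1, random_state=0)

reducers = {"PCA": PCA(n_components=5), "LDA": LinearDiscriminantAnalysis(n_components=2)}
classifiers = {"Bayes": GaussianNB(), "K-NN": KNeighborsClassifier(n_neighbors=3)}

for r_name, reducer in reducers.items():
    # Reduction fitted on all data for brevity; a real evaluation would fit it inside each fold.
    Z = reducer.fit_transform(X, y)          # PCA ignores y; LDA uses the class labels
    for c_name, clf in classifiers.items():
        score = cross_val_score(clf, Z, y, cv=3).mean()
        print(f"{r_name} + {c_name}: {score:.3f}")
```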