• Title/Summary/Keyword: k-NN Method

Feature Selection for Multiple K-Nearest Neighbor classifiers using GAVaPS (GAVaPS를 이용한 다수 K-Nearest Neighbor classifier들의 Feature 선택)

  • Lee, Hee-Sung;Lee, Jae-Hun;Kim, Eun-Tai
    • Journal of the Korean Institute of Intelligent Systems / v.18 no.6 / pp.871-875 / 2008
  • This paper deals with feature selection for multiple k-nearest neighbor (k-NN) classifiers using the Genetic Algorithm with Varying Population Size (GAVaPS). Because multiple k-NN classifiers are used, their feature selection problem is very hard and has a large search space. To solve this problem, we employ GAVaPS, which outperforms the simple genetic algorithm (SGA). Furthermore, we propose an efficient method for combining multiple k-NN classifiers using GAVaPS. Experiments are performed to demonstrate the efficiency of the proposed method.
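
GAVaPS itself is not spelled out in the abstract, so the following is only a minimal sketch of genetic-algorithm feature selection for a k-NN classifier: binary masks as chromosomes, cross-validated accuracy as fitness, and, for brevity, a fixed-size population (GAVaPS would instead vary the population size through individual lifetimes). The `load_wine` dataset and all parameter values are stand-ins.

```python
# Sketch: GA-based feature selection for a k-NN classifier (fixed population,
# a simplification of GAVaPS, which varies population size via individual ages).
import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = load_wine(return_X_y=True)
n_features = X.shape[1]

def fitness(mask):
    if not mask.any():                 # an empty feature set is invalid
        return 0.0
    knn = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(knn, X[:, mask], y, cv=3).mean()

pop = rng.integers(0, 2, size=(20, n_features)).astype(bool)
for generation in range(15):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]   # truncation selection
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, n_features)          # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(n_features) < 0.05       # bit-flip mutation
        children.append(child ^ flip)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best))
```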

Density Adaptive Grid-based k-Nearest Neighbor Regression Model for Large Dataset (대용량 자료에 대한 밀도 적응 격자 기반의 k-NN 회귀 모형)

  • Liu, Yiqi;Uk, Jung
    • Journal of Korean Society for Quality Management / v.49 no.2 / pp.201-211 / 2021
  • Purpose: This paper proposes a density adaptive grid algorithm for the k-NN regression model to reduce computation time for large datasets without significant loss of prediction accuracy. Methods: The proposed method uses a grid with centroids to reduce the number of reference data points, so the required computation time is much reduced. Since the grid generation process is based on quantiles of the original variables, the proposed method fully reflects the density information of the original reference data set. Results: Using five real-life datasets, the proposed k-NN regression model is compared with the original k-NN regression model. The results show that the proposed density adaptive grid-based k-NN regression model is superior to the original k-NN regression in terms of data reduction ratio and time efficiency ratio, and provides similar prediction error if an appropriate number of grid cells is selected. Conclusion: The proposed density adaptive grid algorithm for the k-NN regression model is a simple and effective model that helps avoid a large loss of prediction accuracy, with faster execution speed and smaller memory requirements during the testing phase.
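
A minimal sketch of the quantile-grid reduction idea described above, assuming the grid is built by binning each variable at its own quantiles and replacing each occupied cell with its centroid (mean of X and y); the paper's exact grid construction and centroid definition may differ.

```python
# Sketch: quantile-grid data reduction for k-NN regression. Dense regions get
# fine cells because each variable is cut at its own quantiles.
import numpy as np
import pandas as pd
from sklearn.neighbors import KNeighborsRegressor

def quantile_grid_reduce(X, y, n_bins=10):
    codes = np.column_stack([
        pd.qcut(X[:, j], q=n_bins, labels=False, duplicates="drop")
        for j in range(X.shape[1])
    ])
    df = pd.DataFrame(X)
    df["_y"] = y
    df["_cell"] = [tuple(row) for row in codes]
    centroids = df.groupby("_cell").mean()    # one centroid per occupied cell
    return centroids.drop(columns="_y").to_numpy(), centroids["_y"].to_numpy()

rng = np.random.default_rng(1)
X = rng.normal(size=(20000, 3))
y = X.sum(axis=1) + rng.normal(scale=0.1, size=20000)

Xr, yr = quantile_grid_reduce(X, y, n_bins=8)
model = KNeighborsRegressor(n_neighbors=5).fit(Xr, yr)   # fit on reduced set
print(f"reference points: {len(X)} -> {len(Xr)}")
```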

A Study on the Storage Requirement and Incremental Learning of the k-NN Classifier (K_NN 분류기의 메모리 사용과 점진적 학습에 대한 연구)

  • 이형일;윤충화
    • The Journal of Information Technology / v.1 no.1 / pp.65-84 / 1998
  • Memory Based Reasoning (MBR) is a supervised learning method that classifies by using the distances between the input pattern and the stored training patterns, and is also called a distance-based learning algorithm. MBR is based on the k-NN classifier, in which learning is performed by simply storing training patterns in memory without any further processing. This paper proposes a new learning algorithm that is more efficient than the traditional k-NN classifier and has incremental learning capability. Furthermore, the proposed algorithm is insensitive to noisy patterns and guarantees more efficient memory usage.
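
The paper's algorithm is not detailed in the abstract; the sketch below shows one selective-storage rule in the same spirit (store a pattern only when the current memory misclassifies it, as in IB2/condensed nearest neighbor), which yields incremental learning with reduced memory. It is not necessarily the paper's exact method.

```python
# Sketch: incremental, memory-saving k-NN. New patterns are stored only when
# the current memory misclassifies them (an IB2/condensed-NN style rule).
import numpy as np

class IncrementalKNN:
    def __init__(self, k=3):
        self.k, self.X, self.y = k, [], []

    def predict_one(self, x):
        if not self.X:
            return None
        d = np.linalg.norm(np.asarray(self.X) - x, axis=1)
        nearest = np.argsort(d)[: self.k]
        labels = np.asarray(self.y)[nearest]
        return np.bincount(labels).argmax()         # majority vote

    def learn_one(self, x, label):
        if self.predict_one(x) != label:            # keep only informative patterns
            self.X.append(x)
            self.y.append(label)

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(4, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)
clf = IncrementalKNN(k=3)
for xi, yi in zip(X, y):
    clf.learn_one(xi, yi)                           # patterns arrive one at a time
print(f"stored {len(clf.X)} of {len(X)} training patterns")
```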

Comparison of EEG Topography Labeling and Annotation Labeling Techniques for EEG-based Emotion Recognition (EEG 기반 감정인식을 위한 주석 레이블링과 EEG Topography 레이블링 기법의 비교 고찰)

  • Ryu, Je-Woo;Hwang, Woo-Hyun;Kim, Deok-Hwan
    • The Journal of Korean Institute of Next Generation Computing / v.15 no.3 / pp.16-24 / 2019
  • Recently, research on EEG-based emotion recognition has attracted great interest in the human-robot interaction field. In this paper, we propose a labeling method that uses image-based EEG topography instead of the self-assessment and annotation labeling methods used in MAHNOB-HCI. The proposed method evaluates emotion with a machine learning model trained on EEG signals transformed into topographical images. In experiments on the MAHNOB-HCI database, we compared the performance of SVM and kNN models trained with EEG topography labeling. The accuracy of the proposed method was 54.2% with SVM and 57.7% with kNN.
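
A minimal sketch of the comparison the abstract describes, SVM versus kNN on flattened topography images. The MAHNOB-HCI topographies are not available here, so random arrays stand in purely to show the shape of the pipeline.

```python
# Sketch: comparing SVM and kNN on flattened topography images.
# Placeholder random data; real input would be EEG topography maps.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)
topographies = rng.random((300, 32, 32))         # placeholder 32x32 topo maps
labels = rng.integers(0, 2, 300)                 # placeholder emotion labels
X = topographies.reshape(len(topographies), -1)  # flatten images to vectors

for name, model in [("SVM", SVC(kernel="rbf")),
                    ("kNN", KNeighborsClassifier(n_neighbors=5))]:
    pipe = make_pipeline(StandardScaler(), model)
    acc = cross_val_score(pipe, X, labels, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```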

An Improvement in K-NN Graph Construction using re-grouping with Locality Sensitive Hashing on MapReduce (MapReduce 환경에서 재그룹핑을 이용한 Locality Sensitive Hashing 기반의 K-Nearest Neighbor 그래프 생성 알고리즘의 개선)

  • Lee, Inhoe;Oh, Hyesung;Kim, Hyoung-Joo
    • KIISE Transactions on Computing Practices / v.21 no.11 / pp.681-688 / 2015
  • k-nearest neighbor (k-NN) graph construction is an important operation with many web-related applications, including collaborative filtering, similarity search, and many others in data mining and machine learning. Despite its many elegant properties, brute-force k-NN graph construction has a computational complexity of $O(n^2)$, which is prohibitive for large-scale data sets. Thus, the (key, value)-based distributed framework MapReduce is increasingly used together with Locality Sensitive Hashing (LSH), which is efficient for high-dimensional, sparse data. Following this two-stage strategy, we use LSH to divide users into small subsets and then compute pairwise similarities within each subset by brute force on MapReduce. The candidate-group generation stage is critical, since brute-force calculation is performed in the following step, yet existing methods do not prevent large candidate groups. In this paper, we propose an efficient algorithm for approximate k-NN graph construction that regroups candidate groups. Experimental results show that our approach is more effective than existing methods in terms of graph accuracy and scan rate.
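
A single-machine sketch of the two-stage idea, with the map stage (group points by random-hyperplane LSH signature) and the reduce stage (brute-force k-NN inside each bucket) written as plain loops; the paper's actual MapReduce jobs and its regrouping of oversized candidate groups are not reproduced.

```python
# Sketch: LSH-bucketed approximate k-NN graph construction.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(4)
X = rng.normal(size=(5000, 16))
k, n_planes = 5, 8

planes = rng.normal(size=(n_planes, X.shape[1]))
signs = (X @ planes.T) > 0                       # bit signature per point
buckets = defaultdict(list)
for i, bits in enumerate(signs):
    buckets[bits.tobytes()].append(i)            # "map": group by signature

graph = {}
for members in buckets.values():                 # "reduce": per-bucket brute force
    pts = X[members]
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                  # exclude self-matches
    for row, i in enumerate(members):
        order = np.argsort(d[row])[: min(k, len(members) - 1)]
        graph[i] = [members[j] for j in order]

print("neighbors of point 0:", graph.get(0))
```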

Reverse k-Nearest Neighbor Query Processing Method for Continuous Query Processing in Bigdata Environments (빅데이터 환경에서 연속 질의 처리를 위한 리버스 k-최근접 질의 처리 기법)

  • Lim, Jongtae;Park, Sunyong;Seo, Kiwon;Lee, Minho;Bok, Kyoungsoo;Yoo, Jaesoo
    • The Journal of the Korea Contents Association / v.14 no.10 / pp.454-462 / 2014
  • With the development of location-aware technologies and mobile devices, location-based services have been widely studied. To provide such services, many researchers have proposed methods for processing various query types with MapReduce (MR). One of these is a reverse k-nearest neighbor (RkNN) query processing method with MR. However, the existing methods incur too much cost when processing continuous RkNN queries. In this paper, we propose an efficient continuous RkNN query processing method with MR that resolves the problems of the existing methods. The proposed method uses 60-degree pruning and does not need to reprocess the query during continuous query processing, because it constructs and monitors a monitoring area containing the candidate objects of an RkNN query. To show the superiority of the proposed method, we compare its query processing performance with that of the existing method.
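
A minimal sketch of 60-degree pruning for an RkNN query, assuming the standard six-sector property (only the k nearest objects within each 60-degree sector around the query can be results, each then verified against its own k-NN); the paper's MR implementation and monitoring-area maintenance are not reproduced.

```python
# Sketch: six-sector (60-degree) pruning for a reverse k-NN query.
import numpy as np

def rknn(q, points, k=2):
    v = points - q
    sector = (np.degrees(np.arctan2(v[:, 1], v[:, 0])) % 360 // 60).astype(int)
    dist_q = np.linalg.norm(v, axis=1)
    candidates = []
    for s in range(6):                          # prune: k nearest per sector
        idx = np.flatnonzero(sector == s)
        candidates.extend(idx[np.argsort(dist_q[idx])[:k]])
    result = []
    for c in candidates:                        # verify: is q within c's k-NN?
        d = np.linalg.norm(points - points[c], axis=1)
        d[c] = np.inf                           # ignore distance to itself
        kth = np.sort(d)[k - 1]
        if dist_q[c] <= kth:
            result.append(c)
    return result

rng = np.random.default_rng(5)
pts = rng.random((500, 2))
print("RkNN of the center point:", rknn(np.array([0.5, 0.5]), pts, k=3))
```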

Enhancement of Text Classification Method (텍스트 분류 기법의 발전)

  • Shin, Kwang-Seong;Shin, Seong-Yoon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2019.05a / pp.155-156 / 2019
  • Traditional machine learning-based sentiment analysis methods such as Classification and Regression Trees (CART), Support Vector Machines (SVM), and k-nearest neighbor classification (kNN) suffer from limited accuracy. In this paper, we propose an improved kNN classification method. The improved method, combined with data normalization, achieves the goal of improving accuracy. The three classification algorithms and the improved algorithm are then compared on experimental data.

An Improved Text Classification Method for Sentiment Classification

  • Wang, Guangxing;Shin, Seong Yoon
    • Journal of Information and Communication Convergence Engineering / v.17 no.1 / pp.41-48 / 2019
  • In recent years, sentiment analysis research has become popular. The results of sentiment analysis have achieved remarkable success in practical applications such as Amazon's book recommendation system and the North American movie box office evaluation system. Analyzing big data based on user preferences and evaluations, and recommending hot-selling books and highly rated movies to users in a targeted manner, greatly improves book sales and movie attendance [1, 2]. However, traditional machine learning-based sentiment analysis methods such as the Classification and Regression Tree (CART), Support Vector Machine (SVM), and k-nearest neighbor classification (kNN) have performed poorly in terms of accuracy. In this paper, an improved kNN classification method is proposed. Through the improved method and data normalization, higher accuracy is achieved. The three classification algorithms and the improved algorithm were then compared on experimental data. Experiments show that the improved method performs best among the kNN classification methods, improving the accuracy rate by 11.5% and the precision rate by 20.3%.
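
A minimal sketch of the comparison both papers above describe: CART, SVM, and kNN on normalized text features. The specific kNN improvement is not given in the abstracts, so a distance-weighted vote stands in as one plausible variant; the toy corpus and all parameters are illustrative only.

```python
# Sketch: CART vs. SVM vs. kNN vs. a distance-weighted kNN variant on
# TF-IDF features (TF-IDF rows are already length-normalized).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

docs = ["great book, loved it", "terrible plot, boring", "wonderful story",
        "awful writing", "enjoyed every page", "waste of time",
        "brilliant and moving", "dull and predictable"] * 10
labels = np.array([1, 0, 1, 0, 1, 0, 1, 0] * 10)

X = TfidfVectorizer().fit_transform(docs)
models = {
    "CART": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(kernel="linear"),
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "weighted kNN": KNeighborsClassifier(n_neighbors=5, weights="distance"),
}
for name, model in models.items():
    acc = cross_val_score(model, X, labels, cv=5).mean()
    print(f"{name:>12}: {acc:.3f}")
```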

Noise-Reduction of Student's Learning Data using k-NN Method (k-NN 기법을 이용한 학습자 데이터의 노이즈 선별 방법)

  • Yun, Tae-Bok;Lee, Ji-Hyeong;Jeong, Yeong-Mo;Cha, Hyeon-Jin;Park, Seon-Hui;Kim, Yong-Se
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2006.11a / pp.135-138 / 2006
  • User modeling collects and analyzes diverse information about a user's preferences and behavior. However, data obtained from users (humans) are harder to model than data collected from machines or environments, because their patterns are difficult to find: users produce different results depending on their current state and situation, and do not always behave consistently. User modeling therefore requires techniques for screening out noise from scattered data and grouping related data. In this paper, noise in data collected from users is screened out using the k-NN (Nearest Neighbor) method. The denoised data were then learned with a decision tree, and the result was compared with that obtained before noise filtering. The experiments used learner data collected with DOLLS-HI, a home-interior learning content, and confirmed that the reliability of the resulting learner model increased.
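
A minimal sketch of the described pipeline, assuming a sample is flagged as noise when its label disagrees with the majority of its k nearest neighbors; the DOLLS-HI learner data are not available, so a synthetic set with injected label noise stands in.

```python
# Sketch: k-NN noise screening followed by decision-tree learning.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier

def knn_noise_filter(X, y, k=5):
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)   # +1: self is returned
    _, idx = nn.kneighbors(X)
    neighbor_labels = y[idx[:, 1:]]                   # drop the self column
    majority = (neighbor_labels.mean(axis=1) > 0.5).astype(int)
    return majority == y                              # keep mask (binary labels)

rng = np.random.default_rng(6)
X = np.vstack([rng.normal(0, 1, (150, 2)), rng.normal(3, 1, (150, 2))])
y = np.array([0] * 150 + [1] * 150)
flip = rng.random(300) < 0.1                          # inject 10% label noise
y_noisy = np.where(flip, 1 - y, y)

keep = knn_noise_filter(X, y_noisy, k=5)
tree = DecisionTreeClassifier(random_state=0).fit(X[keep], y_noisy[keep])
print(f"kept {keep.sum()} of {len(X)} samples")
```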

Calculating Attribute Weights in K-Nearest Neighbor Algorithms using Information Theory (정보이론을 이용한 K-최근접 이웃 알고리즘에서의 속성 가중치 계산)

  • Lee Chang-Hwan
    • Journal of KIISE: Software and Applications / v.32 no.9 / pp.920-926 / 2005
  • Nearest neighbor algorithms classify an unseen input instance by selecting similar cases and using the discovered membership to make predictions about the unknown features of the input instance. The usefulness of nearest neighbor algorithms has been demonstrated sufficiently in many real-world domains. In nearest neighbor algorithms, assigning proper weights to the attributes is an important issue. Therefore, in this paper, we propose a new method that automatically assigns each attribute a weight reflecting its importance with respect to the target attribute. The method has been implemented as a computer program, and its effectiveness has been tested on a number of publicly available machine learning databases.
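
A minimal sketch of information-theoretic attribute weighting for k-NN, using mutual information with the class as the weight and folding it into the distance by feature scaling (Euclidean distance on scaled data equals a weighted distance); the paper's exact weighting formula may differ.

```python
# Sketch: mutual-information attribute weights folded into k-NN's distance
# by scaling each feature with sqrt(weight).
import numpy as np
from sklearn.datasets import load_wine
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

X, y = load_wine(return_X_y=True)
X = StandardScaler().fit_transform(X)            # comparable scales first

w = mutual_info_classif(X, y, random_state=0)    # relevance to the target
w = w / w.sum()                                  # normalize the weights
Xw = X * np.sqrt(w)                              # weighted-Euclidean trick

for name, data in [("uniform weights", X), ("MI weights", Xw)]:
    acc = cross_val_score(KNeighborsClassifier(5), data, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```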