• Title/Summary/Keyword: k-nearest neighbor method


Discriminant Metric Learning Approach for Face Verification

  • Chen, Ju-Chin; Wu, Pei-Hsun; Lien, Jenn-Jier James
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.2 / pp.742-762 / 2015
  • In this study, we propose a distance metric learning approach called discriminant metric learning (DML) for face verification, which addresses a binary-class problem of classifying whether or not two input images are of the same subject. The critical issue in solving this problem is determining the method to be used for measuring the distance between two images. Among various methods, the large margin nearest neighbor (LMNN) method is a state-of-the-art algorithm. However, to compensate for LMNN's entangled data distribution due to high levels of appearance variation in unconstrained environments, DML's goal is to penalize violations of the negative-pair distance relationship, i.e., between images with different labels, while being integrated with LMNN to model the distance relation between positive pairs, i.e., images with the same label. The likelihoods of the input images, estimated using the DML and LMNN metrics, are then weighted and combined for further analysis. Additionally, rather than using the k-nearest neighbor (k-NN) classification mechanism, we propose a verification mechanism that measures the correlation of the class label distribution of neighbors to reduce the false negative rate of positive pairs. The experimental results show that DML can modify the relation of negative pairs in the original LMNN space and compensate for LMNN's performance on faces with large variances, such as pose and expression.
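
As an illustration of the verification setting described above, the sketch below thresholds a learned Mahalanobis-style distance between two face descriptors to decide whether they depict the same subject. The 128-dimensional features, the identity matrix standing in for a DML/LMNN-learned metric, and the threshold are all hypothetical, not taken from the paper.

```python
import numpy as np

def metric_distance(x, y, M):
    """Squared distance between descriptors x and y under a (learned) metric M."""
    d = x - y
    return float(d @ M @ d)

def verify_pair(x, y, M, threshold):
    """Return True if the two face descriptors are judged to belong to the same subject."""
    return metric_distance(x, y, M) < threshold

# Toy usage: the identity metric stands in for a matrix learned by DML/LMNN.
rng = np.random.default_rng(0)
x, y = rng.normal(size=128), rng.normal(size=128)
M = np.eye(128)
print(verify_pair(x, y, M, threshold=250.0))
```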

A study on the spatial neighborhood in spatial regression analysis (공간이웃정보를 고려한 공간회귀분석)

  • Kim, Sujung
    • Journal of the Korean Data and Information Science Society / v.28 no.3 / pp.505-513 / 2017
  • Recently, numerous small area estimation studies have been conducted to obtain more detailed and accurate estimation results. Most of these studies have employed spatial regression models, which require a clear definition of spatial neighborhoods. In this study, we introduce the Delaunay triangulation as a method for defining spatial neighborhoods and compare it with the k-nearest neighbor method. A simulation was conducted to determine which of the two methods defines spatial neighborhoods more efficiently, and we demonstrate the performance of the proposed method using land price data.
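
The sketch below contrasts the two neighborhood definitions compared in the paper, Delaunay triangulation and k-nearest neighbors, on hypothetical area centroids; the random coordinates and k = 5 are illustrative assumptions only.

```python
import numpy as np
from scipy.spatial import Delaunay
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
coords = rng.uniform(size=(50, 2))   # hypothetical small-area centroids

# Delaunay-based neighborhood: two areas are neighbors if they share a triangulation edge.
tri = Delaunay(coords)
indptr, indices = tri.vertex_neighbor_vertices
delaunay_neighbors = {i: set(indices[indptr[i]:indptr[i + 1]]) for i in range(len(coords))}

# k-nearest-neighbor neighborhood (k = 5), excluding the point itself.
nn = NearestNeighbors(n_neighbors=6).fit(coords)
_, knn_idx = nn.kneighbors(coords)
knn_neighbors = {i: set(knn_idx[i, 1:]) for i in range(len(coords))}

print(delaunay_neighbors[0], knn_neighbors[0])
```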

Interference Elimination Method of Ultrasonic Sensors Using K-Nearest Neighbor Algorithm (KNN 알고리즘을 활용한 초음파 센서 간 간섭 제거 기법)

  • Im, Hyungchul; Lee, Seongsoo
    • Journal of IKEEE / v.26 no.2 / pp.169-175 / 2022
  • This paper introduces an interference elimination method using k-nearest neighbor (KNN) algorithm for precise distance estimation by reducing interference between ultrasonic sensors. Conventional methods compare current distance measurement result with previous distance measurement results. If the difference exceeds some thresholds, conventional methods recognize them as interference and exclude them, but they often suffer from imprecise distance prediction. KNN algorithm classifies input values measured by multiple ultrasonic sensors and predicts high accuracy outputs. Experiments of distance measurements are conducted where interference frequently occurs by multiple ultrasound sensors of same type, and the results show that KNN algorithm significantly reduce distance prediction errors. Also the results show that the prediction performance of KNN algorithm is superior to conventional voting methods.
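
A minimal sketch of the idea, assuming (hypothetically) that k-NN regression maps a vector of readings from several ultrasonic sensors to a single distance estimate; the sensor count, noise model, and interference rate below are invented for illustration and are not from the paper.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(2)

# Hypothetical training data: readings from 4 ultrasonic sensors (cm),
# some corrupted by cross-talk, labelled with the true distance.
true_dist = rng.uniform(20, 200, size=500)
readings = true_dist[:, None] + rng.normal(0, 1.0, size=(500, 4))
interfered = rng.random((500, 4)) < 0.1            # ~10% of readings hit by interference
readings[interfered] += rng.uniform(30, 80, size=interfered.sum())

# k-NN regression maps the (possibly interfered) multi-sensor vector to a distance estimate.
model = KNeighborsRegressor(n_neighbors=5).fit(readings, true_dist)
print(model.predict(readings[:3]))
```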

A Pattern Classification Method using Closest Decision Method in k Nearest Neighbor Prototypes (k 근방 원형상에서 최근접 결정법을 이용한 패턴식별법)

  • Kim, Eung-Kyeu; Lee, Soo-Jong
    • Proceedings of the IEEK Conference / 2008.06a / pp.833-834 / 2008
  • In this paper, a pattern classification method using a closest decision rule based on the mean norm of the closest prototype to an input pattern and its k nearest neighbor prototypes is presented, in order to classify arbitrarily distributed patterns accurately when the number of patterns is very small. This method can also classify input patterns precisely in such cases because it weights prototypes around the discrimination boundary by the difference in their variance.
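
One plausible reading of the closest decision rule, sketched below: among the k nearest prototypes of an input, each class present is scored by the mean distance to its prototypes, and the class with the smallest mean wins. The toy prototypes and k = 3 are hypothetical, and the variance-based weighting mentioned in the abstract is omitted.

```python
import numpy as np

def classify(x, prototypes, labels, k=5):
    """Assign x to the class whose prototypes among the k nearest lie closest on average."""
    dists = np.linalg.norm(prototypes - x, axis=1)
    nearest = np.argsort(dists)[:k]
    best_label, best_mean = None, np.inf
    for c in np.unique(labels[nearest]):
        mean_d = dists[nearest][labels[nearest] == c].mean()
        if mean_d < best_mean:
            best_label, best_mean = c, mean_d
    return best_label

# Toy usage with a handful of prototypes per class.
prototypes = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]], dtype=float)
labels = np.array([0, 0, 0, 1, 1, 1])
print(classify(np.array([0.6, 0.4]), prototypes, labels, k=3))
```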


An Improved Text Classification Method for Sentiment Classification

  • Wang, Guangxing; Shin, Seong Yoon
    • Journal of information and communication convergence engineering / v.17 no.1 / pp.41-48 / 2019
  • In recent years, sentiment analysis research has become popular. The results of sentiment analysis research have been remarkable in practical applications, such as Amazon's book recommendation system and the North American movie box office evaluation system. Analyzing big data based on user preferences and evaluations and recommending hot-selling books and highly rated movies to users in a targeted manner greatly improves book sales and movie attendance [1, 2]. However, traditional machine learning-based sentiment analysis methods such as the Classification and Regression Tree (CART), Support Vector Machine (SVM), and k-nearest neighbor classification (kNN) have performed poorly in terms of accuracy. In this paper, an improved kNN classification method is proposed. Through the improved method and normalization of the data, higher accuracy is achieved. The three classification algorithms and the improved algorithm were then compared on experimental data. Experiments show that the improved method performs best among the kNN classification methods, with an accuracy rate of 11.5% and a precision rate of 20.3%.
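
The abstract does not give the details of the improvement, but the sketch below illustrates the general recipe of normalizing features before k-NN classification, which keeps large-range features from dominating the distance computation; the toy review features and labels are hypothetical.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.neighbors import KNeighborsClassifier

# Toy review features (e.g. positive cue count, negative cue count, review length); 1 = positive.
X = [[3, 0, 120], [0, 4, 80], [2, 1, 200], [1, 5, 60], [4, 0, 150], [0, 3, 90]]
y = [1, 0, 1, 0, 1, 0]

# Normalizing features before k-NN keeps the large-range column (review length)
# from dominating the distance computation.
clf = make_pipeline(MinMaxScaler(), KNeighborsClassifier(n_neighbors=3))
clf.fit(X, y)
print(clf.predict([[2, 1, 100]]))
```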

Data Classification Using the Robbins-Monro Stochastic Approximation Algorithm (로빈스-몬로 확률 근사 알고리즘을 이용한 데이터 분류)

  • Lee, Jae-Kook; Ko, Chun-Taek; Choi, Won-Ho
    • Proceedings of the KIPE Conference / 2005.07a / pp.624-627 / 2005
  • This paper presents a new data classification method using the Robbins-Monro stochastic approximation algorithm, the k-nearest neighbor algorithm, and distribution analysis. To cluster the data set, the centroid of the test data set is determined using the k-nearest neighbor algorithm and the local area of the data set. To decide the class of each datum, the Robbins-Monro stochastic approximation algorithm is applied to that local area. To evaluate its performance, the proposed classification method is compared with the conventional fuzzy c-means method and the k-NN algorithm. The simulation results show that the proposed method is more accurate than the fuzzy c-means method, the k-NN algorithm, and the discriminant analysis algorithm.
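
A minimal sketch of the Robbins-Monro recursion as it might be used for centroid estimation, assuming the simple form theta_{n+1} = theta_n + a_n (x_n - theta_n) with step size a_n = 1/n over a hypothetical local data set; the paper's exact update rule is not specified in the abstract.

```python
import numpy as np

def robbins_monro_mean(samples):
    """Robbins-Monro recursion theta_{n+1} = theta_n + (x_n - theta_n)/n,
    which converges to the mean of the sampled distribution."""
    theta = np.zeros(samples.shape[1])
    for n, x in enumerate(samples, start=1):
        theta += (x - theta) / n
    return theta

rng = np.random.default_rng(3)
local_area = rng.normal(loc=[2.0, -1.0], scale=0.5, size=(1000, 2))  # hypothetical local data set
print(robbins_monro_mean(local_area))   # approaches [2, -1]
```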


A Study on Fault Detection and Diagnosis of Gear Damages - A Comparison between Wavelet Transform Analysis and Kullback Discrimination Information - (기어의 이상검지 및 진단에 관한 연구 -Wavelet Transform해석과 KDI의 비교-)

  • Kim, Tae-Gu; Kim, Kwang-Il
    • Journal of the Korean Society of Safety / v.15 no.2 / pp.1-7 / 2000
  • This paper presents an approach to fault detection and diagnosis of gears using pattern recognition and the wavelet transform. It reports a comparison between KDI (Kullback Discrimination Information) combined with the nearest neighbor classification rule, as a pattern recognition method, and wavelet transform analysis, in order to determine how to detect and diagnose gear damage experimentally. Four damage conditions were modeled: 1) normal (no defect), 2) one tooth worn out, 3) all tooth faces worn out, and 4) one tooth broken. A vibration sensor was attached to the bearing housing, producing 20 time-history records for each condition. We chose the standard data and measured the distance between the standard and tested data. In the wavelet transform analysis method, time-series data of the magnitude at the specified frequencies (rotary and mesh frequencies) were obtained. As a result, the monitoring system using the wavelet transform method and KDI with the nearest neighbor classification rule successfully detected and classified the damage from the experimental data.
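
As a rough illustration of such a pipeline, the sketch below extracts wavelet-energy features from a vibration signal (via PyWavelets) and assigns the nearest reference condition; the wavelet choice, signal length, and randomly generated reference signatures are hypothetical stand-ins for the paper's measured data.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_energy_features(signal, wavelet="db4", level=4):
    """Energy of each wavelet decomposition level, used as a compact damage signature."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

def nearest_condition(test_signal, standards):
    """Return the label of the reference (standard) signature closest to the test record."""
    f = wavelet_energy_features(test_signal)
    return min(standards, key=lambda name: np.linalg.norm(f - standards[name]))

# Hypothetical reference signatures for the four gear conditions.
rng = np.random.default_rng(4)
standards = {name: wavelet_energy_features(rng.normal(size=1024))
             for name in ["normal", "one tooth worn", "all teeth worn", "one tooth broken"]}
print(nearest_condition(rng.normal(size=1024), standards))
```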


k-Nearest Neighbor Query Processing using Approximate Indexing in Road Network Databases (도로 네트워크 데이타베이스에서 근사 색인을 이용한 k-최근접 질의 처리)

  • Lee, Sang-Chul; Kim, Sang-Wook
    • Journal of KIISE:Databases / v.35 no.5 / pp.447-458 / 2008
  • In this paper, we address an efficient processing scheme for k-nearest neighbor queries that retrieve k static objects in road network databases. Existing methods cannot obtain a query-processing speed-up from index structures in road network databases, since it is impossible to build an index on the network distance, which cannot meet the triangular inequality requirement essential for index creation and only attainable in a totally ordered set. Thus, these previous methods suffer from serious performance degradation in query processing. Another method using pre-computed network distances suffers from a serious storage overhead to maintain the huge amount of pre-computed distances. To solve these performance and storage problems at the same time, this paper proposes a novel approach that creates an index for moving objects by approximating their network distances and efficiently processes k-nearest neighbor queries by means of this approximate index. For this approach, we propose a systematic way of mapping each moving object on a road network to a corresponding absolute position in m-dimensional space. To meet the triangular inequality, this paper proposes a new notion of average network distance and uses FastMap to map moving objects to their corresponding points in the m-dimensional space. We then present an approximate indexing algorithm that builds an R*-tree, a multidimensional index, on the m-dimensional points of the moving objects. The proposed scheme includes a query processing algorithm capable of efficiently evaluating k-nearest neighbor queries by finding the k nearest points (i.e., the k nearest moving objects) in the m-dimensional index. Finally, extensive experiments verify the performance enhancement of the proposed approach, especially on real-life road network databases.
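
A highly simplified sketch of the approximate-indexing idea, with a KD-tree standing in for the paper's R*-tree and random coordinates standing in for the FastMap embedding of average network distances; the candidate-refinement step is only a placeholder, since exact network-distance re-ranking requires the road graph.

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical m-dimensional coordinates produced by a FastMap-style embedding
# of objects on the road network (stand-in for the paper's average-network-distance mapping).
rng = np.random.default_rng(5)
m = 4
object_coords = rng.uniform(size=(10_000, m))

# A KD-tree stands in here for the R*-tree used in the paper.
index = cKDTree(object_coords)

def approximate_knn(query_coords, k=10, refine_factor=3):
    """Fetch refine_factor*k candidates from the approximate index; a real system would
    then re-rank them by exact network distance and keep the best k."""
    _, candidates = index.query(query_coords, k=refine_factor * k)
    return candidates[:k]   # placeholder re-ranking: keep the k nearest in embedding space

print(approximate_knn(rng.uniform(size=m)))
```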

Acoustic Emission Source Classification of Finite-width Plate with a Circular Hole Defect using k-Nearest Neighbor Algorithm (k-최근접 이웃 알고리즘을 이용한 원공결함을 갖는 유한 폭 판재의 음향방출 음원분류에 대한 연구)

  • Rhee, Zhang-Kyu; Oh, Jin-Soo
    • Journal of the Korea Safety Management & Science / v.11 no.1 / pp.27-33 / 2009
  • The study of material fracture is gaining interest in the nuclear and aerospace industries from a safety viewpoint. Acoustic emission (AE) is a non-destructive testing technology for evaluating the safety of structures. Continuing previous research, all tensile tests on the pre-defected coupons were performed using a universal testing machine whose crosshead moved at a constant speed of 5 mm/min. This study evaluates AE source characterization of SM45C steel using the k-nearest neighbor classifier (k-NNC). For this, K-means clustering was used as an unsupervised learning method for the obtained multivariate AE main data sets, and k-NNC was applied as a supervised pattern recognition algorithm to the obtained multivariate AE working data sets. As a result, the criteria of Wilks' $\lambda$, D&B(Rij), and Tou are discussed.
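
A minimal sketch of the two-stage procedure described above, using scikit-learn: K-means provides provisional source labels for the AE main data sets, and a k-NN classifier trained on those labels classifies the working data sets; the feature dimension, cluster count, and random data are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(6)
ae_main = rng.normal(size=(300, 5))       # hypothetical multivariate AE main data sets
ae_working = rng.normal(size=(50, 5))     # hypothetical multivariate AE working data sets

# Unsupervised step: K-means clustering assigns provisional source labels to the main set.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(ae_main)

# Supervised step: a k-NN classifier trained on those labels classifies the working set.
knnc = KNeighborsClassifier(n_neighbors=5).fit(ae_main, labels)
print(knnc.predict(ae_working)[:10])
```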

Cross platform classification of microarrays by rank comparison

  • Lee, Sunho
    • Journal of the Korean Data and Information Science Society / v.26 no.2 / pp.475-486 / 2015
  • Mining the microarray data accumulated in public data repositories can save experimental cost and time and provide valuable biomedical information. Big data analysis that pools multiple data sets increases statistical power, improves the reliability of the results, and reduces the specific bias of an individual study. However, integrating several data sets from different studies requires dealing with many problems. In this study, the focus is limited to cross-platform classification, in which the platform of a testing sample differs from the platform of the training set, and a simple rank-based classification method is suggested. This method is compared with diagonal linear discriminant analysis, the k-nearest neighbor method, and the support vector machine using real cross-platform data sets for two cancers.
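
A minimal sketch of a rank-based cross-platform classifier, assuming each sample's expression values are replaced by their within-sample ranks before k-NN classification so that platform-specific scales and offsets cancel; the data shapes and k are hypothetical, and the paper's actual rank-comparison rule may differ.

```python
import numpy as np
from scipy.stats import rankdata
from sklearn.neighbors import KNeighborsClassifier

def rank_transform(X):
    """Replace each sample's values by their within-sample ranks, so that
    platform-specific scales and offsets no longer matter."""
    return np.apply_along_axis(rankdata, 1, X)

rng = np.random.default_rng(7)
X_train = rng.normal(size=(40, 200))                   # hypothetical platform-A training set
y_train = rng.integers(0, 2, size=40)
X_test = 3.0 * rng.normal(size=(10, 200)) + 5.0        # platform-B samples on a different scale

clf = KNeighborsClassifier(n_neighbors=3).fit(rank_transform(X_train), y_train)
print(clf.predict(rank_transform(X_test)))
```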