• Title/Summary/Keyword: K-Nearest Neighbor


Image Tracking Algorithm using Template Matching and PSNF-m

  • Bae, Jong-Sue;Song, Taek-Lyul
    • International Journal of Control, Automation, and Systems
    • /
    • v.6 no.3
    • /
    • pp.413-423
    • /
    • 2008
  • The template matching method is a simple way to track objects or patterns that we want to find in the input image data from image sensors. It recognizes the segment with the highest correlation as the target. The concept is similar to that of the strongest neighbor filter (SNF), which regards the measurement with the highest signal intensity among all measurements as target-originated. The SNF assumes that the strongest neighbor (SN) measurement in the validation gate originates from the target of interest and uses the SN in the update step of a standard Kalman filter (SKF). The SNF is widely used along with the nearest neighbor filter (NNF) because of its computational simplicity, despite the inconsistency of treating the SN as if it were the true target. The probabilistic strongest neighbor filter for m validated measurements (PSNF-m) accounts for the probability that the SN in the validation gate originates from the target, whereas the SNF always assumes that the SN measurement is target-originated. The PSNF-m is known to outperform the SNF at the cost of an increased computational load. In this paper, we propose an image tracking algorithm that combines template matching with the PSNF-m to estimate the states of a tracked target. Computer simulation results demonstrate the performance of the proposed algorithm in comparison with other algorithms.
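A minimal sketch of the template-matching step described above: pick the location whose normalized cross-correlation with the template is highest, the analogue of the strongest neighbor. The synthetic frame, the template crop, and the use of OpenCV's `matchTemplate` are assumptions for illustration; the PSNF-m gating and Kalman update are not shown.

```python
import cv2
import numpy as np

# Placeholder sensor frame and target template (random textures standing in for real imagery).
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(240, 320), dtype=np.uint8)
template = frame[100:130, 150:190].copy()   # crop a patch so a perfect match exists

# Normalized cross-correlation: every candidate location gets a score in [-1, 1].
scores = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)

# The "strongest neighbor" analogue: the location with the highest correlation is taken as the target.
_, max_val, _, max_loc = cv2.minMaxLoc(scores)
print("best match at", max_loc, "with correlation", max_val)
```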

HOT GAS HALOS IN EARLY-TYPE GALAXIES AND ENVIRONMENTS

  • Kim, Eunbin;Choi, Yun-Young;Kim, Sungsoo S.
    • Journal of The Korean Astronomical Society
    • /
    • v.46 no.1
    • /
    • pp.33-40
    • /
    • 2013
  • We investigate the dependence of the extended X-ray emission from the halos of optically luminous early-type galaxies on the small-scale (the nearest neighbor distance) and large-scale (the average density within the 20 nearest galaxies) environments. We cross-match the 3rd Data Release of the Second XMM-Newton Serendipitous Source Catalog (2XMMi-DR3) to a volume-limited sample of the Sloan Digital Sky Survey (SDSS) Data Release 7 with $M_r$ < -19.5 and 0.020 < z < 0.085, and find 20 early-type galaxies that have extended X-ray detections. The X-ray luminosity of the galaxies shows a tighter correlation with the optical and near-infrared luminosities when the galaxy is situated in a low large-scale density region than in a high large-scale density region. Furthermore, the X-ray to optical (r-band) luminosity ratio, $L_X/L_r$, shows a clear correlation with the distance to the nearest neighbor and with the large-scale density environment only when the galaxies in a pair interact hydrodynamically, with separations of $r_p$ < $r_{vir}$. These findings indicate that galaxies in high local density regions have mechanisms other than the current presence of a close encounter that are responsible for their halo X-ray luminosities, or alternatively, that in high local density regions the cooling time of the heated gas halo is longer than the typical time between subsequent encounters.
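As a rough illustration of the two nearest-neighbor-based environment measures mentioned above, the sketch below computes the nearest-neighbor distance and a number density derived from the 20 nearest galaxies with a k-d tree. The random positions and the simple spherical-volume density are assumptions, not the paper's estimators.

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical 3D positions for a volume-limited galaxy sample (placeholder data, Mpc).
rng = np.random.default_rng(1)
positions = rng.uniform(0.0, 200.0, size=(5000, 3))

tree = cKDTree(positions)
# k=21 because the first neighbor returned for each galaxy is the galaxy itself (distance 0).
dists, _ = tree.query(positions, k=21)

nearest_neighbor_distance = dists[:, 1]                     # small-scale environment
r20 = dists[:, 20]                                          # radius enclosing the 20 nearest galaxies
large_scale_density = 20.0 / (4.0 / 3.0 * np.pi * r20**3)   # a simple number-density proxy
```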

k-Nearest Neighbor Learning with Varying Norms (놈(Norm)에 따른 k-최근접 이웃 학습의 성능 변화)

  • Kim, Doo-Hyeok;Kim, Chan-Ju;Hwang, Kyu-Baek
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2008.06c
    • /
    • pp.371-375
    • /
    • 2008
  • k-nearest neighbor (k-NN) learning, one of the instance-based learning methods, is simple and achieves relatively high predictive accuracy, so it is widely used as a base methodology for solving classification and regression problems. Algorithms for k-NN learning basically compute the distances between training examples using the Euclidean distance, i.e., the 2-norm. In this paper, we study how using the p-norm, a generalization of the Euclidean distance, affects the performance of k-NN learning. Specifically, we empirically investigated the generalization performance of various p-norms on synthetic data, a number of machine learning benchmark problems, and real-world data. The experimental results show that a small value of p can improve performance when the data contain much noise or the problem is difficult.
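A minimal sketch of the kind of experiment described above, varying the Minkowski p-norm used by a k-NN classifier; scikit-learn's `KNeighborsClassifier` exposes `p` directly, and the built-in dataset here is only a stand-in for the paper's benchmarks.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)  # stand-in benchmark dataset

# Compare generalization performance of k-NN under different p-norms.
for p in (1, 2, 3, 5):
    knn = KNeighborsClassifier(n_neighbors=5, metric="minkowski", p=p)
    acc = cross_val_score(knn, X, y, cv=5).mean()
    print(f"p={p}: mean CV accuracy = {acc:.3f}")
```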


k-NN Join Based on LSH in Big Data Environment

  • Ji, Jiaqi;Chung, Yeongjee
    • Journal of information and communication convergence engineering
    • /
    • v.16 no.2
    • /
    • pp.99-105
    • /
    • 2018
  • k-Nearest neighbor join (k-NN Join) is a computationally intensive algorithm that is designed to find k-nearest neighbors from a dataset S for every object in another dataset R. Most related studies on k-NN Join are based on single-computer operations. As the data dimensions and data volume increase, running the k-NN Join algorithm on a single computer cannot generate results quickly. To solve this scalability problem, we introduce the locality-sensitive hashing (LSH) k-NN Join algorithm implemented in Spark, an approach for high-dimensional big data. LSH is used to map similar data onto the same bucket, which can reduce the data search scope. In order to achieve parallel implementation of the algorithm on multiple computers, the Spark framework is used to accelerate the computation of distances between objects in a cluster. Results show that our proposed approach is fast and accurate for high-dimensional and big data.
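The sketch below is not the authors' implementation; it illustrates the same idea (LSH buckets restrict the search scope, Spark parallelizes the distance computations) using PySpark's built-in BucketedRandomProjectionLSH plus a per-row top-k step. The toy data, bucket length, threshold, and k are assumptions.

```python
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F
from pyspark.ml.feature import BucketedRandomProjectionLSH
from pyspark.ml.linalg import Vectors

spark = SparkSession.builder.appName("knn-join-lsh-sketch").getOrCreate()

# Hypothetical datasets R and S with vector features.
R = spark.createDataFrame([(i, Vectors.dense([float(i), float(i % 7)])) for i in range(100)],
                          ["r_id", "features"])
S = spark.createDataFrame([(j, Vectors.dense([float(j % 13), float(j)])) for j in range(100)],
                          ["s_id", "features"])

# LSH maps similar vectors to the same buckets, reducing the data search scope.
brp = BucketedRandomProjectionLSH(inputCol="features", outputCol="hashes",
                                  bucketLength=2.0, numHashTables=3)
model = brp.fit(S)

# Approximate join of pairs within a distance threshold, then keep the k closest S rows per R row.
pairs = model.approxSimilarityJoin(R, S, threshold=10.0, distCol="dist")
k = 3
w = Window.partitionBy("datasetA.r_id").orderBy("dist")
knn_join = pairs.withColumn("rank", F.row_number().over(w)).filter(F.col("rank") <= k)
knn_join.select("datasetA.r_id", "datasetB.s_id", "dist").show()
```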

kNNDD-based One-Class Classification by Nonparametric Density Estimation (비모수 추정방법을 활용한 kNNDD의 이상치 탐지 기법)

  • Son, Jung-Hwan;Kim, Seoung-Bum
    • Journal of Korean Institute of Industrial Engineers
    • /
    • v.38 no.3
    • /
    • pp.191-197
    • /
    • 2012
  • One-class classification (OCC) is one of the growing areas in data mining and pattern recognition. In the present study, we examine the k-nearest neighbors data description (kNNDD) algorithm, one of the most widely used OCC algorithms. In particular, we propose using nonparametric estimation methods to determine the threshold of the kNNDD algorithm. A simulation study was conducted to explore the characteristics of the proposed approach and to compare it with the existing threshold-determination approach. The results demonstrate the usefulness and flexibility of the proposed approach.
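A rough sketch of the kNNDD idea: the novelty score of a point is its distance to its k-th nearest neighbor among the one-class training data, and points whose score exceeds a data-driven threshold are flagged as outliers. The synthetic data, k, and the simple quantile threshold are assumptions; the paper's specific nonparametric estimator is not reproduced here.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
train = rng.normal(size=(300, 2))          # one-class (target) training data, assumed
test = rng.normal(loc=3.0, size=(20, 2))   # candidate outliers, assumed

k = 5
nn = NearestNeighbors(n_neighbors=k).fit(train)

# kNNDD score: distance to the k-th nearest neighbor within the target data.
train_scores = nn.kneighbors(train, n_neighbors=k + 1)[0][:, -1]  # skip the self-distance
test_scores = nn.kneighbors(test, n_neighbors=k)[0][:, -1]

# Data-driven threshold: here simply the 95th percentile of the training scores.
threshold = np.quantile(train_scores, 0.95)
is_outlier = test_scores > threshold
print(is_outlier)
```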

Impact of Instance Selection on kNN-Based Text Categorization

  • Barigou, Fatiha
    • Journal of Information Processing Systems
    • /
    • v.14 no.2
    • /
    • pp.418-434
    • /
    • 2018
  • With the increasing use of the Internet and electronic documents, automatic text categorization becomes imperative. Several machine learning algorithms have been proposed for text categorization. The k-nearest neighbor algorithm (kNN) is known to be one of the best state-of-the-art classifiers for text categorization. However, kNN suffers from limitations such as a high computational cost when classifying new instances. Instance selection techniques have emerged as highly competitive methods for improving kNN through data reduction. However, previous works have evaluated those approaches only on structured datasets, and their performance has not been examined in the text categorization domain, where the dimensionality and size of the dataset are very high. Motivated by these observations, this paper investigates and analyzes the impact of instance selection on kNN-based text categorization in terms of classification accuracy, classification efficiency, and data reduction.
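The paper surveys several instance selection techniques; the sketch below illustrates just one simple condensed-nearest-neighbor-style pass applied before kNN text categorization with TF-IDF features. The dataset, the selection rule, and the feature settings are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier

# Stand-in text categorization data: a small two-class subset of 20 Newsgroups.
cats = ["sci.space", "rec.autos"]
data = fetch_20newsgroups(subset="train", categories=cats,
                          remove=("headers", "footers", "quotes"))
X = TfidfVectorizer(max_features=2000).fit_transform(data.data).toarray()
y = np.array(data.target)

# One condensed-nearest-neighbor style pass: keep an instance only if the current
# prototype set misclassifies it (an illustration of instance selection, not the paper's method).
keep = [0]
for i in range(1, len(y)):
    knn1 = KNeighborsClassifier(n_neighbors=1).fit(X[keep], y[keep])
    if knn1.predict(X[i:i + 1])[0] != y[i]:
        keep.append(i)

print(f"kept {len(keep)} of {len(y)} training instances")
```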

Short-term Traffic States Prediction Using k-Nearest Neighbor Algorithm: Focused on Urban Expressway in Seoul (k-NN 알고리즘을 활용한 단기 교통상황 예측: 서울시 도시고속도로 사례)

  • KIM, Hyungjoo;PARK, Shin Hyoung;JANG, Kitae
    • Journal of Korean Society of Transportation
    • /
    • v.34 no.2
    • /
    • pp.158-167
    • /
    • 2016
  • This study evaluates potential sources of error in the k-NN (k-nearest neighbor) algorithm, such as procedures, variables, and input data. Previous research was thoroughly reviewed to understand the fundamentals of the k-NN algorithm, which has been widely used for short-term traffic state prediction. The framework of this algorithm commonly includes historical data smoothing, a pattern database, a similarity measure, the k-value, and the prediction horizon. The outcomes of this study suggest that: i) historical data smoothing is recommended to reduce random noise in the measured traffic data; ii) the historical database should contain traffic state information for both normal and event conditions; and iii) a trial-and-error method can improve the prediction accuracy by better searching for the optimal input time series and k-value. The results also demonstrate that the prediction error increases with the length of the prediction horizon and with rapidly changing traffic states.
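A minimal sketch of the forecasting loop this framework describes: smooth the history, match the most recent window of traffic states against a pattern database, and average what followed the k most similar historical patterns. The synthetic speeds, window length, smoothing width, horizon, and k are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical 5-minute speed measurements on one expressway segment (km/h).
speeds = 80 + 10 * np.sin(np.arange(2000) / 50.0) + rng.normal(scale=3.0, size=2000)

# i) Smooth the historical data to reduce random noise (simple moving average).
window = 3
smoothed = np.convolve(speeds, np.ones(window) / window, mode="valid")

# ii) Pattern database: every historical input window and the state that followed it.
m, horizon = 6, 1                         # input time-series length, prediction horizon
n_patterns = len(smoothed) - m - horizon
patterns = np.array([smoothed[i:i + m] for i in range(n_patterns)])
futures = np.array([smoothed[i + m + horizon - 1] for i in range(n_patterns)])

# iii) Predict the next state from the k most similar historical patterns.
current = smoothed[-m:]
k = 10
dists = np.linalg.norm(patterns - current, axis=1)   # similarity measure
neighbors = np.argsort(dists)[:k]
prediction = futures[neighbors].mean()
print(f"predicted speed: {prediction:.1f} km/h")
```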

An Improvement in K-NN Graph Construction using re-grouping with Locality Sensitive Hashing on MapReduce (MapReduce 환경에서 재그룹핑을 이용한 Locality Sensitive Hashing 기반의 K-Nearest Neighbor 그래프 생성 알고리즘의 개선)

  • Lee, Inhoe;Oh, Hyesung;Kim, Hyoung-Joo
    • KIISE Transactions on Computing Practices
    • /
    • v.21 no.11
    • /
    • pp.681-688
    • /
    • 2015
  • The k-nearest neighbor (k-NN) graph construction is an important operation with many web-related applications, including collaborative filtering, similarity search, and many others in data mining and machine learning. Despite its many elegant properties, the brute-force k-NN graph construction method has a computational complexity of $O(n^2)$, which is prohibitive for large-scale data sets. Thus, the (key, value)-based distributed framework MapReduce is increasingly combined with locality sensitive hashing (LSH), which is efficient for high-dimensional and sparse data. Based on this two-stage strategy, we use the locality sensitive hashing technique to divide users into small subsets and then calculate the similarity between pairs within each subset using a brute-force method on MapReduce. The candidate-group generation stage is particularly important because the brute-force calculation is performed in the following step. However, existing methods do not prevent large candidate groups. In this paper, we propose an efficient algorithm for approximate k-NN graph construction that regroups candidate groups. Experimental results show that our approach is more effective than existing methods in terms of graph accuracy and scan rate.
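The regrouping idea (never handing an oversized candidate group to the brute-force stage) can be illustrated without MapReduce as below. The random-hyperplane hash, the group-size cap, and the split rule are my assumptions, not the paper's exact algorithm.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 50))            # hypothetical user vectors

# Stage 1: LSH signature via random hyperplane projections (sign bits).
planes = rng.normal(size=(50, 8))
signatures = (X @ planes > 0).astype(int)

groups = defaultdict(list)
for idx, sig in enumerate(map(tuple, signatures)):
    groups[sig].append(idx)

# Regrouping: split any candidate group larger than a cap into smaller chunks,
# so the brute-force stage never receives an oversized group.
cap = 200
regrouped = []
for members in groups.values():
    for start in range(0, len(members), cap):
        regrouped.append(members[start:start + cap])

# Stage 2: brute-force k-NN inside each (bounded) candidate group.
k = 5
knn_graph = {}
for members in regrouped:
    sub = X[members]
    d = np.linalg.norm(sub[:, None, :] - sub[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    for row, idx in enumerate(members):
        knn_graph[idx] = [members[j] for j in np.argsort(d[row])[:k]]
```

Because edges are only considered between points sharing a group, this approximate graph trades some accuracy for a much lower scan rate, which is exactly the trade-off the abstract evaluates.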

Visual Classification of Wood Knots Using k-Nearest Neighbor and Convolutional Neural Network (k-Nearest Neighbor와 Convolutional Neural Network에 의한 제재목 표면 옹이 종류의 화상 분류)

  • Kim, Hyunbin;Kim, Mingyu;Park, Yonggun;Yang, Sang-Yun;Chung, Hyunwoo;Kwon, Ohkyung;Yeo, Hwanmyeong
    • Journal of the Korean Wood Science and Technology
    • /
    • v.47 no.2
    • /
    • pp.229-238
    • /
    • 2019
  • Various wood defects occur during tree growth or wood processing. Thus, to use wood practically, it is necessary to objectively assess its quality with respect to the usage requirements by accurately classifying its defects. However, manual visual grading and species classification may produce inconsistent results owing to subjective decisions; therefore, computer-vision-based image analysis is required for the objective evaluation of wood quality and for speeding up wood production. In this study, SIFT+k-NN and CNN models were used to implement a model that automatically classifies knots, and their accuracies were analyzed. Toward this end, a total of 1,172 knot images in various shapes from five domestic conifer species were used for training and validation. For the SIFT+k-NN model, SIFT was used to extract features from the knot images and k-NN was used for the classification, resulting in a classification accuracy of up to 60.53% when k was 17. The CNN model comprised 8 convolution layers and 3 hidden layers, and its maximum accuracy was 88.09% after 1,205 epochs, which was higher than that of the SIFT+k-NN model. Moreover, when there was a large difference in the number of images per knot type, the SIFT+k-NN model tended to learn a bias toward the knot types with more images, whereas the CNN model did not show a drastic bias regardless of the difference in the number of images. Therefore, the CNN model showed better performance in knot classification, and its accuracy appears sufficient for practical application to wood knot classification.
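A compressed sketch of the SIFT+k-NN side of the comparison: extract SIFT descriptors per knot image, pool them into a fixed-length feature, and classify with k-NN. The random placeholder images, the knot-type labels, and the mean pooling (a simplification of typical bag-of-features pipelines) are all assumptions; only k = 17 comes from the abstract.

```python
import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

sift = cv2.SIFT_create()

def sift_feature(img):
    """Mean-pooled SIFT descriptor of one knot image (a simplification of bag-of-features)."""
    _, desc = sift.detectAndCompute(img, None)
    return desc.mean(axis=0) if desc is not None else np.zeros(128, dtype=np.float32)

# Placeholder images standing in for the knot photographs (random textures here).
rng = np.random.default_rng(3)
images = [rng.integers(0, 256, size=(128, 128), dtype=np.uint8) for _ in range(60)]
labels = ["sound" if i % 2 == 0 else "decayed" for i in range(60)]   # assumed knot types

X = np.array([sift_feature(img) for img in images])
knn = KNeighborsClassifier(n_neighbors=17)    # k = 17, the best-performing value reported above
knn.fit(X[:50], labels[:50])
print(knn.predict(X[50:]))
```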

Using a Similarity Threshold to Determine the Nearest Neighbors in a Case-Based Reasoning Model (사례기반추론 모델의 최근접 이웃 설정을 위한 Similarity Threshold의 사용)

  • Lee, Jae-Sik;Lee, Jin-Cheon
    • Proceedings of the Korea Inteligent Information System Society Conference
    • /
    • 2005.11a
    • /
    • pp.588-594
    • /
    • 2005
  • Case-based reasoning (CBR) is one of the data mining techniques that has been successfully applied to a variety of prediction problems. The prediction performance of a CBR system is affected by how the nearest neighbors used for prediction are determined. Therefore, setting the value of k, which determines the nearest neighbors, is one of the key factors in building a successful CBR system. Most previous studies on nearest neighbor selection use a fixed k value; such a CBR system may admit erroneous cases among the nearest neighbors when k is set too large, while setting k too small uses only some of the similar cases for prediction and can thus distort the prediction results. In this study, we propose the s-NN method, which uses a similarity threshold to determine the nearest neighbors. For the experiments, two credit datasets from the UCI (University of California, Irvine) Machine Learning Repository were used, and the experimental results show that the CBR model with s-NN outperforms the conventional CBR model with a fixed k value.
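The s-NN idea (take every case whose similarity clears a threshold as a neighbor, rather than a fixed k of them) can be sketched as below. The similarity function, the threshold value, and the toy stand-in for a credit dataset are assumptions.

```python
import numpy as np

def s_nn_predict(query, cases, labels, threshold=0.4):
    """Classify by majority vote over all cases whose similarity exceeds the threshold."""
    # Similarity defined here as 1 / (1 + Euclidean distance); an assumed choice.
    sims = 1.0 / (1.0 + np.linalg.norm(cases - query, axis=1))
    neighbors = np.flatnonzero(sims >= threshold)
    if neighbors.size == 0:                       # fall back to the single most similar case
        neighbors = np.array([np.argmax(sims)])
    votes = labels[neighbors]
    vals, counts = np.unique(votes, return_counts=True)
    return vals[np.argmax(counts)]

# Toy stand-in for a credit dataset: two numeric attributes, binary class.
rng = np.random.default_rng(4)
cases = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
labels = np.array(["good"] * 50 + ["bad"] * 50)
print(s_nn_predict(np.array([2.8, 3.1]), cases, labels, threshold=0.4))
```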
