• Title/Abstract/Keyword: kNN

Search results: 791 (processing time: 0.03 seconds)

유사도 임계치에 근거한 최근접 이웃 집합의 구성 (Formation of Nearest Neighbors Set Based on Similarity Threshold)

  • 이재식;이진천
    • 지능정보연구
    • /
    • Vol. 13, No. 2
    • /
    • pp.1-14
    • /
    • 2007
  • Case-based reasoning is one of the data mining techniques that has been applied successfully to a wide range of prediction problems. The prediction performance of a case-based reasoning system is affected by how the set of nearest neighbors used for prediction is formed. In forming this nearest neighbor set, most previous studies have adopted the k-NN method, which includes a fixed number of k cases. However, a case-based reasoning system that adopts the k-NN method suffers degraded prediction performance when k is set too large or too small. To solve this problem, this study proposes the s-NN method, which forms the nearest neighbor set using a similarity threshold itself rather than a fixed count. Experiments on data from the UCI Machine Learning Repository show that a case-based reasoning model using the s-NN method outperforms one using the k-NN method.
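The contrast between the fixed-k neighbor set and the threshold-based s-NN set can be illustrated with a short sketch. This is a minimal illustration, not the authors' implementation; the cosine similarity measure and the threshold value are assumptions made for the example.

```python
import numpy as np

def snn_neighbors(query, cases, threshold=0.8):
    """Return indices of all stored cases whose similarity to the query
    meets the threshold (s-NN), instead of a fixed number k of cases."""
    sims = cases @ query / (np.linalg.norm(cases, axis=1) * np.linalg.norm(query))
    return np.where(sims >= threshold)[0]

def knn_neighbors(query, cases, k=5):
    """Return indices of the k most similar stored cases (classic k-NN)."""
    sims = cases @ query / (np.linalg.norm(cases, axis=1) * np.linalg.norm(query))
    return np.argsort(-sims)[:k]

cases = np.random.rand(100, 4)   # stored case base
query = np.random.rand(4)
print(snn_neighbors(query, cases, threshold=0.9))  # size varies with the threshold
print(knn_neighbors(query, cases, k=5))            # always exactly 5 neighbors
```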


Plurality Rule-based Density and Correlation Coefficient-based Clustering for K-NN

  • Aung, Swe Swe;Nagayama, Itaru;Tamaki, Shiro
    • IEIE Transactions on Smart Processing and Computing
    • /
    • Vol. 6, No. 3
    • /
    • pp.183-192
    • /
    • 2017
  • k-nearest neighbor (K-NN) is a well-known classification algorithm in machine learning that labels an instance from its nearest training examples in feature space. However, K-NN is a lazy learning method. Therefore, if a K-NN-based system depends on a huge amount of historical data to achieve accurate predictions for a particular task, it gradually suffers degraded processing-time performance. We have noticed that many researchers consider only classification accuracy, but estimation speed also plays an essential role in real-time prediction systems. To compensate for this weakness, this paper proposes correlation coefficient-based clustering (CCC), aimed at improving the processing-time performance of K-NN, and plurality rule-based density (PRD), aimed at improving estimation accuracy. For the experiments, we used real datasets (breast cancer, breast tissue, heart, and iris) from the University of California, Irvine (UCI) machine learning repository. Real traffic data collected from Ojana Junction, Route 58, Okinawa, Japan, was also used to demonstrate the efficiency of the method. On these datasets, we showed better processing-time performance for the new approach compared with classical K-NN. In addition, through experiments on real-world datasets, we compared the prediction accuracy of our approach with that of density peaks clustering based on K-NN and principal component analysis (DPC-KNN-PCA).
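The processing-time idea behind such approaches, searching only within a pre-formed group of correlated training examples rather than the whole history, can be sketched as follows. This is a hedged illustration of correlation-based pre-grouping in general, not the authors' CCC/PRD procedure; the two-group setup and the use of Pearson correlation as the grouping criterion are assumptions.

```python
import numpy as np

def correlation_groups(X, n_groups=2, seed=0):
    """Group training vectors by their Pearson correlation with randomly chosen
    representative rows, so a query is later matched against one group only."""
    rng = np.random.default_rng(seed)
    reps = X[rng.choice(len(X), n_groups, replace=False)]
    corr = np.array([[np.corrcoef(x, r)[0, 1] for r in reps] for x in X])
    return reps, corr.argmax(axis=1)

def knn_in_group(query, X, y, reps, labels, k=3):
    """Classify the query with k-NN restricted to its most correlated group."""
    g = int(np.argmax([np.corrcoef(query, r)[0, 1] for r in reps]))
    idx = np.where(labels == g)[0]
    dists = np.linalg.norm(X[idx] - query, axis=1)
    nearest = idx[np.argsort(dists)[:k]]
    return np.bincount(y[nearest]).argmax()   # majority vote within the group

X = np.random.rand(200, 4)
y = (X[:, 0] > 0.5).astype(int)
reps, labels = correlation_groups(X)
print(knn_in_group(np.random.rand(4), X, y, reps, labels))
```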

STUDY ON APPLICATION OF NEURO-COMPUTER TO NONLINEAR FACTORS FOR TRAVEL OF AGRICULTURAL CRAWLER VEHICLES

  • Inaba, S.;Takase, A.;Inoue, E.;Yada, K.;Hashiguchi, K.
    • 한국농업기계학회:학술대회논문집
    • /
    • Korean Society for Agricultural Machinery, 2000: THE THIRD INTERNATIONAL CONFERENCE ON AGRICULTURAL MACHINERY ENGINEERING, Vol. II
    • /
    • pp.124-131
    • /
    • 2000
  • In this study, a NEURAL NETWORK (hereinafter referred to as NN) was applied to control the nonlinear factors in the turning movement of a crawler vehicle, and an experiment was carried out with a small model crawler vehicle to examine the applicability of the NN. Furthermore, a CHAOS NEURAL NETWORK (hereinafter referred to as CNN) was also applied to this control for comparison with the conventional NN. A CNN is especially effective for multi-variable surfaces with local minima, into which a conventional NN is apt to fall, and it is relatively well suited to nonlinear factors. A turning experiment on sloped ground was performed to estimate how well the NN and CNN handle nonlinear problems. The inclination angles of the road surface on which the vehicle traveled were 4, 8, and 12 degrees; these conditions were selected to vary the nonlinear magnitude of the turning behavior. The NN and CNN were trained on positioning data measured at every 15 degrees of turning. After learning, the sampling data at every 15 degrees were interpolated using the trained NN and CNN. The learning and simulation programs for the NN and CNN were written in C ("Association of research for algorithm of calculating machine (1992)"). As a result, both the conventional NN and the CNN were suitable for interpolating the sampling data. Moreover, when the nonlinear intensity was small, under the small-slope condition, the interpolation performance of the CNN was slightly inferior to that of the NN; however, when the nonlinear intensity was large, under the large-slope condition, the interpolation performance of the CNN was better than that of the NN.
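The interpolation step described above, training a network on positions measured every 15 degrees of turning and then querying it between the samples, can be illustrated with a small regression network. This is a hedged sketch using scikit-learn's MLPRegressor rather than the authors' C programs or a chaos neural network; the synthetic turning-radius function exists only to have data to fit.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical positioning data sampled every 15 degrees of turning
angles = np.arange(0.0, 361.0, 15.0).reshape(-1, 1)
radius = 2.0 + 0.3 * np.sin(np.radians(angles)).ravel()   # synthetic stand-in

# Train a small feed-forward network on the sampled points
net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)
net.fit(angles / 360.0, radius)          # scale the input to ease training

# Interpolate between the 15-degree samples
fine = np.arange(0.0, 361.0, 1.0).reshape(-1, 1)
predicted = net.predict(fine / 360.0)
print(predicted[:10])
```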


The Method of Continuous Nearest Neighbor Search on Trajectory of Moving Objects

  • Park, Bo-Yoon;Kim, Sang-Ho;Nam, Kwang-Woo;Ryo, Keun-Ho
    • 한국지능시스템학회:학술대회논문집
    • /
    • Korea Fuzzy Logic and Intelligent Systems Society, 2003: ISIS 2003
    • /
    • pp.467-470
    • /
    • 2003
  • When a user wants to find the objects nearest to their position, a nearest neighbor (NN) query is used. GIS applications such as navigation systems and traffic control systems require NN query processing for moving objects (MOs). MOs have trajectories, changing their positions over time. Therefore, when processing an NN query for MOs, we should be able to find the NN object as it changes continuously over the whole query time, as well as objects moving near the query trajectory. However, none of the previous works consider trajectory information between objects. We therefore propose a method for continuous NN queries over the trajectories of MOs, which we call the CTNN (continuous trajectory NN) technique. It can find the NN object that is valid at every instant over the whole query time by considering trajectory information.
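A naive way to see what a continuous NN answer over trajectories looks like is to sample the query trajectory in time and record the intervals on which the nearest object stays the same. The sketch below does exactly that; it is a hedged illustration of the problem, not the CTNN algorithm itself, and the linear trajectories are assumptions.

```python
import numpy as np

def continuous_nn(query_traj, object_trajs, times):
    """For each time step, find the object nearest to the query position and
    merge consecutive steps with the same answer into (start, end, object) intervals."""
    intervals = []
    for i, t in enumerate(times):
        dists = [np.linalg.norm(traj[i] - query_traj[i]) for traj in object_trajs]
        nearest = int(np.argmin(dists))
        if intervals and intervals[-1][2] == nearest:
            intervals[-1] = (intervals[-1][0], t, nearest)   # extend the current interval
        else:
            intervals.append((t, t, nearest))                # the NN object changed
    return intervals

times = np.linspace(0.0, 10.0, 101)
query = np.column_stack([times, np.zeros_like(times)])          # moves along the x-axis
obj_a = np.column_stack([np.zeros_like(times), times - 5.0])    # moves along the y-axis
obj_b = np.column_stack([10.0 - times, np.ones_like(times)])    # moves the opposite way
print(continuous_nn(query, [obj_a, obj_b], times))
```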


K-nn을 이용한 Hot Deck 기반의 결측치 대체 (Imputation of Missing Data Based on Hot Deck Method Using K-nn)

  • 권순창
    • 한국IT서비스학회지
    • /
    • Vol. 13, No. 4
    • /
    • pp.359-375
    • /
    • 2014
  • Researchers cannot avoid missing data when collecting data, because some respondents arbitrarily or non-arbitrarily do not answer questions in studies and experiments. Missing data not only inflate and distort standard deviations, but also impair the ease of estimating parameters and the reliability of research results. Despite the widespread use of hot deck imputation, researchers have paid it little attention, since it handles missing data in ambiguous ways. Hot deck can be complemented with k-NN, a machine learning method that can organize donor groups closest in properties to the missing data. Interested in the role of k-NN, this study imputes missing data with the hot deck method using k-NN. With hot-deck imputation based on k-NN as the study objective, listwise deletion, mean, mode, linear regression, and SVM imputation were compared and verified on nominal and ratio data types, and values reasonably close to the original values were obtained. Simulations with different numbers of neighbors and different distance measures were carried out, and k-NN achieved better performance. This study thus revisits hot deck imputation, which has failed to attract the attention of researchers, and should help in selecting non-parametric methods that are less likely to be affected by the structure of the missing data and its causes.
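The core mechanism, choosing donors for a record with a missing value from its k nearest complete records and copying a donor's observed value, can be sketched as follows. This is a hedged, simplified illustration of k-NN hot deck imputation on a single numeric column, not the study's full procedure; the random donor draw and Euclidean distance are assumptions.

```python
import numpy as np

def knn_hot_deck(X, target_col, k=3, seed=0):
    """Impute missing values in one column by copying the value of a donor
    drawn from the k nearest rows (on the other columns) that are complete."""
    rng = np.random.default_rng(seed)
    X = X.copy()
    other = [c for c in range(X.shape[1]) if c != target_col]
    missing = np.isnan(X[:, target_col])
    donors = np.where(~missing)[0]
    for i in np.where(missing)[0]:
        dists = np.linalg.norm(X[np.ix_(donors, other)] - X[i, other], axis=1)
        pool = donors[np.argsort(dists)[:k]]                 # k nearest complete rows
        X[i, target_col] = X[rng.choice(pool), target_col]   # hot deck: copy a donor value
    return X

data = np.array([[1.0, 2.0], [1.1, np.nan], [5.0, 9.0], [5.2, 8.8], [0.9, 2.1]])
print(knn_hot_deck(data, target_col=1, k=2))
```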

A STUDY ON THE PERFORMANCE OF RHODE ISLAND RED, WHITE LEGHORN AND THEIR CROSS WITH NAKED NECK CHICKEN

  • Barua, A.;Devanath, S.C.;Hamid, M.A.
    • Asian-Australasian Journal of Animal Sciences
    • /
    • Vol. 5, No. 1
    • /
    • pp.25-27
    • /
    • 1992
  • A total of 160 day-old chicks of Rhode Island Red (RIR), White Leghorn, and their crosses with Naked Neck (NN) chicken were reared up to 23 weeks of age at the Bangladesh Agricultural University Poultry Farm in order to study the economic traits of the birds. RIR had the highest body weight gain (1494.39 g), followed by White Leghorn (1392.57 g), RIR × NN (1268.9 g), and White Leghorn × NN (1266.73 g). RIR differed significantly (p < 0.05) from the other groups in body weight gain, but the differences among the other birds were not significant. RIR showed the best feed conversion ratio (4.72:1), although the differences were not significant (p > 0.05); however, RIR × NN excelled White Leghorn × NN in feed efficiency. RIR × NN had the highest livability (90%), while White Leghorn had the lowest (85%). Earlier sexual maturity was observed in White Leghorn (163 days) than in RIR (168 days), but the crossbreds were similar in age at sexual maturity. RIR were the heaviest (1538.89 g) at sexual maturity, while RIR × NN were heavier (1315.39 g) than WL × NN (1306.77 g) at sexual maturity.

k-NN Join Based on LSH in Big Data Environment

  • Ji, Jiaqi;Chung, Yeongjee
    • Journal of information and communication convergence engineering
    • /
    • Vol. 16, No. 2
    • /
    • pp.99-105
    • /
    • 2018
  • k-Nearest neighbor join (k-NN Join) is a computationally intensive algorithm that is designed to find k-nearest neighbors from a dataset S for every object in another dataset R. Most related studies on k-NN Join are based on single-computer operations. As the data dimensions and data volume increase, running the k-NN Join algorithm on a single computer cannot generate results quickly. To solve this scalability problem, we introduce the locality-sensitive hashing (LSH) k-NN Join algorithm implemented in Spark, an approach for high-dimensional big data. LSH is used to map similar data onto the same bucket, which can reduce the data search scope. In order to achieve parallel implementation of the algorithm on multiple computers, the Spark framework is used to accelerate the computation of distances between objects in a cluster. Results show that our proposed approach is fast and accurate for high-dimensional and big data.
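The bucketing idea, hashing similar vectors to the same bucket with random hyperplanes so that candidate neighbors are only searched within a bucket, can be sketched without Spark as below. This is a hedged single-machine illustration of random-hyperplane LSH for a k-NN join, not the paper's Spark implementation; the number of hyperplanes and the exact search inside each bucket are assumptions.

```python
import numpy as np
from collections import defaultdict

def lsh_knn_join(R, S, k=2, n_planes=8, seed=0):
    """Approximate k-NN join: for every row of R, return indices of k near rows of S,
    searching only inside the LSH bucket that the row of R hashes to."""
    rng = np.random.default_rng(seed)
    planes = rng.normal(size=(n_planes, R.shape[1]))       # random hyperplanes

    def signature(x):
        return tuple((planes @ x > 0).astype(int))         # one bit per hyperplane

    buckets = defaultdict(list)
    for j, s in enumerate(S):
        buckets[signature(s)].append(j)

    result = {}
    for i, r in enumerate(R):
        cand = buckets.get(signature(r), [])
        if cand:
            dists = np.linalg.norm(S[cand] - r, axis=1)
            result[i] = [cand[j] for j in np.argsort(dists)[:k]]
        else:
            result[i] = []                                  # empty bucket: no candidates
    return result

R = np.random.rand(5, 16)
S = np.random.rand(100, 16)
print(lsh_knn_join(R, S))
```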

일반엑스선검사 교육용 시뮬레이터 개발을 위한 기계학습 분류모델 비교 (Comparison of Machine Learning Classification Models for the Development of Simulators for General X-ray Examination Education)

  • 이인자;박채연;이준호
    • 대한방사선기술학회지:방사선기술과학
    • /
    • Vol. 45, No. 2
    • /
    • pp.111-116
    • /
    • 2022
  • In this study, the applicability of machine learning to the development of a simulator for general X-ray examination education is evaluated. To this end, k-nearest neighbor (kNN), support vector machine (SVM), and neural network (NN) classification models are analyzed and compared to identify the most suitable model. Image data were obtained by taking 100 photographs each for the posteroanterior (PA), posteroanterior oblique (Obl), lateral (Lat), and fan lateral (Fan lat) projections. Of the 400 acquired images, 70% were used as the training set for the machine learning models and 30% were used as the test set for evaluation, and a prediction model was constructed for classifying right-hand PA, Obl, Lat, and Fan lat images. After building classification models with kNN, SVM, and NN on this data set, the models were compared using error matrices. In the evaluation, kNN achieved an accuracy of 0.967 and an area under the curve (AUC) of 0.993, SVM an accuracy of 0.992 and an AUC of 1.000, and NN an accuracy of 0.992 and an AUC of 0.999; kNN was slightly lower, but all three models recorded high accuracy and AUC. In this study, right-hand PA, Obl, Lat, and Fan lat images were classified and predicted using the kNN, SVM, and NN machine learning classification models. SVM and NN both reached an accuracy of 0.992, with similar AUCs of 1.000 and 0.999, indicating that both models have high predictive power and are applicable to educational simulators.
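The evaluation pipeline described above, a 70/30 split, three classifiers, and comparison via error matrices and accuracy, maps directly onto standard scikit-learn calls. The sketch below uses the bundled digits dataset as a stand-in because the X-ray images themselves are not available; the dataset choice and the model hyperparameters are assumptions.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, confusion_matrix

X, y = load_digits(return_X_y=True)              # stand-in for the radiograph images
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
    "NN":  MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(name, "accuracy:", round(accuracy_score(y_te, pred), 3))
    print(confusion_matrix(y_te, pred))          # the error matrix used for comparison
```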

K_NN 분류기의 메모리 사용과 점진적 학습에 대한 연구 (A Study on the Storage Requirement and Incremental Learning of the k-NN Classifier)

  • 이형일;윤충화
    • 정보학연구
    • /
    • Vol. 1, No. 1
    • /
    • pp.65-84
    • /
    • 1998
  • Memory-based reasoning is a supervised learning technique that classifies an input pattern using its distances to stored patterns, and is therefore also called a distance-based learning algorithm. Memory-based reasoning is founded on the k-NN classifier, and learning is performed simply by storing the training patterns in memory without any additional processing. This paper proposes a new algorithm that classifies more efficiently than the conventional k-NN classifier and supports incremental learning. The proposed technique is also insensitive to noise and guarantees efficient memory usage.
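One common way to get incremental learning and smaller memory use out of a k-NN-style classifier is to store a new training pattern only when the patterns already in memory misclassify it. The sketch below follows that idea (similar in spirit to the classic IB2 rule); it is a hedged illustration and not the algorithm proposed in this paper.

```python
import numpy as np

class IncrementalNN:
    """Nearest-neighbor classifier that keeps a pattern in memory only if the
    current memory would misclassify it, so memory grows incrementally."""
    def __init__(self):
        self.X, self.y = [], []

    def predict(self, x):
        if not self.X:
            return None
        dists = np.linalg.norm(np.array(self.X) - x, axis=1)
        return self.y[int(np.argmin(dists))]

    def learn(self, x, label):
        if self.predict(x) != label:       # store only informative patterns
            self.X.append(np.asarray(x, dtype=float))
            self.y.append(label)

clf = IncrementalNN()
rng = np.random.default_rng(0)
for _ in range(200):
    x = rng.random(2)
    clf.learn(x, int(x[0] > 0.5))          # stream of training patterns
print("patterns stored:", len(clf.X), "of 200")
```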


GAVaPS를 이용한 다수 K-Nearest Neighbor classifier들의 Feature 선택 (Feature Selection for Multiple K-Nearest Neighbor classifiers using GAVaPS)

  • 이희성;이제헌;김은태
    • 한국지능시스템학회논문지
    • /
    • Vol. 18, No. 6
    • /
    • pp.871-875
    • /
    • 2008
  • This paper presents a method that uses the genetic algorithm with varying population size (GAVaPS) to select the features used by k-nearest neighbor (k-NN) classifiers. Because multiple k-NN classifiers are used, the feature selection problem has a very large search space and is difficult to solve. We therefore use GAVaPS, which is known to be more efficient than the conventional genetic algorithm (GA), to select features effectively. We also present a method for effectively combining the multiple k-NN classifiers with GAVaPS. The superiority of the proposed algorithm is demonstrated through several experiments.
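The overall loop, evolving binary feature masks whose fitness is the cross-validated accuracy of a k-NN classifier on the selected features, can be sketched with a plain fixed-population GA. GAVaPS additionally varies the population size through individual lifetimes, which this hedged sketch omits; the iris data, the GA parameters, and the single-classifier fitness are assumptions.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(0)
n_features = X.shape[1]

def fitness(mask):
    """Cross-validated k-NN accuracy using only the selected features."""
    if mask.sum() == 0:
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=5).mean()

pop = rng.integers(0, 2, size=(20, n_features))          # binary feature masks
for _ in range(10):                                       # generations
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(-scores)[:10]]               # keep the better half
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(0, 10, size=2)]
        cut = rng.integers(1, n_features)
        child = np.concatenate([a[:cut], b[cut:]])         # one-point crossover
        flip = rng.random(n_features) < 0.1                # mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", np.where(best == 1)[0], "accuracy:", fitness(best))
```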