• Title/Summary/Keyword: k-nearest Neighbor Learning

Calculating Attribute Weights in K-Nearest Neighbor Algorithms using Information Theory (정보이론을 이용한 K-최근접 이웃 알고리즘에서의 속성 가중치 계산)

  • Lee Chang-Hwan
    • Journal of KIISE: Software and Applications / v.32 no.9 / pp.920-926 / 2005
  • Nearest neighbor algorithms classify an unseen input instance by selecting similar cases and using the discovered membership to make predictions about the unknown features of the input instance. The usefulness of nearest neighbor algorithms has been demonstrated sufficiently in many real-world domains. In nearest neighbor algorithms, assigning proper weights to the attributes is an important issue. Therefore, in this paper, we propose a new method that automatically assigns each attribute a weight reflecting its importance with respect to the target attribute. The method has been implemented as a computer program, and its effectiveness has been tested on a number of publicly available machine learning databases.
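
The weighting idea can be sketched generically: estimate each attribute's mutual information with the class and use the normalized values as weights in a weighted Euclidean distance. The sketch below is a minimal reconstruction under that reading, not the paper's exact formulation; the histogram discretization and the toy data are assumptions.

```python
# Sketch: weighting k-NN attributes by mutual information with the class.
import numpy as np
from collections import Counter

def mutual_information(x, y, bins=10):
    """Estimate I(X; Y) for one (possibly continuous) attribute x
    against a discrete class vector y via histogram discretization."""
    x_binned = np.digitize(x, np.histogram_bin_edges(x, bins=bins))
    n = len(y)
    joint = Counter(zip(x_binned, y))
    px, py = Counter(x_binned), Counter(y)
    mi = 0.0
    for (xv, yv), c in joint.items():
        p_xy = c / n
        mi += p_xy * np.log2(p_xy / ((px[xv] / n) * (py[yv] / n)))
    return mi

def weighted_knn_predict(X_train, y_train, x_query, weights, k=3):
    """Classify x_query with a weighted Euclidean distance."""
    d = np.sqrt(((X_train - x_query) ** 2 * weights).sum(axis=1))
    nearest = np.argsort(d)[:k]
    return Counter(y_train[nearest]).most_common(1)[0][0]

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = (X[:, 0] > 0).astype(int)          # only attribute 0 is informative
w = np.array([mutual_information(X[:, j], y) for j in range(X.shape[1])])
w = w / w.sum()                        # normalize to attribute weights
print(w, weighted_knn_predict(X, y, np.array([1.0, 0.0, 0.0]), w))
```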

Fuzzy Learning Vector Quantization based on Fuzzy k-Nearest Neighbor Prototypes

  • Roh, Seok-Beom;Jeong, Ji-Won;Ahn, Tae-Chon
    • International Journal of Fuzzy Logic and Intelligent Systems / v.11 no.2 / pp.84-88 / 2011
  • In this paper, a new competition strategy for learning vector quantization is proposed. The simple competitive strategy used for learning vector quantization moves only the winning prototype, the one closest to the newly given data pattern. We propose a new learning strategy that selects the k-nearest neighbor prototypes as the winning prototypes. Selecting several prototypes as winners guarantees that the updating process occurs more frequently. The design is illustrated with numeric examples that provide detailed insight into the performance of the proposed learning strategy.
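
A minimal sketch of the k-winner update follows, assuming an FCM-style fuzzy membership over the k nearest prototypes; the paper's exact membership function and update rule may differ, and the fuzzifier m and learning rate here are illustrative.

```python
# Sketch of a k-winner LVQ step: the k nearest prototypes all move,
# weighted by fuzzy memberships derived from their distances.
import numpy as np

def fuzzy_klvq_step(prototypes, proto_labels, x, y, k=3, lr=0.05, m=2.0):
    d = np.linalg.norm(prototypes - x, axis=1) + 1e-12
    winners = np.argsort(d)[:k]
    u = d[winners] ** (-2.0 / (m - 1.0))   # FCM-style membership, fuzzifier m
    u /= u.sum()
    for rank, j in enumerate(winners):
        sign = 1.0 if proto_labels[j] == y else -1.0  # attract same class, repel others
        prototypes[j] += sign * lr * u[rank] * (x - prototypes[j])
    return prototypes

rng = np.random.default_rng(0)
protos = rng.normal(size=(6, 2))
labels = np.array([0, 0, 0, 1, 1, 1])
for xi in rng.normal(size=(200, 2)) + np.array([2.0, 0.0]):
    protos = fuzzy_klvq_step(protos, labels, xi, 1)   # stream of class-1 patterns
print(protos)
```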

COMPARATIVE ANALYSIS ON MACHINE LEARNING MODELS FOR PREDICTING KOSPI200 INDEX RETURNS

  • Gu, Bonsang;Song, Joonhyuk
    • The Pure and Applied Mathematics / v.24 no.4 / pp.211-226 / 2017
  • In this paper, machine learning models employed in various fields are discussed and applied to forecasting KOSPI200 stock index returns. The results of hyperparameter analysis of the machine learning models are also reported, and practical methods for each model are presented. As a result of the analysis, the Support Vector Machine and Artificial Neural Network showed better performance than k-Nearest Neighbor and Random Forest.
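
A sketch of a comparable experimental setup, assuming scikit-learn's SVC, MLPClassifier, KNeighborsClassifier, and RandomForestClassifier; the KOSPI200 data and the paper's hyperparameter grids are not reproduced, so synthetic direction-of-return data stands in.

```python
# Four-model comparison on a synthetic stand-in for return-direction data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 5))                 # lagged-return style features
y = (X @ rng.normal(size=5) + 0.3 * rng.normal(size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_train)
models = {
    "SVM": SVC(C=1.0, kernel="rbf"),
    "ANN": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    model.fit(scaler.transform(X_train), y_train)
    print(name, model.score(scaler.transform(X_test), y_test))
```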

Optimization of Transitive Verb-Objective Collocation Dictionary based on k-nearest Neighbor Learning (k-최근점 학습에 기반한 타동사-목적어 연어 사전의 최적화)

  • Kim, Yu-Seop;Zhang, Byoung-Tak;Kim, Yung-Taek
    • Journal of KIISE: Software and Applications / v.27 no.3 / pp.302-313 / 2000
  • In English-Korean machine translation, transitive verb-objective collocation is utilized for accurate translation of an English verbal phrase into Korean. This paper presents an algorithm for correct verb translation based on k-nearest neighbor learning. The semantic distance for the k-nearest neighbor learning is defined on WordNet. We also present algorithms for automatic collocation dictionary optimization. The algorithms extract transitive verb-objective pairs as training examples from large corpora and minimize the examples, considering the tradeoff between translation accuracy and example size. Experiments show that these algorithms optimized the collocation dictionary while keeping about 90% accuracy for the verb 'build'.
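
The core retrieval step can be sketched as follows, assuming NLTK's WordNet interface (after running nltk.download('wordnet')) and a path-similarity-based distance; the toy collocation pairs and romanized Korean translation keys are placeholders, not the paper's dictionary.

```python
# Sketch: choose a verb translation by the k nearest object nouns
# under a WordNet-based semantic distance.
from collections import Counter
from nltk.corpus import wordnet as wn

def semantic_distance(noun_a, noun_b):
    """1 - max path similarity over the nouns' synsets (larger = farther)."""
    best = 0.0
    for sa in wn.synsets(noun_a, pos=wn.NOUN):
        for sb in wn.synsets(noun_b, pos=wn.NOUN):
            best = max(best, sa.path_similarity(sb) or 0.0)
    return 1.0 - best

# toy collocation dictionary for "build": object noun -> translation key
examples = [("house", "jitda"), ("bridge", "geonseolhada"),
            ("trust", "ssahda"), ("reputation", "ssahda")]

def translate(verb_object, k=3):
    ranked = sorted(examples, key=lambda ex: semantic_distance(verb_object, ex[0]))
    return Counter(t for _, t in ranked[:k]).most_common(1)[0][0]

print(translate("tower"))   # expected to follow the "house"/"bridge" senses
```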

Enhancement of Text Classification Method (텍스트 분류 기법의 발전)

  • Shin, Kwang-Seong;Shin, Seong-Yoon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2019.05a / pp.155-156 / 2019
  • Traditional machine learning based emotion analysis methods such as Classification and Regression Tree (CART), Support Vector Machine (SVM), and k-nearest neighbor classification (kNN) suffer from limited accuracy. In this paper, we propose an improved kNN classification method; the improved method, together with data normalization, achieves the goal of improving accuracy. The three baseline classification algorithms and the improved algorithm were then compared on experimental data.
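
A minimal sketch of the kNN-plus-normalization idea on toy text data, assuming TF-IDF features (whose built-in L2 normalization plays the role of the normalization step) and cosine distance; the paper's specific improvement and data set are not reproduced.

```python
# kNN text classification with normalized TF-IDF vectors.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier

docs = ["great movie, loved it", "terrible plot, boring",
        "wonderful acting", "awful and dull", "enjoyable film"]
labels = ["pos", "neg", "pos", "neg", "pos"]

vec = TfidfVectorizer()                  # norm='l2' by default: each doc
X = vec.fit_transform(docs)              # vector is length-normalized
clf = KNeighborsClassifier(n_neighbors=3, metric="cosine").fit(X, labels)
print(clf.predict(vec.transform(["boring film"])))
```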

Optimal dwelling time prediction for package tour using K-nearest neighbor classification algorithm

  • Aria Bisma Wahyutama;Mintae Hwang
    • ETRI Journal / v.46 no.3 / pp.473-484 / 2024
  • We introduce a machine learning-based web application to help travel agents plan a package tour schedule. K-nearest neighbor (KNN) classification predicts tourists' optimal dwelling time from a variety of information to automatically generate a convenient tour schedule. A database collected in collaboration with an established travel agency is fed into the KNN algorithm, implemented in Python, and the predicted dwelling times are sent to the web application via a RESTful application programming interface provided by the Flask framework. The web application displays a page on which agents can configure the initial data, predict the optimal dwelling time, and automatically update the tour schedule. In a performance evaluation simulating this scenario on a computer running the Windows operating system, the average response time was 1.762 s and the prediction consistency was 100% over 100 iterations.
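
The serving pattern described here can be sketched with a Flask endpoint wrapping a scikit-learn KNN model; the feature names, toy training rows, and /predict route are hypothetical stand-ins for the travel-agency database and API.

```python
# Sketch: a KNN model served behind a Flask REST endpoint.
import numpy as np
from flask import Flask, jsonify, request
from sklearn.neighbors import KNeighborsClassifier

app = Flask(__name__)

# hypothetical rows: [group_size, attraction_type_id, hour_of_day] -> dwell class (min)
X = np.array([[4, 0, 10], [12, 1, 14], [2, 0, 9], [30, 2, 13]])
y = np.array([60, 90, 45, 120])
model = KNeighborsClassifier(n_neighbors=3).fit(X, y)

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]      # e.g. [8, 1, 11]
    dwell = model.predict(np.array([features]))[0]
    return jsonify({"dwelling_time_min": int(dwell)})

if __name__ == "__main__":
    app.run()
```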

Discriminant Metric Learning Approach for Face Verification

  • Chen, Ju-Chin;Wu, Pei-Hsun;Lien, Jenn-Jier James
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.2 / pp.742-762 / 2015
  • In this study, we propose a distance metric learning approach called discriminant metric learning (DML) for face verification, which addresses the binary-class problem of classifying whether or not two input images are of the same subject. The critical issue in solving this problem is determining the method used to measure the distance between two images. Among various methods, the large margin nearest neighbor (LMNN) method is a state-of-the-art algorithm. However, to compensate for LMNN's entangled data distribution due to high levels of appearance variation in unconstrained environments, DML's goal is to penalize violations of the distance relationship of negative pairs, i.e., images with different labels, while being integrated with LMNN to model the distance relation between positive pairs, i.e., images with the same label. The likelihoods of the input images, estimated using the DML and LMNN metrics, are then weighted and combined for further analysis. Additionally, rather than using the k-nearest neighbor (k-NN) classification mechanism, we propose a verification mechanism that measures the correlation of the class label distribution of neighbors to reduce the false negative rate of positive pairs. The experimental results show that DML can modify the relation of negative pairs in the original LMNN space and compensate for LMNN's performance on faces with large variance, such as pose and expression.
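
A condensed sketch of the pairwise metric-learning idea, assuming a single learned linear map and hinge losses on sampled pairs; this is a generic illustration, not the authors' DML objective or its integration with LMNN.

```python
# Sketch: learn a linear map L so that distance is ||L(a - b)||, with
# same-label pairs pulled inside a margin and different-label pairs pushed out.
import numpy as np

def pair_metric_learn(X, y, dim=2, margin=1.0, lr=0.01, steps=5000, seed=0):
    rng = np.random.default_rng(seed)
    L = rng.normal(scale=0.1, size=(dim, X.shape[1]))
    n = len(X)
    for _ in range(steps):
        i, j = rng.integers(n), rng.integers(n)
        if i == j:
            continue
        diff = X[i] - X[j]
        d2 = np.sum((L @ diff) ** 2)          # squared learned distance
        grad = 2.0 * np.outer(L @ diff, diff)  # gradient of d2 w.r.t. L
        if y[i] == y[j] and d2 > margin:       # positive pair too far apart
            L -= lr * grad
        elif y[i] != y[j] and d2 < margin:     # negative pair too close
            L += lr * grad
    return L

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(1.5, 1, (50, 4))])
y = np.array([0] * 50 + [1] * 50)
print(pair_metric_learn(X, y))
```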

Forecasting Sow's Productivity using the Machine Learning Models (머신러닝을 활용한 모돈의 생산성 예측모델)

  • Lee, Min-Soo;Choe, Young-Chan
    • Journal of Agricultural Extension & Community Development / v.16 no.4 / pp.939-965 / 2009
  • Machine learning has been identified as a promising approach to knowledge-based system development. This study examines the ability of machine learning techniques to support farmers' decision making and develops a reference model for using pig farm data. We compared five machine learning techniques: logistic regression, decision tree, artificial neural network, k-nearest neighbor, and ensemble. All models predicted the sow's productivity well across all parities, showing over 87.6% predictability. The predictability of total litter size is highest at 91.3% in the third parity and decreases as parity increases. The ensemble predicted the sow's productivity well. The neural network and logistic regression were excellent classifiers for all parities, while the decision tree and k-nearest neighbor were not. Performance varies across the models used, with up to a 104% difference in lift values. The artificial neural network and ensemble models produced the highest lift values, implying the best performance among the models.
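
The lift comparison can be illustrated with a minimal computation: the hit rate among the top-scored fraction of cases divided by the overall base rate. The cutoff fraction and the simulated scores below are assumptions, not the study's data.

```python
# Sketch: lift of a classifier's top-decile scores over the base rate.
import numpy as np

def lift_at(scores, y_true, frac=0.1):
    order = np.argsort(scores)[::-1]               # highest scores first
    top = order[: max(1, int(len(scores) * frac))]
    return y_true[top].mean() / y_true.mean()

rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=1000)                  # e.g. high/low productivity
scores = y * 0.6 + rng.random(1000) * 0.7          # a mildly informative model
print(round(lift_at(scores, y), 2))
```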

Acoustic Emission Source Classification of Finite-width Plate with a Circular Hole Defect using k-Nearest Neighbor Algorithm (k-최근접 이웃 알고리즘을 이용한 원공결함을 갖는 유한 폭 판재의 음향방출 음원분류에 대한 연구)

  • Rhee, Zhang-Kyu;Oh, Jin-Soo
    • Journal of the Korea Safety Management & Science / v.11 no.1 / pp.27-33 / 2009
  • The study of material fracture is attracting interest in the nuclear and aerospace industries from a safety viewpoint. Acoustic emission (AE) is a non-destructive testing technique for evaluating the safety of structures. Continuing previous research, all tensile tests on the pre-defected coupons were performed using a universal testing machine whose crosshead moved at a constant speed of 5 mm/min. This study evaluates AE source characterization of SM45C steel using a k-nearest neighbor classifier (k-NNC). For this, we used K-means clustering as an unsupervised learning method on the multivariate AE main data sets, and applied k-NNC as a supervised pattern recognition algorithm to the multivariate AE working data sets. As a result, the criteria of Wilk's $\lambda$, D&B(Rij), and Tou are discussed.
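
The two-stage scheme (unsupervised K-means labeling of the main data, then a supervised k-NNC on the working data) can be sketched as follows; the multivariate AE feature vectors are simulated stand-ins.

```python
# Sketch: K-means labels unlabeled AE features, then k-NN classifies new hits.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# stand-in AE features (e.g., amplitude, energy, counts, rise time)
main_data = np.vstack([rng.normal(loc=c, scale=0.5, size=(50, 4))
                       for c in (0.0, 3.0, 6.0)])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(main_data)
knnc = KNeighborsClassifier(n_neighbors=5).fit(main_data, labels)

working_data = rng.normal(loc=3.0, scale=0.5, size=(5, 4))
print(knnc.predict(working_data))                  # AE source class per hit
```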

The Effect of Data Size on the k-NN Predictability: Application to Samsung Electronics Stock Market Prediction (데이터 크기에 따른 k-NN의 예측력 연구: 삼성전자주가를 사례로)

  • Chun, Se-Hak
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.239-251 / 2019
  • Statistical methods such as moving averages, Kalman filtering, exponential smoothing, regression analysis, and ARIMA (autoregressive integrated moving average) have been used for stock market prediction. However, these statistical methods have not produced superior performance. In recent years, machine learning techniques have been widely used in stock market prediction, including artificial neural networks, SVM, and genetic algorithms. In particular, a case-based reasoning method known as k-nearest neighbor is also widely used for stock price prediction. Case-based reasoning retrieves several similar cases from previous cases when a new problem occurs and combines the class labels of the similar cases to create a classification for the new problem. However, case-based reasoning has some problems. First, it tends to search for a fixed number of neighbors in the observation space and always selects the same number of neighbors rather than the best similar neighbors for the target case, so it may take more cases into account even when fewer applicable cases exist for the subject. Second, it may select neighbors that are far from the target case. Thus, case-based reasoning does not guarantee an optimal pseudo-neighborhood for various target cases, and predictability can be degraded by deviation from the desired similar neighbors. This paper examines how the size of the learning data affects stock price predictability through k-nearest neighbor and compares the predictability of k-nearest neighbor with the random walk model according to the size of the learning data and the number of neighbors. In this study, Samsung Electronics stock prices were predicted by dividing the learning dataset into two types. For the prediction of the next day's closing price, we used four variables: opening value, daily high, daily low, and daily close. In the first experiment, data from January 1, 2000 to December 31, 2017 were used for the learning process; in the second experiment, data from January 1, 2015 to December 31, 2017 were used. The test data for both experiments run from January 1, 2018 to August 31, 2018. We compared the performance of k-NN with the random walk model using the two learning datasets. The mean absolute percentage error (MAPE) was 1.3497 for the random walk model and 1.3570 for k-NN in the first experiment, whereas in the second experiment the MAPE was 1.3497 for the random walk model and 1.2928 for k-NN. These results show that prediction power is higher when more learning data are used than when less learning data are used. This paper also shows that k-NN generally produces better predictive power than the random walk model for larger learning datasets, but not when the learning dataset is relatively small. Future studies need to consider macroeconomic variables related to stock price forecasting in addition to opening price, low price, high price, and closing price. Also, to produce better results, it is recommended that the k-nearest neighbor approach find nearest neighbors using a second-step filtering method that considers fundamental economic variables as well as a sufficient amount of learning data.
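
The experimental setup can be sketched with k-NN regression on (open, high, low, close) features predicting the next close, scored by MAPE against a random-walk baseline; the price series below is simulated, not the Samsung Electronics data, and the split point and k are illustrative.

```python
# Sketch: k-NN next-day close prediction vs. a random-walk baseline, by MAPE.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(7)
close = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 800)))   # synthetic prices
open_, high, low = close * 0.999, close * 1.005, close * 0.995
X = np.column_stack([open_, high, low, close])[:-1]
y = close[1:]                                   # next day's close

split = 700
knn = KNeighborsRegressor(n_neighbors=5).fit(X[:split], y[:split])
pred_knn = knn.predict(X[split:])
pred_rw = close[split:-1]                       # random walk: tomorrow = today

def mape(pred):
    return np.mean(np.abs((y[split:] - pred) / y[split:])) * 100

print(f"k-NN MAPE {mape(pred_knn):.4f}  random walk MAPE {mape(pred_rw):.4f}")
```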