• Title/Summary/Keyword: Nearest Neighbors

Gait Recognition Based on GF-CNN and Metric Learning

  • Wen, Junqin
    • Journal of Information Processing Systems / v.16 no.5 / pp.1105-1112 / 2020
  • Gait recognition, as a promising biometric, can be used in video-based surveillance and other security systems. However, due to the complexity of leg movement and differences in external sampling conditions, gait recognition still faces many open problems. In this paper, an improved convolutional neural network (CNN) based on Gabor filters is therefore proposed for gait recognition. First, a gait feature extraction layer based on Gabor filters is inserted into a traditional CNN and used to extract gait features from gait silhouette images. Then, for gait classification, the CNN output is taken as input, metric learning techniques are used to calculate the distance between two gaits, and classification is performed with a k-nearest neighbors classifier. Finally, several experiments on two open-access gait datasets demonstrate that the method achieves state-of-the-art correct recognition rates on the OULP and CASIA-B datasets.
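A minimal sketch of the general idea described above, assuming a fixed Gabor filter bank as the feature-extraction front end for silhouettes and a plain k-NN classifier; the parameters, pooling, and Euclidean metric are illustrative stand-ins, not the paper's exact GF-CNN or learned metric.

```python
# Hypothetical sketch: Gabor filter bank features for gait silhouettes + k-NN.
import numpy as np
from scipy.ndimage import convolve
from sklearn.neighbors import KNeighborsClassifier

def gabor_kernel(theta, ksize=15, sigma=3.0, lambd=8.0, gamma=0.5):
    """Real part of a 2-D Gabor kernel at orientation theta (radians)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lambd)

def gabor_features(silhouette, n_orientations=4):
    """Convolve a silhouette with a small Gabor bank and block-average each response."""
    feats = []
    for k in range(n_orientations):
        response = convolve(silhouette.astype(float), gabor_kernel(np.pi * k / n_orientations))
        h, w = response.shape
        pooled = response[:h - h % 8, :w - w % 8].reshape(8, h // 8, 8, w // 8).mean(axis=(1, 3))
        feats.append(pooled.ravel())
    return np.concatenate(feats)

# toy data: random "silhouettes" with integer subject labels
rng = np.random.default_rng(0)
X = np.stack([gabor_features(rng.random((64, 64))) for _ in range(40)])
y = rng.integers(0, 4, size=40)

knn = KNeighborsClassifier(n_neighbors=3, metric="euclidean")  # a learned metric would replace this
knn.fit(X, y)
print(knn.predict(X[:5]))
```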

KNN-based Image Annotation by Collectively Mining Visual and Semantic Similarities

  • Ji, Qian;Zhang, Liyan;Li, Zechao
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.9 / pp.4476-4490 / 2017
  • The aim of image annotation is to determine labels that accurately describe the semantic information of images. Many approaches have been proposed to automate the image annotation task with good performance; however, in most cases the semantic similarities of images are ignored. To this end, we propose a novel Visual-Semantic Nearest Neighbor (VS-KNN) method that collectively explores visual and semantic similarities for image annotation. First, for each label, visual nearest neighbors of a given test image are constructed from the training images associated with that label. Second, each neighboring subset is determined by mining the semantic similarity together with the visual similarity. Finally, the relevance between images and labels is determined by maximum a posteriori estimation. Extensive experiments were conducted on three widely used image datasets, and the results show the effectiveness of the proposed method in comparison with state-of-the-art methods.
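An illustrative sketch of the per-label visual-neighbor idea only: for each candidate label, the test image's nearest neighbors are taken from training images tagged with that label, and the label is scored by how close those neighbors are. The distance-based score is a simple stand-in for the paper's semantic-similarity mining and maximum a posteriori estimation; all features and labels below are synthetic.

```python
# Illustrative label scoring via per-label visual nearest neighbors (toy data).
import numpy as np
from sklearn.neighbors import NearestNeighbors

def score_labels(test_feat, train_feats, train_labels, all_labels, k=5):
    """Return a relevance score per label for one test image."""
    scores = {}
    for label in all_labels:
        idx = [i for i, labs in enumerate(train_labels) if label in labs]
        if not idx:
            scores[label] = 0.0
            continue
        nn = NearestNeighbors(n_neighbors=min(k, len(idx))).fit(train_feats[idx])
        dists, _ = nn.kneighbors(test_feat[None, :])
        scores[label] = float(np.mean(1.0 / (1.0 + dists)))  # closer neighbors -> higher score
    return scores

rng = np.random.default_rng(1)
train_feats = rng.random((60, 32))                      # toy visual features
vocab = ["sky", "tree", "car", "water"]
train_labels = [set(rng.choice(vocab, size=2, replace=False)) for _ in range(60)]
print(score_labels(rng.random(32), train_feats, train_labels, vocab))
```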

Design of Free Viewpoint TV System with MS Kinects (MS Kinect 를 이용한 Free Viewpoint TV System 설계)

  • Lee, Jun Hyeop;Yang, Yun Mo;Oh, Byung Tae
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2015.07a / pp.122-124 / 2015
  • This paper presents the design and implementation of a Free Viewpoint TV system with multiple Microsoft Kinects. It generates a virtual view between two views in real time by manipulating the texture and depth images captured by the Kinects. Interference between the Kinects leaves holes in the depth images; to remove them, we propose a hole-filling scheme based on nearest-neighbor filling and inpainting. As a result, holes generated by interference are filled with new depth values calculated from their neighbors. The filled depth values are not exact, but they are close to those of their neighbors. Depending on how many times the nearest-neighbor method is applied, the border of an object's edge may shift inward or outward.
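A minimal sketch of nearest-neighbor hole filling for a depth map, using a distance transform so that each invalid pixel copies the value of its closest valid pixel; the inpainting stage and the Kinect capture pipeline are not reproduced, and the depth values are synthetic.

```python
# Nearest-neighbor hole filling: each zero-valued depth pixel takes the
# value of its closest valid (non-zero) pixel. Inpainting is omitted here.
import numpy as np
from scipy.ndimage import distance_transform_edt

def fill_depth_holes(depth):
    """Fill zero-valued holes with the depth of the nearest non-zero pixel."""
    holes = depth == 0
    # indices of the nearest valid pixel for every location
    _, (iy, ix) = distance_transform_edt(holes, return_indices=True)
    filled = depth.copy()
    filled[holes] = depth[iy[holes], ix[holes]]
    return filled

depth = np.full((6, 6), 1200, dtype=np.uint16)
depth[2:4, 2:4] = 0                      # a hole caused by interference
print(fill_depth_holes(depth))
```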

A Classification Algorithm Based on Data Clustering and Data Reduction for Intrusion Detection System over Big Data

  • Wang, Qiuhua;Ouyang, Xiaoqin;Zhan, Jiacheng
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.7 / pp.3714-3732 / 2019
  • With the rapid development of networks, Intrusion Detection Systems (IDS) play an increasingly important role in network applications. Many data mining algorithms are used to build IDSs. However, with the advent of the big data era, massive amounts of data are generated, and when dealing with large-scale data sets most data mining algorithms suffer from a high computational burden that makes the IDS much less efficient. To build an efficient IDS over big data, we propose a classification algorithm based on data clustering and data reduction. In the training stage, the training data are divided into clusters of similar size by the Mini Batch K-Means algorithm, and the center of each cluster is used as its index. Then, representative instances are selected for each cluster to perform data reduction, and the clusters of representative instances are used to build a K-Nearest Neighbor (KNN) detection model. In the detection stage, clusters are sorted according to the distances between the test sample and the cluster indexes, and the k nearest clusters are obtained, within which the k nearest neighbors are found. Experimental results show that searching for neighbors via cluster indexes reduces the computational complexity significantly, and that classification with the reduced data of representative instances not only improves efficiency but also maintains high accuracy.
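A hedged sketch of the cluster-indexed k-NN search described above: the training data are partitioned with Mini Batch K-Means, each cluster is indexed by its center, and at detection time only the few clusters whose centers are closest to the query are searched. The paper's representative-instance selection (data reduction) is simplified here to keeping every point per cluster, and the labels are toy values.

```python
# Cluster-indexed k-NN: search neighbors only inside the nearest clusters.
import numpy as np
from sklearn.cluster import MiniBatchKMeans

rng = np.random.default_rng(2)
X_train = rng.random((5000, 10))
y_train = rng.integers(0, 2, size=5000)          # 0 = normal, 1 = attack (toy labels)

km = MiniBatchKMeans(n_clusters=50, random_state=0).fit(X_train)
members = [np.where(km.labels_ == c)[0] for c in range(50)]

def knn_predict(x, k_clusters=3, k_neighbors=5):
    """Vote among the k nearest neighbors found in the k nearest clusters."""
    nearest_clusters = np.argsort(np.linalg.norm(km.cluster_centers_ - x, axis=1))[:k_clusters]
    cand = np.concatenate([members[c] for c in nearest_clusters])
    dists = np.linalg.norm(X_train[cand] - x, axis=1)
    nn = cand[np.argsort(dists)[:k_neighbors]]
    return np.bincount(y_train[nn]).argmax()

x_test = rng.random(10)
print(knn_predict(x_test))
```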

Multi-Style License Plate Recognition System using K-Nearest Neighbors

  • Park, Soungsill;Yoon, Hyoseok;Park, Seho
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.5 / pp.2509-2528 / 2019
  • There are various styles of license plates for different countries and use cases, which require style-specific methods. In this paper, we propose and illustrate a multi-style license plate recognition system. The proposed system performs a series of processes for license plate candidate detection, structure classification, character segmentation, and character recognition. Specifically, we introduce a license plate structure classification process that identifies the plate style before the character segmentation and recognition processes. We use a K-Nearest Neighbors algorithm with pre-training steps to recognize the numbers and characters on multi-style license plates. To show the feasibility of our multi-style license plate recognition system, we evaluate it on multi-style license plates covering single-line and double-line layouts and different backgrounds and character colors on Korean and U.S. license plates. For the evaluation of Korean license plate recognition, we used a 50-minute input video containing 138 vehicles with 6 different license plate styles, where each frame of the video is processed through the series of license plate recognition processes. The results of the two experiments show that various license plate styles can be recognized within 50 ms of processing time and with over 99% accuracy, and that the system can be extended through additional learning and training steps.
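An illustrative sketch of the character-recognition stage only, assuming plate detection, structure classification, and segmentation have already produced character patches: each patch is resized to a fixed size, flattened, and classified with k-NN. The patch size, label set, and training data below are synthetic placeholders, not the paper's setup.

```python
# k-NN character recognition on flattened, fixed-size character patches (toy data).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def to_vector(char_img, size=(20, 10)):
    """Nearest-neighbor resize a binary character image and flatten it."""
    rows = np.linspace(0, char_img.shape[0] - 1, size[0]).astype(int)
    cols = np.linspace(0, char_img.shape[1] - 1, size[1]).astype(int)
    return char_img[np.ix_(rows, cols)].astype(float).ravel()

# toy training set: random "character" images labeled with their symbol
rng = np.random.default_rng(3)
labels = list("0123456789")
X = np.stack([to_vector(rng.integers(0, 2, size=(40, 20))) for _ in range(200)])
y = [labels[i % 10] for i in range(200)]

clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)
print(clf.predict(X[:3]))       # predicted symbols for three training patches
```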

Nearest-Neighbor Collaborative Filtering Using Dimensionality Reduction by Non-negative Matrix Factorization (비부정 행렬 인수분해 차원 감소를 이용한 최근 인접 협력적 여과)

  • Ko, Su-Jeong
    • The KIPS Transactions: Part B / v.13B no.6 s.109 / pp.625-632 / 2006
  • Collaborative filtering is a technology that aims at learning predictive models of user preferences. Collaborative filtering systems have succeeded in the e-commerce market, but they suffer from high dimensionality and sparsity. In this paper we propose a nearest neighbor collaborative filtering method using non-negative matrix factorization (NNMF). We replace the missing values in the user-item matrix by using the user variance coefficient method as preprocessing for matrix decomposition, and then apply non-negative factorization to the matrix. The non-negative decomposition represents users as semantic vectors and classifies the users into groups based on semantic relations. We compute the similarity between users with a vector similarity measure and select the nearest neighbors based on that similarity. We then predict the ratings of items that a new user has not rated from the values that the nearest neighbors gave those items.
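A hedged sketch of the overall flow: missing ratings are filled (here simply with user means, standing in for the paper's user-variance-coefficient preprocessing), the user-item matrix is factorized with NMF, users are compared by cosine similarity in the latent space, and a new rating is predicted from the nearest neighbors' ratings. The rating matrix and parameters are toy values.

```python
# NMF-based neighbor selection for collaborative filtering (toy example).
import numpy as np
from sklearn.decomposition import NMF
from sklearn.metrics.pairwise import cosine_similarity

R = np.array([[5, 3, 0, 1],        # 0 marks a missing rating
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)

filled = R.copy()
for u in range(R.shape[0]):
    filled[u][R[u] == 0] = R[u][R[u] > 0].mean()   # mean imputation as preprocessing stand-in

W = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0).fit_transform(filled)
sim = cosine_similarity(W)                          # user-user similarity in latent space

def predict(user, item, k=2):
    """Weighted average of the k most similar users' ratings for the item."""
    others = [v for v in np.argsort(-sim[user]) if v != user and R[v, item] > 0][:k]
    weights = sim[user, others]
    return float(np.dot(weights, R[others, item]) / weights.sum())

print(predict(user=1, item=2))
```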

A study on neighbor selection methods in k-NN collaborative filtering recommender system (근접 이웃 선정 협력적 필터링 추천시스템에서 이웃 선정 방법에 관한 연구)

  • Lee, Seok-Jun
    • Journal of the Korean Data and Information Science Society / v.20 no.5 / pp.809-818 / 2009
  • The collaborative filtering approach predicts an active user's preference for specific items transacted in e-commerce by using other users' preference information. To improve the prediction accuracy of collaborative filtering, enough user preference information must be gathered. However, too much preference information can adversely affect prediction accuracy, and too little preference information can likewise degrade it. This research suggests a method for deciding a suitable number of neighbor users for the collaborative filtering algorithm, improving on existing k-nearest-neighbor selection methods. The result of this research provides a useful method for improving prediction accuracy and also refines an exploratory data analysis approach for deciding the appropriate number of nearest neighbors.
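A generic sketch of the underlying question, not the paper's procedure: the prediction error of a simple user-based collaborative filter is evaluated on held-out ratings for several neighborhood sizes k, and the k with the lowest error would be chosen. The rating matrix, correlation-based similarity, and MAE metric are all illustrative assumptions.

```python
# Choosing the neighborhood size k by held-out error (generic illustration).
import numpy as np

rng = np.random.default_rng(4)
R = rng.integers(1, 6, size=(50, 30)).astype(float)   # toy dense rating matrix
mask = rng.random(R.shape) < 0.2                       # 20% of ratings held out
train = np.where(mask, np.nan, R)

def predict(u, i, k):
    """Mean rating of the k most similar users (by correlation) who rated item i."""
    sims = []
    for v in range(R.shape[0]):
        if v == u or np.isnan(train[v, i]):
            continue
        common = ~np.isnan(train[u]) & ~np.isnan(train[v])
        if common.sum() < 2:
            continue
        c = np.corrcoef(train[u, common], train[v, common])[0, 1]
        if not np.isnan(c):
            sims.append((c, v))
    top = sorted(sims, reverse=True)[:k]
    return np.mean([train[v, i] for _, v in top]) if top else np.nanmean(train[u])

for k in (5, 10, 20):
    errs = [abs(R[u, i] - predict(u, i, k)) for u, i in zip(*np.where(mask))]
    print(f"k={k:2d}  MAE={np.mean(errs):.3f}")
```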

Development of an Evaluation Index for Identifying Freeway Traffic Safety Based on Integrating RWIS and VDS Data (기상 및 교통 자료를 이용한 교통류 안전성 판단 지표 개발)

  • Park, Hyunjin;Joo, Shinhye;Oh, Cheol
    • Journal of Korean Society of Transportation / v.32 no.5 / pp.441-451 / 2014
  • This study proposes a novel performance measure, referred to as the Hazardous Spacing Index (HSI), for evaluating the safety of a freeway traffic stream. The basic principle of the proposed methodology is to investigate, at every time step, whether drivers would have a sufficient stopping sight distance (SSD) under limited visibility conditions to eliminate rear-end crash potential. Road Weather Information System (RWIS) and Vehicle Detection System (VDS) data were used to derive the visibility distance (VD) and the SSD, respectively. Moreover, the K-Nearest Neighbors (KNN) method was adopted to predict both VD and SSD when estimating predictive HSIs, which would be used to trigger advance warning information and encourage safer driving. The outcome of this study is also expected to be useful for monitoring freeway traffic streams in terms of safety.
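A minimal sketch of the core comparison: a time step is flagged as hazardous when the stopping sight distance implied by detector speeds exceeds the visibility distance reported by the weather station. The SSD formula below is the standard reaction-plus-braking form with illustrative parameter values, not necessarily the exact formulation or thresholds used in the paper.

```python
# Flag a segment when stopping sight distance (SSD) exceeds visibility distance (VD).
def stopping_sight_distance(speed_kmh, reaction_s=2.5, decel_ms2=3.4):
    """SSD in meters for a given speed in km/h (reaction + braking distance)."""
    v = speed_kmh / 3.6                       # convert to m/s
    return v * reaction_s + v**2 / (2 * decel_ms2)

def hazardous(speed_kmh, visibility_m):
    """True when drivers cannot stop within the visible distance."""
    return stopping_sight_distance(speed_kmh) > visibility_m

# example: 100 km/h traffic under 120 m visibility (e.g., fog)
print(stopping_sight_distance(100.0))   # roughly 183 m
print(hazardous(100.0, 120.0))          # True -> trigger advance warning
```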

The Effect of Data Size on the k-NN Predictability: Application to Samsung Electronics Stock Market Prediction (데이터 크기에 따른 k-NN의 예측력 연구: 삼성전자주가를 사례로)

  • Chun, Se-Hak
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.239-251 / 2019
  • Statistical methods such as moving averages, Kalman filtering, exponential smoothing, regression analysis, and ARIMA (autoregressive integrated moving average) have been used for stock market prediction, but they have not produced superior performance. In recent years, machine learning techniques have been widely applied to stock market prediction, including artificial neural networks, SVMs, and genetic algorithms. In particular, a case-based reasoning method known as the k-nearest neighbor is also widely used for stock price prediction. Case-based reasoning retrieves several similar cases from previous cases when a new problem occurs, and combines the class labels of the similar cases to create a classification for the new problem. However, case-based reasoning has some problems. First, it tends to search for a fixed number of neighbors in the observation space and always selects the same number of neighbors rather than the best similar neighbors for the target case, so it may take more cases into account even when fewer cases are applicable for a given problem. Second, it may select neighbors that are far away from the target case. Thus, case-based reasoning does not guarantee an optimal pseudo-neighborhood for various target cases, and predictability can be degraded by deviation from the desired similar neighbors. This paper examines how the size of the learning data affects stock price predictability through the k-nearest neighbor approach, and compares the predictability of k-nearest neighbor with that of the random walk model according to the size of the learning data and the number of neighbors. In this study, Samsung Electronics stock prices were predicted by dividing the learning dataset into two types. For the prediction of the next day's closing price, we used four variables: opening price, daily high, daily low, and daily close. In the first experiment, data from January 1, 2000 to December 31, 2017 were used for the learning process. In the second experiment, data from January 1, 2015 to December 31, 2017 were used. The test data cover January 1, 2018 to August 31, 2018 for both experiments. We compared the performance of k-NN with the random walk model using the two learning datasets. The mean absolute percentage error (MAPE) was 1.3497 for the random walk model and 1.3570 for k-NN in the first experiment, when the learning data were small, whereas the MAPE was 1.3497 for the random walk model and 1.2928 for k-NN in the second experiment, when the learning data were large. These results show that the prediction power is higher when more learning data are used than when less learning data are used. This paper also shows that k-NN generally produces better predictive power than the random walk model for larger learning datasets and does not for relatively small learning datasets. Future studies need to consider macroeconomic variables related to stock price forecasting in addition to the opening, high, low, and closing prices. Also, to produce better results, it is recommended that the k-nearest neighbor find nearest neighbors using a second-step filtering method that considers fundamental economic variables as well as a sufficient amount of learning data.
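A hedged sketch of the evaluation framework only: a k-NN regressor predicts the next day's close from the current day's OHLC features and is compared by MAPE against a random-walk baseline that carries the last close forward. The price series below is synthetic; the paper's Samsung Electronics data, date splits, and exact neighbor counts are not reproduced.

```python
# k-NN regression on OHLC features vs. a random-walk baseline, compared by MAPE.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(5)
close = 40000 + np.cumsum(rng.normal(0, 300, size=600))       # synthetic price path
high, low = close * 1.01, close * 0.99
open_ = np.roll(close, 1) * (1 + rng.normal(0, 0.002, size=600))

X = np.column_stack([open_, high, low, close])[1:-1]          # today's OHLC
y = close[2:]                                                 # next day's close
split = 500
knn = KNeighborsRegressor(n_neighbors=5).fit(X[:split], y[:split])

def mape(actual, pred):
    return 100 * np.mean(np.abs((actual - pred) / actual))

rw_pred = close[1:-1][split:]                                 # random walk: carry last close forward
print("k-NN MAPE:", round(mape(y[split:], knn.predict(X[split:])), 3))
print("RW   MAPE:", round(mape(y[split:], rw_pred), 3))
```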

A KD-Tree-Based Nearest Neighbor Search for Large Quantities of Data

  • Yen, Shwu-Huey;Hsieh, Ya-Ju
    • KSII Transactions on Internet and Information Systems (TIIS) / v.7 no.3 / pp.459-470 / 2013
  • The discovery of nearest neighbors, without training in advance, has many applications, such as the formation of mosaic images, image matching, image retrieval, and image stitching. When the quantity of data is huge and the number of dimensions is high, efficient identification of a nearest neighbor (NN) is very important. This study proposes a variation of the KD-tree, the arbitrary KD-tree (KDA), which is constructed without the need to evaluate variances. Multiple KDAs can be constructed efficiently and possess independent tree structures when the amount of data is large. Tests on extended synthetic databases and real-world SIFT data show that the KDA method increases computational efficiency and produces satisfactory accuracy when solving NN problems.
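A minimal sketch of the idea that motivates the KDA: a KD-tree whose split dimension is chosen arbitrarily (here, at random) instead of by evaluating variances, so several independent trees can be built cheaply. The exact NN search with backtracking below is a generic KD-tree search, not the paper's KDA algorithm.

```python
# KD-tree with an arbitrary (random) split dimension and exact NN search.
import numpy as np

class Node:
    __slots__ = ("point", "dim", "left", "right")
    def __init__(self, point, dim, left, right):
        self.point, self.dim, self.left, self.right = point, dim, left, right

def build(points, rng):
    if len(points) == 0:
        return None
    dim = rng.integers(points.shape[1])          # arbitrary split dimension, no variance check
    order = np.argsort(points[:, dim])
    mid = len(points) // 2
    return Node(points[order[mid]], dim,
                build(points[order[:mid]], rng),
                build(points[order[mid + 1:]], rng))

def nearest(node, q, best=None, best_d=np.inf):
    if node is None:
        return best, best_d
    d = np.linalg.norm(node.point - q)
    if d < best_d:
        best, best_d = node.point, d
    near, far = ((node.left, node.right) if q[node.dim] < node.point[node.dim]
                 else (node.right, node.left))
    best, best_d = nearest(near, q, best, best_d)
    if abs(q[node.dim] - node.point[node.dim]) < best_d:   # could the far side hold a closer point?
        best, best_d = nearest(far, q, best, best_d)
    return best, best_d

rng = np.random.default_rng(6)
pts = rng.random((1000, 8))
tree = build(pts, rng)
q = rng.random(8)
p, d = nearest(tree, q)
print(d, np.min(np.linalg.norm(pts - q, axis=1)))   # the two distances should match
```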