• Title/Summary/Keyword: Nearest Neighbor Search

A Hybrid Index of Voronoi and Grid Partition for NN Search

  • Seokjin Im
    • International journal of advanced smart convergence
    • /
    • v.12 no.1
    • /
    • pp.1-8
    • /
    • 2023
  • Smart IoT over high-speed networks and high-performance smart devices has led to an explosion of ubiquitous services and applications. The Nearest Neighbor (NN) query is one of the important query types that must be supported for ubiquitous information services. To process NN queries efficiently in a wireless broadcast environment, clients must quickly determine the search space and then filter the NN out of the candidates contained in that space. In this paper, we propose a hybrid index of a Voronoi and a grid partition that provides quick search-space decisions and rapid filtering of the NN from the candidates: the grid partition supports the quick search-space decision, while the Voronoi partition provides the rapid filtering. We show the effectiveness of the proposed index by comparing it with existing indexing schemes in terms of access time and tuning time. The evaluation shows that the proposed index improves both performance parameters over the existing schemes. (An illustrative grid-then-filter sketch appears below.)
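
As a rough illustration of how a grid partition can narrow the search space before a Voronoi-style filtering step, the Python sketch below precomputes, for each grid cell, a conservative candidate list (the sites whose Voronoi region can intersect the cell) and answers an NN query by scanning only that list. The construction, parameters, and data are hypothetical and not taken from the paper.

```python
import math

def build_grid_index(sites, cell_size, extent):
    """Conservatively assign to each grid cell the sites whose Voronoi
    region can intersect it: a site survives if its minimum distance to
    the cell does not exceed the smallest maximum distance of any site
    to that cell.  One possible realization, not the paper's exact one."""
    xmin, ymin, xmax, ymax = extent
    nx = math.ceil((xmax - xmin) / cell_size)
    ny = math.ceil((ymax - ymin) / cell_size)
    index = {}
    for i in range(nx):
        for j in range(ny):
            cx0, cy0 = xmin + i * cell_size, ymin + j * cell_size
            cx1, cy1 = cx0 + cell_size, cy0 + cell_size

            def mindist(p):
                dx = max(cx0 - p[0], 0.0, p[0] - cx1)
                dy = max(cy0 - p[1], 0.0, p[1] - cy1)
                return math.hypot(dx, dy)

            def maxdist(p):
                dx = max(abs(p[0] - cx0), abs(p[0] - cx1))
                dy = max(abs(p[1] - cy0), abs(p[1] - cy1))
                return math.hypot(dx, dy)

            bound = min(maxdist(s) for s in sites)
            index[(i, j)] = [s for s in sites if mindist(s) <= bound]
    return index, (xmin, ymin, cell_size)

def nn_query(q, index, origin):
    """Locate q's grid cell (quick search-space decision) and filter the
    true NN from that cell's candidate list.  Assumes q lies inside the extent."""
    xmin, ymin, cell_size = origin
    key = (int((q[0] - xmin) // cell_size), int((q[1] - ymin) // cell_size))
    candidates = index[key]
    return min(candidates, key=lambda s: math.hypot(s[0] - q[0], s[1] - q[1]))

# Example usage with hypothetical sites
sites = [(1.0, 1.0), (4.0, 2.5), (7.5, 7.0), (2.0, 8.0)]
idx, origin = build_grid_index(sites, cell_size=2.5, extent=(0, 0, 10, 10))
print(nn_query((3.0, 3.0), idx, origin))   # -> (4.0, 2.5)
```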

A Study on the Fast Search Algorithm for Vector Quantization (벡터 양자화를 위한 고속 탐색 알고리듬에 관한 연구)

  • 지상현;김용석;이남일;강상원
    • The Journal of the Acoustical Society of Korea
    • /
    • v.22 no.4
    • /
    • pp.293-298
    • /
    • 2003
  • In this paper, we propose a fast search algorithm for nearest neighbor vector quantization (NNVQ). The proposed algorithm rejects codewords that cannot be the nearest codeword and thereby reduces the search range within the codebook. Hence it reduces the computational time and complexity of the encoding process, while providing the same SD performance as the conventional full search algorithm. We apply the proposed algorithm to the adaptive multi-rate (AMR) speech coder and to a general vector quantizer designed by the LBG algorithm. Simulation results show the effectiveness of the proposed algorithm. (A sketch of one common codeword-rejection criterion appears below.)
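
The abstract does not spell out its rejection criterion, so the sketch below uses partial distortion elimination, one common way to reject codewords that cannot be the nearest while still returning exactly the full-search result. The function name and random codebook are illustrative only, not the paper's method.

```python
import numpy as np

def nnvq_pde(x, codebook):
    """Nearest-neighbor codebook search with partial distortion elimination
    (PDE): a codeword is abandoned as soon as its partial squared error
    exceeds the best distortion found so far."""
    best_idx, best_dist = 0, float("inf")
    for i, c in enumerate(codebook):
        dist = 0.0
        for xj, cj in zip(x, c):          # accumulate dimension by dimension
            dist += (xj - cj) ** 2
            if dist >= best_dist:         # cannot be the nearest -> reject early
                break
        else:                             # loop finished without rejection
            best_idx, best_dist = i, dist
    return best_idx, best_dist

# Example usage with a random codebook
rng = np.random.default_rng(0)
codebook = rng.normal(size=(256, 10))
x = rng.normal(size=10)
idx, d = nnvq_pde(x, codebook)
assert idx == int(np.argmin(((codebook - x) ** 2).sum(axis=1)))  # matches full search
```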

A KD-Tree-Based Nearest Neighbor Search for Large Quantities of Data

  • Yen, Shwu-Huey;Hsieh, Ya-Ju
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.7 no.3
    • /
    • pp.459-470
    • /
    • 2013
  • The discovery of nearest neighbors without training in advance has many applications, such as the formation of mosaic images, image matching, image retrieval and image stitching. When the quantity of data is huge and the number of dimensions is high, the efficient identification of a nearest neighbor (NN) is very important. This study proposes a variation of the KD-tree, the arbitrary KD-tree (KDA), which is constructed without the need to evaluate variances. When the amount of data is large, multiple KDAs can be constructed efficiently, each with an independent tree structure. Tests using extended synthetic databases and real-world SIFT data show that the KDA method increases computational efficiency and produces satisfactory accuracy when solving NN problems. (A sketch of a KD-tree built without variance evaluation appears below.)
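
A minimal sketch of the general idea, assuming "arbitrary" means the split axis is chosen without computing variances (here, at random); the actual KDA construction rules may differ. The backtracking NN search is the standard KD-tree procedure.

```python
import random

class KDNode:
    __slots__ = ("point", "axis", "left", "right")
    def __init__(self, point, axis, left, right):
        self.point, self.axis, self.left, self.right = point, axis, left, right

def build_kda(points, dim, rng):
    """Build a KD-tree whose split axis is chosen at random instead of by
    maximum variance, so several independent trees can be built cheaply."""
    if not points:
        return None
    axis = rng.randrange(dim)
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return KDNode(points[mid], axis,
                  build_kda(points[:mid], dim, rng),
                  build_kda(points[mid + 1:], dim, rng))

def nn_search(node, q, best=None):
    """Standard backtracking NN search; best is (point, squared_distance)."""
    if node is None:
        return best
    d = sum((a - b) ** 2 for a, b in zip(node.point, q))
    if best is None or d < best[1]:
        best = (node.point, d)
    diff = q[node.axis] - node.point[node.axis]
    near, far = (node.left, node.right) if diff < 0 else (node.right, node.left)
    best = nn_search(near, q, best)
    if diff ** 2 < best[1]:               # search ball crosses the splitting plane
        best = nn_search(far, q, best)
    return best

# Example usage with random 8-dimensional data
rng = random.Random(1)
data = [tuple(rng.random() for _ in range(8)) for _ in range(1000)]
tree = build_kda(data, dim=8, rng=rng)
q = tuple(rng.random() for _ in range(8))
print(nn_search(tree, q)[0])
```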

Flexible Nearest Neighbor Search for Grouping kNN (그룹핑 k-NN을 위한 유연한 최근접 객체 검색)

  • Song, Doohee;Park, Kwangjin
    • Annual Conference of KIPS
    • /
    • 2015.10a
    • /
    • pp.469-470
    • /
    • 2015
  • We propose a Flexible Nearest Neighbor (FNN) search method to support Grouping k Nearest Neighbor (GkNN) queries. Unlike the conventional kNN query, a GkNN query examines all k objects requested by the querier and then retrieves the k objects whose total travel path is the smallest. The previously proposed Nearest Neighborhood (NNH) approach was also introduced to address this problem. However, the drawback of NNH is that, because the objects k and p are fixed, the distance from q to C increases in a mobile environment. The setting of FNN is similar to that of NNH. We resolve this drawback of NNH by selecting the $c_i$ closest to q among the set C and then comparing the total travel path needed to visit all objects contained in $c_i$ from q with the total travel path of FNN.

Locality-Sensitive Hashing for Data with Categorical and Numerical Attributes Using Dual Hashing

  • Lee, Keon Myung
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.14 no.2
    • /
    • pp.98-104
    • /
    • 2014
  • Locality-sensitive hashing techniques have been developed to efficiently handle nearest neighbor searches and similar-pair identification problems for large volumes of high-dimensional data. This study proposes a locality-sensitive hashing method that can be applied to nearest neighbor search problems for data sets containing both numerical and categorical attributes. The proposed method makes use of dual hashing functions, where one function is dedicated to numerical attributes and the other to categorical attributes. The method consists of creating an indexing structure for each of the dual hashing functions, gathering and combining the candidate sets, and thoroughly examining them to determine the nearest ones. The proposed method is evaluated on several synthetic data sets, and the results show that it improves performance for large amounts of data with both numerical and categorical attributes. (A sketch of a dual-hash scheme appears below.)
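
A rough sketch of the dual-hashing idea, assuming a p-stable random-projection hash for the numerical attributes and a value-sampling hash for the categorical attributes; the class name, parameters, and hash families are illustrative assumptions, not the paper's definitions. Candidates from both tables are combined and then examined exactly.

```python
import random
from collections import defaultdict

class DualHashLSH:
    """LSH with two separate hash functions: one for numerical attributes,
    one for categorical attributes (a sketch of the general idea only)."""

    def __init__(self, num_dim, cat_dim, n_proj=4, bucket_width=1.0,
                 n_cat_samples=2, seed=0):
        rng = random.Random(seed)
        self.w = bucket_width
        self.proj = [[rng.gauss(0, 1) for _ in range(num_dim)] for _ in range(n_proj)]
        self.offsets = [rng.uniform(0, bucket_width) for _ in range(n_proj)]
        self.cat_idx = rng.sample(range(cat_dim), n_cat_samples)
        self.num_table = defaultdict(list)
        self.cat_table = defaultdict(list)
        self.data = []

    def _num_key(self, x):                 # quantized random projections
        return tuple(int((sum(a * b for a, b in zip(p, x)) + o) // self.w)
                     for p, o in zip(self.proj, self.offsets))

    def _cat_key(self, c):                 # values of a sampled attribute subset
        return tuple(c[i] for i in self.cat_idx)

    def insert(self, numeric, categorical):
        i = len(self.data)
        self.data.append((numeric, categorical))
        self.num_table[self._num_key(numeric)].append(i)
        self.cat_table[self._cat_key(categorical)].append(i)

    def query(self, numeric, categorical, dist_fn):
        candidates = set(self.num_table.get(self._num_key(numeric), []))
        candidates |= set(self.cat_table.get(self._cat_key(categorical), []))
        if not candidates:
            return None
        return min(candidates,
                   key=lambda i: dist_fn((numeric, categorical), self.data[i]))

# Example: Euclidean distance on the numeric part plus Hamming on the categorical part
def mixed_dist(a, b):
    num = sum((x - y) ** 2 for x, y in zip(a[0], b[0])) ** 0.5
    ham = sum(x != y for x, y in zip(a[1], b[1]))
    return num + ham

lsh = DualHashLSH(num_dim=3, cat_dim=3)
lsh.insert([0.1, 0.2, 0.3], ["red", "small", "metal"])
lsh.insert([5.0, 5.0, 5.0], ["blue", "large", "wood"])
print(lsh.query([0.0, 0.2, 0.4], ["red", "small", "metal"], mixed_dist))  # -> 0
```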

An Efficient Multidimensional Index Structure for Parallel Environments

  • Bok Koung-Soo;Song Seok-Il;Yoo Jae-Soo
    • International Journal of Contents
    • /
    • v.1 no.1
    • /
    • pp.50-58
    • /
    • 2005
  • Multidimensional data such as images and spatial data generally require a large amount of storage space, and there is a limit to how much of such data a single workstation can store and manage. Managing the data in a parallel computing environment, an area of active research, can greatly improve performance. In this paper, we propose a parallel multidimensional index structure that exploits the parallelism of such an environment. The proposed index structure has an nP(processor)-nxmD(disk) architecture, a hybrid of the nP-nD and 1P-nD types. Its node structure increases fan-out and reduces the height of the index. A range search algorithm that maximizes I/O parallelism is also devised and applied to k-nearest neighbor queries. Various experiments show that the proposed method outperforms other parallel index structures. (A generic scatter-and-merge parallel kNN sketch appears below.)
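
The paper's nP-nxmD index is not reproduced here; the sketch below only illustrates the generic scatter-and-merge pattern behind parallel kNN search over declustered partitions, with hypothetical data and a thread pool standing in for processors and disks.

```python
import heapq
from concurrent.futures import ThreadPoolExecutor

def local_knn(partition, q, k):
    """k nearest neighbors within one data partition (one 'processor'/'disk')."""
    dist = lambda p: sum((a - b) ** 2 for a, b in zip(p, q))
    return heapq.nsmallest(k, ((dist(p), p) for p in partition))

def parallel_knn(partitions, q, k):
    """Scatter the kNN query to every partition in parallel and merge the
    local result lists into the global top-k (generic illustration only)."""
    with ThreadPoolExecutor(max_workers=len(partitions)) as pool:
        local_results = pool.map(lambda part: local_knn(part, q, k), partitions)
    return heapq.nsmallest(k, (pair for res in local_results for pair in res))

# Example usage: 4 partitions of 2-D points, round-robin declustering assumed
points = [(i * 0.37 % 7, i * 0.91 % 5) for i in range(1000)]
partitions = [points[i::4] for i in range(4)]
print(parallel_knn(partitions, q=(3.0, 2.0), k=5))
```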

Hangul Recognition Using a Hierarchical Neural Network (계층구조 신경망을 이용한 한글 인식)

  • 최동혁;류성원;강현철;박규태
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.28B no.11
    • /
    • pp.852-858
    • /
    • 1991
  • An adaptive hierarchical classifier (AHCL) for Korean character recognition using neural nets is designed. The classifier consists of two neural nets: the USACL (Unsupervised Adaptive Classifier) and the SACL (Supervised Adaptive Classifier). The USACL has an input layer and an output layer, which are fully connected; the nodes of the output layer are generated during learning by an unsupervised nearest neighbor learning rule. The SACL has an input layer, a hidden layer and an output layer; the input and hidden layers are fully connected, the hidden and output layers are partially connected, and the nodes of the SACL are generated during learning by a supervised nearest neighbor learning rule. The USACL provides a pre-attentive effect, performing a partial search instead of a full search during SACL classification to enhance processing speed. The input to the USACL and the SACL is a directional edge feature extracted with a directional receptive field. To test the performance of the AHCL, various multi-font printed Hangul characters are used for learning and testing, and the processing speed and classification rate are compared with those of a conventional LVQ (Learning Vector Quantizer), which uses the nearest neighbor learning rule. (A sketch of an unsupervised nearest-neighbor node-generation rule appears below.)
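
As an illustration of the kind of nearest-neighbor node-generation rule mentioned in the abstract, the sketch below grows prototype nodes unsupervised: the nearest node is adapted if it is close enough, otherwise a new node is created. The radius, learning rate, and data are assumptions, not the USACL's actual parameters.

```python
import numpy as np

def unsupervised_nn_nodes(samples, radius=1.0, lr=0.1):
    """Generic unsupervised nearest-neighbor node generation: adapt the
    nearest node if it lies within `radius`, otherwise create a new node."""
    nodes = [samples[0].copy()]
    for x in samples[1:]:
        dists = [np.linalg.norm(x - n) for n in nodes]
        j = int(np.argmin(dists))
        if dists[j] <= radius:
            nodes[j] += lr * (x - nodes[j])       # adapt the winning node
        else:
            nodes.append(x.copy())                # grow a new node
    return nodes

def classify(x, nodes, labels):
    """Nearest-node (1-NN) classification once the nodes have been labeled."""
    j = int(np.argmin([np.linalg.norm(x - n) for n in nodes]))
    return labels[j]

# Example usage with two hypothetical clusters of 8-D feature vectors
rng = np.random.default_rng(0)
samples = np.vstack([rng.normal(0, 0.3, (50, 8)), rng.normal(3, 0.3, (50, 8))])
nodes = unsupervised_nn_nodes(list(samples), radius=2.0)
print(len(nodes))   # a handful of prototype nodes, far fewer than 100 samples
```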

k-Nearest Neighbor Query Processing in Multi-Dimensional Indexing Structures (다차원 인덱싱 구조에서의 k-근접객체질의 처리 방안)

  • Kim Byung Gon;Oh Sung Kyun
    • Journal of the Korea Society of Computer and Information
    • /
    • v.10 no.1 s.33
    • /
    • pp.85-92
    • /
    • 2005
  • Recently, query processing techniques for multi-dimensional data such as images have been widely used to perform content-based retrieval. Range queries and nearest neighbor queries are the most widely used multi-dimensional queries. This paper proposes efficient pruning strategies for k-nearest neighbor queries in R-tree-variant indexing structures. Pruning is important in multi-dimensional index query processing because it reduces the search space. We analyze the pruning strategies and perform experiments to show their overhead and benefit, and finally propose the best way to use them. (A MINDIST branch-and-bound sketch appears below.)
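
One classic pruning strategy for kNN over R-tree-like indexes is branch-and-bound with MINDIST: entries whose minimum possible distance to the query already exceeds the current k-th best distance are never visited. The sketch below shows that strategy on a hand-built toy tree; it is not necessarily the exact set of strategies analyzed in the paper.

```python
import heapq, itertools, math

class Node:
    """Minimal R-tree-like node: an MBR plus either child nodes or data points.
    Only the kNN traversal with MINDIST pruning is sketched here."""
    def __init__(self, mbr, children=None, points=None):
        self.mbr, self.children, self.points = mbr, children or [], points or []

def mindist(q, mbr):
    """Smallest possible distance from query point q to any point inside the MBR."""
    lo, hi = mbr
    return math.sqrt(sum(max(l - x, 0.0, x - h) ** 2 for x, l, h in zip(q, lo, hi)))

def knn(root, q, k):
    counter = itertools.count()                    # tie-breaker for the heap
    heap = [(mindist(q, root.mbr), next(counter), root)]
    result = []                                    # max-heap of (-dist, point)
    while heap:
        d, _, item = heapq.heappop(heap)
        if len(result) == k and d >= -result[0][0]:
            break                                  # prune: MINDIST >= k-th best distance
        if isinstance(item, Node):
            for child in item.children:
                heapq.heappush(heap, (mindist(q, child.mbr), next(counter), child))
            for p in item.points:
                heapq.heappush(heap, (math.dist(q, p), next(counter), ("pt", p)))
        else:                                      # a data point reached the front
            heapq.heappush(result, (-d, item[1]))
            if len(result) > k:
                heapq.heappop(result)
    return sorted((-nd, p) for nd, p in result)

# Example usage with a tiny hand-built two-level tree of 2-D points
leaf1 = Node(((0, 0), (2, 2)), points=[(0.5, 0.5), (1.5, 1.2)])
leaf2 = Node(((5, 5), (8, 8)), points=[(5.5, 7.0), (7.5, 6.5)])
root = Node(((0, 0), (8, 8)), children=[leaf1, leaf2])
print(knn(root, q=(1.0, 1.0), k=2))
```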

Feature Selection for Multiple K-Nearest Neighbor classifiers using GAVaPS (GAVaPS를 이용한 다수 K-Nearest Neighbor classifier들의 Feature 선택)

  • Lee, Hee-Sung;Lee, Jae-Hun;Kim, Eun-Tai
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.18 no.6
    • /
    • pp.871-875
    • /
    • 2008
  • This paper deals with feature selection for multiple k-nearest neighbor (k-NN) classifiers using the Genetic Algorithm with Varying Population Size (GAVaPS). Because multiple k-NN classifiers are used, their feature selection problem is very hard and has a large search space. To solve this problem, we employ GAVaPS, which outperforms the simple genetic algorithm (SGA). Further, we propose an efficient method for combining multiple k-NN classifiers using GAVaPS. Experiments are performed to demonstrate the efficiency of the proposed method. (A GA-based feature-selection sketch appears below.)
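
A minimal sketch of GA-based feature selection for a k-NN classifier, using a plain fixed-size generational GA; GAVaPS itself additionally varies the population size through chromosome lifetimes, and the multi-classifier combining method is omitted. All data, names, and parameters are hypothetical.

```python
import numpy as np

def knn_accuracy(X, y, mask, k=3):
    """Leave-one-out accuracy of a k-NN classifier restricted to the
    features selected by the binary mask (the GA fitness function here)."""
    if mask.sum() == 0:
        return 0.0
    Xs = X[:, mask.astype(bool)]
    correct = 0
    for i in range(len(Xs)):
        d = np.linalg.norm(Xs - Xs[i], axis=1)
        d[i] = np.inf                              # leave one out
        votes = y[np.argsort(d)[:k]]
        correct += np.bincount(votes).argmax() == y[i]
    return correct / len(Xs)

def ga_feature_selection(X, y, pop_size=20, gens=30, p_mut=0.05, seed=0):
    """Plain generational GA over binary feature masks with k-NN accuracy
    as fitness (fitness-proportional selection, one-point crossover,
    bit-flip mutation)."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    pop = rng.integers(0, 2, (pop_size, n))
    for _ in range(gens):
        fit = np.array([knn_accuracy(X, y, ind) for ind in pop])
        probs = fit / fit.sum() if fit.sum() > 0 else None
        parents = pop[rng.choice(pop_size, size=(pop_size, 2), p=probs)]
        cuts = rng.integers(1, n, pop_size)
        children = np.array([np.concatenate([a[:c], b[c:]])
                             for (a, b), c in zip(parents, cuts)])
        children ^= (rng.random(children.shape) < p_mut)   # bit-flip mutation
        pop = children
    fit = np.array([knn_accuracy(X, y, ind) for ind in pop])
    return pop[fit.argmax()], fit.max()

# Example usage: 2 informative features among 10, hypothetical data
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 120)
X = rng.normal(size=(120, 10))
X[:, 0] += 2.0 * y                       # informative feature
X[:, 1] -= 1.5 * y                       # informative feature
mask, acc = ga_feature_selection(X, y)
print(mask, acc)
```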

A Fast Fractal Image Compression Using The Normalized Variance (정규화된 분산을 이용한 프랙탈 압축방법)

  • Kim, Jong-Koo;Hamn, Do-Yong;Wee, Young-Cheul;Kimn, Ha-Jine
    • The KIPS Transactions:PartA
    • /
    • v.8A no.4
    • /
    • pp.499-502
    • /
    • 2001
  • Fractal image coding suffers from the long search time over the domain pool, although it offers many desirable properties, including a high compression ratio. We observe that the normalized variance of a block is independent of contrast and brightness. Using this observation, we introduce a self-similar block searching method that employs d-dimensional nearest neighbor searching. The method takes O(log N) time to find the self-similar domain blocks for each range block, where N is the number of domain blocks, while its PSNR (Peak Signal-to-Noise Ratio) is similar to that of the full search method, which requires O(N) time per range block. Moreover, the image quality of the method is independent of the number of edges in the image. (A sketch combining a normalized-block feature with a KD-tree search appears below.)
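
A rough sketch of the two ingredients named in the abstract: a block feature normalized so that it is invariant to contrast and brightness changes, and a d-dimensional nearest neighbor search (here scipy's cKDTree) over the domain pool in place of the O(N) full scan per range block. Block sizes and data are illustrative, not the paper's setup.

```python
import numpy as np
from scipy.spatial import cKDTree

def normalized_feature(block):
    """Map a block to a contrast/brightness-invariant feature by removing
    its mean and dividing by its standard deviation, so that s*block + o
    yields the same feature as block."""
    b = block.astype(float).ravel()
    b -= b.mean()
    std = b.std()
    return b / std if std > 1e-12 else b

def build_domain_index(domain_blocks):
    """Index the normalized domain-block features in a KD-tree so the best
    matching domain block for a range block is found by a nearest neighbor
    query instead of a full scan (a sketch of the approach described)."""
    feats = np.array([normalized_feature(d) for d in domain_blocks])
    return cKDTree(feats)

def best_domain(range_block, tree):
    dist, idx = tree.query(normalized_feature(range_block), k=1)
    return idx, dist

# Example usage with hypothetical 4x4 blocks
rng = np.random.default_rng(0)
domains = [rng.integers(0, 256, (4, 4)) for _ in range(500)]
tree = build_domain_index(domains)
range_block = 0.5 * domains[42] + 30       # a contrast/brightness transform of block 42
idx, dist = best_domain(range_block, tree)
print(idx, round(float(dist), 6))          # expect idx == 42 with near-zero distance
```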
