• Title/Summary/Keyword: Feature-based image matching

Automatic Registration between EO and IR Images of KOMPSAT-3A Using Block-based Image Matching

  • Kang, Hyungseok
    • Korean Journal of Remote Sensing / v.36 no.4 / pp.545-555 / 2020
  • This paper focuses on automatic image registration between EO (Electro-Optical) and IR (InfraRed) satellite images with different spectral properties, using a block-based approach and a simple preprocessing technique to enhance the performance of feature matching. When unpreprocessed EO and IR images from the KOMPSAT-3A satellite are applied to local feature matching algorithms (Scale Invariant Feature Transform, Speeded-Up Robust Features, etc.), registration generally fails because few feature points are detected or, even when many feature points are detected, the pairs are mismatched. In this paper, we propose a new image registration method that improves the performance of feature matching through a block-based registration process on the image divided into nine blocks and a preprocessing technique based on adaptive histogram equalization. The proposed method showed better performance, in visual inspection and I-RMSE, than registration without the proposed technique. This study can be applied to automatic image registration between various images acquired from different sensors.
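
As a concrete illustration of the workflow described above, the sketch below applies adaptive histogram equalization (OpenCV's CLAHE) and then runs SIFT matching independently on a 3x3 block grid. It is only a minimal approximation of the paper's pipeline: the preprocessing parameters, feature algorithm choices, and ratio-test threshold are assumptions, and the inputs are assumed to be co-sized 8-bit grayscale arrays.

```python
import cv2
import numpy as np

def match_eo_ir_blockwise(eo_gray, ir_gray, grid=(3, 3)):
    """Rough sketch: CLAHE preprocessing + per-block SIFT matching of EO/IR images."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    eo_eq, ir_eq = clahe.apply(eo_gray), clahe.apply(ir_gray)

    sift = cv2.SIFT_create()
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    h, w = eo_eq.shape
    bh, bw = h // grid[0], w // grid[1]
    pairs = []

    for r in range(grid[0]):
        for c in range(grid[1]):
            y0, x0 = r * bh, c * bw
            kp1, des1 = sift.detectAndCompute(eo_eq[y0:y0 + bh, x0:x0 + bw], None)
            kp2, des2 = sift.detectAndCompute(ir_eq[y0:y0 + bh, x0:x0 + bw], None)
            if des1 is None or des2 is None or len(des1) < 2 or len(des2) < 2:
                continue  # not enough features in this block
            for knn in matcher.knnMatch(des1, des2, k=2):
                if len(knn) == 2 and knn[0].distance < 0.75 * knn[1].distance:
                    p_eo = np.array(kp1[knn[0].queryIdx].pt) + (x0, y0)
                    p_ir = np.array(kp2[knn[0].trainIdx].pt) + (x0, y0)
                    pairs.append((p_eo, p_ir))  # match in full-image coordinates
    return pairs
```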

Enhancement of Stereo Feature Matching using Feature Windows and Feature Links (특징창과 특징링크를 이용한 스테레오 특징점의 정합 성능 향상)

  • Kim, Chang-Il;Park, Soon-Yong
    • The KIPS Transactions:PartB / v.19B no.2 / pp.113-122 / 2012
  • This paper presents a new stereo matching technique based on the matching of feature windows and feature links. The proposed method uses the FAST feature detector to find image features in stereo images and determines the correspondences of the detected features between them. We define a feature window as an image region containing several image features. The proposed technique consists of two matching steps. First, a feature window is defined in a standard image and its correspondence is found in a reference image. Second, the corresponding features between the matched windows are determined using the feature link technique. If there is no correspondence for an image feature in the standard image, its disparity is interpolated from neighboring feature sets. We evaluate the accuracy of the proposed technique by comparing our results with the ground truth of a stereo image database. We also compare the matching accuracy and computation time with two conventional feature-based stereo matching techniques.
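
The feature-window and feature-link stages are not spelled out in the abstract, so the sketch below is only a simplified stand-in: FAST keypoints with ORB descriptors matched across a rectified stereo pair, keeping matches that stay near the same scan line. All thresholds are illustrative.

```python
import cv2

def fast_stereo_matches(left_gray, right_gray, max_row_diff=2.0):
    """Simplified stand-in: FAST keypoints matched across a rectified stereo pair."""
    fast = cv2.FastFeatureDetector_create(threshold=25)
    orb = cv2.ORB_create()  # used here only to compute descriptors for FAST keypoints

    kp_l, des_l = orb.compute(left_gray, fast.detect(left_gray, None))
    kp_r, des_r = orb.compute(right_gray, fast.detect(right_gray, None))

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    results = []
    for m in matcher.match(des_l, des_r):
        (xl, yl), (xr, yr) = kp_l[m.queryIdx].pt, kp_r[m.trainIdx].pt
        # keep matches on (almost) the same scan line with positive disparity
        if abs(yl - yr) <= max_row_diff and xl > xr:
            results.append(((xl, yl), (xr, yr), xl - xr))
    return results
```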

CNN-based Opti-Acoustic Transformation for Underwater Feature Matching (수중에서의 특징점 매칭을 위한 CNN기반 Opti-Acoustic변환)

  • Jang, Hyesu;Lee, Yeongjun;Kim, Giseop;Kim, Ayoung
    • The Journal of Korea Robotics Society / v.15 no.1 / pp.1-7 / 2020
  • In this paper, we introduce a methodology that utilizes a deep learning-based front end to enhance underwater feature matching. Both optical cameras and sonar are widely used sensors in underwater research; however, each has its own weaknesses, such as lighting conditions and turbidity for the optical camera and noise for sonar. To overcome these problems, we propose an opti-acoustic transformation method. Since feature detection in sonar images is challenging, we convert the sonar image to an optic-style image. While preserving the main content of the sonar image, a CNN-based style transfer method changes the style of the image in a way that facilitates feature detection. Finally, we verify the result using cosine similarity comparison and feature matching against the original optic image.
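
The CNN style-transfer network itself is beyond the scope of a short example, but the verification step mentioned at the end of the abstract can be sketched roughly as below. The descriptor used for the cosine comparison is not stated in the abstract, so SIFT is assumed, and the style-transferred sonar image is treated as an ordinary grayscale input.

```python
import cv2
import numpy as np

def descriptor_cosine_similarity(optic_gray, transformed_gray):
    """Sketch: average cosine similarity of matched descriptors between the
    original optic image and the style-transferred sonar image."""
    sift = cv2.SIFT_create()
    _, des_o = sift.detectAndCompute(optic_gray, None)
    _, des_t = sift.detectAndCompute(transformed_gray, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    sims = []
    for m in matcher.match(des_o, des_t):
        a, b = des_o[m.queryIdx], des_t[m.trainIdx]
        sims.append(float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)))
    return float(np.mean(sims)) if sims else 0.0
```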

A Fast Image Matching Method for Oblique Video Captured with UAV Platform

  • Byun, Young Gi;Kim, Dae Sung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.38 no.2 / pp.165-172 / 2020
  • There is growing interest in vision-based video image matching owing to the continuing development of unmanned systems. The purpose of this paper is to develop a fast and effective matching technique for UAV oblique video images. We first extract initial matching points using the NCC (Normalized Cross-Correlation) algorithm and improve its computational efficiency using integral images. Furthermore, we develop a triangulation-based outlier removal algorithm to extract more robust matching points from the initial set. To evaluate the performance of the proposed method, it was quantitatively compared with existing image matching approaches. Experimental results demonstrate that the proposed method can process 2.57 frames per second for video image matching and is up to 4 times faster than existing methods. The proposed method therefore has good potential for the various video-based applications that require image matching as a preprocessing step.
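
A rough sketch of the initial matching stage only, using OpenCV's normalized cross-correlation template matcher as a stand-in for the integral-image-accelerated NCC the paper describes; the triangulation-based outlier removal is omitted, and the patch/search window sizes and acceptance threshold are illustrative. Grayscale frames are assumed.

```python
import cv2

def ncc_track_points(prev_gray, next_gray, points, patch=15, search=41, min_score=0.8):
    """Sketch: track candidate points between consecutive video frames with NCC."""
    hp, hs = patch // 2, search // 2
    matches = []
    for (x, y) in points:
        x, y = int(x), int(y)
        tpl = prev_gray[y - hp:y + hp + 1, x - hp:x + hp + 1]
        roi = next_gray[y - hs:y + hs + 1, x - hs:x + hs + 1]
        if tpl.shape != (patch, patch) or roi.shape != (search, search):
            continue  # point too close to the image border
        score = cv2.matchTemplate(roi, tpl, cv2.TM_CCOEFF_NORMED)
        _, best, _, loc = cv2.minMaxLoc(score)
        if best >= min_score:
            # convert the best match back to full-image coordinates
            nx, ny = x - hs + loc[0] + hp, y - hs + loc[1] + hp
            matches.append(((x, y), (nx, ny), best))
    return matches
```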

GPU-Based Optimization of Self-Organizing Map Feature Matching for Real-Time Stereo Vision

  • Sharma, Kajal;Saifullah, Saifullah;Moon, Inkyu
    • Journal of information and communication convergence engineering / v.12 no.2 / pp.128-134 / 2014
  • In this paper, we present a graphics processing unit (GPU)-based matching technique for fast feature matching between different images. The Scale Invariant Feature Transform algorithm developed by Lowe, which is used in various feature matching applications such as stereo vision and object recognition, is computationally intensive. To address this problem, we propose a matching technique optimized for GPUs to perform the computations in less time. We exploit the GPU for fast keypoint computation to make our system quick and efficient. The proposed method uses a self-organizing map feature matching technique to perform efficient matching between the different images. Experiments are performed on various image sets to examine the performance of the system under varying conditions, such as image rotation, scaling, and blurring. The experimental results show that the proposed algorithm outperforms existing feature matching methods, resulting in fast feature matching due to the GPU optimization.
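
The self-organizing map stage is specific to the paper and is not reproduced here; the sketch below only illustrates the general idea of moving descriptor matching onto the GPU, using PyTorch's pairwise distance and a Lowe-style ratio test. Function and parameter names are illustrative.

```python
import torch

def gpu_ratio_test_match(des1, des2, ratio=0.75):
    """Sketch: brute-force descriptor matching on the GPU with a ratio test.

    des1, des2: float32 arrays of shape (N, D) and (M, D), e.g. SIFT descriptors.
    Returns index pairs (i, j) of accepted matches.
    """
    device = "cuda" if torch.cuda.is_available() else "cpu"
    a = torch.as_tensor(des1, dtype=torch.float32, device=device)
    b = torch.as_tensor(des2, dtype=torch.float32, device=device)

    dists = torch.cdist(a, b)                      # all pairwise L2 distances at once
    best2 = dists.topk(2, dim=1, largest=False)    # two nearest neighbours per query
    keep = best2.values[:, 0] < ratio * best2.values[:, 1]

    rows = torch.nonzero(keep, as_tuple=False).squeeze(1)
    cols = best2.indices[keep, 0]
    return list(zip(rows.tolist(), cols.tolist()))
```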

Multiple Vehicle Detection and Tracking in Highway Traffic Surveillance Video Based on SIFT Feature Matching

  • Mu, Kenan;Hui, Fei;Zhao, Xiangmo
    • Journal of Information Processing Systems / v.12 no.2 / pp.183-195 / 2016
  • This paper presents a complete method for vehicle detection and tracking in a fixed setting based on computer vision. Vehicle detection is performed based on Scale Invariant Feature Transform (SIFT) feature matching. With SIFT feature detection and matching, the geometrical relation between the two images is estimated. Then, the previous image is aligned with the current image so that moving vehicles can be detected by analyzing the difference image of the two aligned images. Vehicle tracking is also performed based on SIFT feature matching. To reduce time consumption while maintaining high tracking accuracy, the detected candidate vehicle in the current image is matched against the vehicle samples in the tracking sample set, which contains all of the vehicles detected in previous images. Most notably, the management of vehicle entries and exits is realized through SIFT feature matching with an efficient update mechanism for the tracking sample set. The method is proposed for a highway traffic environment with no non-automotive vehicles or pedestrians, as these would interfere with the results.
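
The detection stage described above (align the previous frame to the current one, then difference them) can be sketched as follows; the tracking sample set and the entry/exit management are left out, and the ratio-test and RANSAC thresholds are assumptions.

```python
import cv2
import numpy as np

def motion_mask_by_alignment(prev_gray, curr_gray):
    """Sketch: SIFT matching + homography to align frames, then frame differencing."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(prev_gray, None)
    kp2, des2 = sift.detectAndCompute(curr_gray, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [p[0] for p in matcher.knnMatch(des1, des2, k=2)
            if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    if len(good) < 4:
        return None  # not enough matches to estimate the transform

    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = curr_gray.shape
    aligned_prev = cv2.warpPerspective(prev_gray, H, (w, h))
    return cv2.absdiff(curr_gray, aligned_prev)  # moving vehicles appear as bright regions
```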

CLASSIFIED EIGEN BLOCK: LOCAL FEATURE EXTRACTION AND IMAGE MATCHING ALGORITHM

  • Hochul Shin;Kim, Seong-Dae
    • Proceedings of the IEEK Conference / 2003.07e / pp.2108-2111 / 2003
  • This paper introduces a new local feature extraction method and image matching method for the localization and classification of targets. The proposed method is based on block-by-block projection associated with the directional pattern of each block. Each pattern has its own eigenvectors, called CEBs (Classified Eigen-Blocks). The proposed block-based image matching method is also robust to translation and occlusion. The performance of the proposed feature extraction and matching method is verified by face localization and FLIR vehicle image classification tests.
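
The classified eigen-blocks themselves (eigenvectors learned per directional block pattern) are not available from the abstract, so the sketch below substitutes plain PCA over the blocks of a single image; it only illustrates the block-by-block projection idea, not the directional classification step.

```python
import numpy as np

def block_projection_features(image, block=8, n_components=8):
    """Sketch: project each image block onto leading eigen-blocks (plain PCA,
    without the paper's directional classification)."""
    h, w = image.shape
    blocks = [image[y:y + block, x:x + block].ravel().astype(np.float32)
              for y in range(0, h - block + 1, block)
              for x in range(0, w - block + 1, block)]
    X = np.stack(blocks)
    X -= X.mean(axis=0)                       # zero-mean blocks
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    eigen_blocks = vt[:n_components]          # leading eigenvectors of the block covariance
    return X @ eigen_blocks.T                 # one n_components-dim feature per block
```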

Study of Feature Based Algorithm Performance Comparison for Image Matching between Virtual Texture Image and Real Image (가상 텍스쳐 영상과 실촬영 영상간 매칭을 위한 특징점 기반 알고리즘 성능 비교 연구)

  • Lee, Yoo Jin;Rhee, Sooahm
    • Korean Journal of Remote Sensing / v.38 no.6_1 / pp.1057-1068 / 2022
  • With the goal of developing mobile-based real-time image positioning technology, this paper compares combinations of feature point-based matching algorithms to confirm whether an image taken by a user can be matched to a virtual texture image. A feature-based matching algorithm consists of extracting features, calculating descriptors, matching features from both images, and finally eliminating mismatched features. For the algorithm combinations, the feature extraction step and the descriptor calculation step were taken either from the same algorithm or from different algorithms. The V-World 3D desktop was used for the virtual indoor texture images. Currently, the V-World 3D desktop is reinforced with details such as vertical and horizontal protrusions and dents, and provides levels with real-image textures. Using this, we constructed a dataset with virtual indoor texture data as reference images and real images taken at the same locations as target images. After constructing the dataset, the matching success rate and matching processing time were measured, and based on these, an algorithm combination was selected for matching real images with virtual images. Based on the characteristics of each matching technique, the combinations were applied to the constructed dataset to confirm their applicability, and performance was also compared when rotation was additionally considered. The results show that the combination of the Scale Invariant Feature Transform (SIFT) feature detector and descriptor had the highest matching success rate but the longest processing time, whereas the combination of the Features from Accelerated Segment Test (FAST) feature detector and the Oriented FAST and Rotated BRIEF (ORB) descriptor achieved a success rate similar to the SIFT-SIFT combination with a shorter processing time. Furthermore, FAST-ORB maintained superior matching performance even when a 10° rotation was applied to the dataset. Therefore, the FAST-ORB combination appears suitable for matching virtual texture images with real images.
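
The FAST-ORB combination that the study found most practical can be reproduced in outline as below: a FAST detector supplies the keypoints and ORB computes the descriptors, exactly the kind of detector/descriptor mixing the comparison is about. The FAST threshold and the cross-check matcher setting are assumptions.

```python
import cv2

def fast_orb_match(img1, img2, fast_threshold=25):
    """Sketch: FAST keypoint detection combined with ORB descriptor calculation."""
    fast = cv2.FastFeatureDetector_create(threshold=fast_threshold)
    orb = cv2.ORB_create()

    kp1, des1 = orb.compute(img1, fast.detect(img1, None))
    kp2, des2 = orb.compute(img2, fast.detect(img2, None))

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    return kp1, kp2, matches
```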

Speed-up of Image Matching Using Feature Strength Information (특징 강도 정보를 이용한 영상 정합 속도 향상)

  • Kim, Tae-Woo
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.13 no.6 / pp.63-69 / 2013
  • A feature-based image recognition method, which uses features of an object, can be performed faster than template matching. Invariant feature-based panoramic image generation, an application of image recognition, requires a large amount of time to match features between two images. This paper proposes a method to speed up feature matching using feature strength information. Our algorithm extracts features from the images, computes their feature strength, and selects the strong feature points, which are then used for matching. The strong features can be regarded as more meaningful than the weak ones. In the experiments, our method reduced processing time by more than 40% compared with the technique that does not use feature strength information.
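
The abstract does not define "feature strength" precisely, so the sketch below assumes the detector's response value plays that role, keeping only the strongest keypoints before descriptor computation and matching.

```python
import cv2

def match_strongest_features(img1, img2, keep=500):
    """Sketch: keep only the strongest keypoints (by detector response) before matching."""
    sift = cv2.SIFT_create()
    kp1 = sorted(sift.detect(img1, None), key=lambda k: k.response, reverse=True)[:keep]
    kp2 = sorted(sift.detect(img2, None), key=lambda k: k.response, reverse=True)[:keep]

    kp1, des1 = sift.compute(img1, kp1)
    kp2, des2 = sift.compute(img2, kp2)

    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    return matcher.match(des1, des2)  # fewer descriptors, so matching is much faster
```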

Panoramic Image Stitching Using Feature Extracting and Matching on Embedded System

  • Lee, June-Hwan
    • Transactions on Electrical and Electronic Materials / v.18 no.5 / pp.273-278 / 2017
  • Recently, the Internet of Things (IoT) has become an area of active research. The range of IoT applications is expanding, coupled with a remarkable increase in camera use. However, general cameras used in IoT systems have limited viewing angles compared to the human eye, which restricts what can be observed and how well. Therefore, in this paper, we propose a panoramic image stitching method based on feature extraction and matching for an embedded system. After extracting image features, the stitching speed is improved by restricting the computation to the necessary information so that the method can run on the embedded system. Experimental results show that it is possible to improve the speed of feature matching and panoramic image stitching while generating a smooth image.
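
The embedded implementation details are not given in the abstract, so the sketch below is a generic two-image stitch using ORB (a comparatively lightweight binary-descriptor choice) with a homography warp; the canvas size, match count, and RANSAC threshold are illustrative.

```python
import cv2
import numpy as np

def stitch_pair(left, right):
    """Sketch: stitch two overlapping images with ORB features and a homography."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp_l, des_l = orb.detectAndCompute(left, None)
    kp_r, des_r = orb.detectAndCompute(right, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_r, des_l), key=lambda m: m.distance)[:200]

    src = np.float32([kp_r[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_l[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # maps right image into left's frame

    h, w = left.shape[:2]
    pano = cv2.warpPerspective(right, H, (w * 2, h))  # wider canvas for the warped right image
    pano[:h, :w] = left                               # overlay the untouched left image
    return pano
```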