• Title/Summary/Keyword: scale-invariant feature transform


Feature Matching Algorithm Robust To Viewpoint Change (시점 변화에 강인한 특징점 정합 기법)

  • Jung, Hyun-jo; Yoo, Ji-sang
    • The Journal of Korean Institute of Communications and Information Sciences, v.40 no.12, pp.2363-2371, 2015
  • In this paper, we propose a new feature matching algorithm that is robust to viewpoint change, using the FAST (Features from Accelerated Segment Test) detector and the SIFT (Scale Invariant Feature Transform) descriptor. The original FAST algorithm produces many unnecessary feature points along image edges; to solve this problem, we refine the detections using principal curvatures. The extracted feature points are described with the SIFT descriptor, and a homography matrix is estimated with RANSAC (RANdom SAmple Consensus) from the matching pairs obtained between the two viewpoint images. To make the matching robust to viewpoint change, the matching pairs are classified by the Euclidean distance between the coordinates of reference-image feature points transformed by the homography and the coordinates of the corresponding feature points in the other viewpoint image. Experimental results show that the proposed algorithm outperforms conventional feature matching algorithms while requiring much less computation.
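Below is a minimal OpenCV sketch of the pipeline this abstract describes (FAST detection, SIFT description, RANSAC homography, distance-based classification of match pairs). File names and thresholds are illustrative assumptions, and the paper's principal-curvature refinement of FAST points is not reproduced here.

```python
# Sketch only: FAST + SIFT + RANSAC homography + distance-based match classification.
import cv2
import numpy as np

ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)        # placeholder path
tgt = cv2.imread("other_viewpoint.png", cv2.IMREAD_GRAYSCALE)  # placeholder path

fast = cv2.FastFeatureDetector_create(threshold=25)
sift = cv2.SIFT_create()

# Detect with FAST, describe with SIFT (the paper's principal-curvature
# refinement is omitted; SIFT's own edge threshold plays a loosely similar role).
kp1 = fast.detect(ref, None)
kp2 = fast.detect(tgt, None)
kp1, des1 = sift.compute(ref, kp1)
kp2, des2 = sift.compute(tgt, kp2)

# Nearest-neighbour matching with Lowe's ratio test.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]

src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

# Homography via RANSAC, then classify pairs by the Euclidean distance between
# the transformed reference coordinates and the observed target coordinates.
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
proj = cv2.perspectiveTransform(src, H)
dist = np.linalg.norm(proj - dst, axis=2).ravel()
inliers = [m for m, d in zip(good, dist) if d < 3.0]   # 3-pixel cut-off assumed
print(f"{len(inliers)} / {len(good)} pairs kept")
```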

Registration Method between High Resolution Optical and SAR Images (고해상도 광학영상과 SAR 영상 간 정합 기법)

  • Jeon, Hyeongju; Kim, Yongil
    • Korean Journal of Remote Sensing, v.34 no.5, pp.739-747, 2018
  • Integration analysis of multi-sensor satellite images is becoming increasingly important, and its first step is registration between the multi-sensor images. SIFT (Scale Invariant Feature Transform) is a representative image registration method. However, optical and SAR (Synthetic Aperture Radar) images differ in sensor attitude and radiometric characteristics at acquisition, and the radiometric relationship between the two images is nonlinear, which makes conventional methods such as SIFT difficult to apply. To overcome this limitation, we propose a modified method that combines SAR-SIFT with the shape descriptor vector DLSS (Dense Local Self-Similarity). We conducted experiments on two pairs of Cosmo-SkyMed and KOMPSAT-2 images collected over Daejeon, Korea, an area with a high density of buildings. Compared with conventional methods such as SIFT and SAR-SIFT, the proposed method extracted correct matching points and gave quantitatively reasonable results, with RMSEs of 1.66 m and 2.45 m for the two image pairs.
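As a companion to the RMSE figures quoted above, here is a minimal sketch of how registration accuracy can be evaluated from matched control points. The point arrays, the pixel size, and the affine model are placeholders; the paper's SAR-SIFT + DLSS matching itself is not reproduced.

```python
# Sketch only: estimate a transform from matched points and report RMSE in metres.
import cv2
import numpy as np

# Matched points (optical -> SAR), e.g. produced by a descriptor-matching step.
opt_pts = np.random.rand(20, 2).astype(np.float32) * 1000    # placeholder
sar_pts = opt_pts + np.float32([5.0, -3.0])                  # placeholder shift

# Estimate an affine transformation model from the matches (RANSAC-robust).
A, inlier_mask = cv2.estimateAffine2D(opt_pts, sar_pts, method=cv2.RANSAC)

# RMSE over independent check points, converted to metres with the pixel size.
check_src = np.random.rand(10, 2).astype(np.float32) * 1000  # placeholder
check_dst = check_src + np.float32([5.0, -3.0])              # placeholder
pred = (check_src @ A[:, :2].T) + A[:, 2]
pixel_size_m = 1.0                                           # assumed ground sample distance
rmse = np.sqrt(np.mean(np.sum((pred - check_dst) ** 2, axis=1))) * pixel_size_m
print(f"registration RMSE: {rmse:.2f} m")
```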

ISAR Cross-Range Scaling for a Maneuvering Target (기동표적에 대한 ISAR Cross-Range Scaling)

  • Kang, Byung-Soo; Bae, Ji-Hoon; Kim, Kyung-Tae; Yang, Eun-Jung
    • The Journal of Korean Institute of Electromagnetic Engineering and Science, v.25 no.10, pp.1062-1068, 2014
  • In this paper, a novel approach for estimating a target's rotation velocity (RV) is proposed for inverse synthetic aperture radar (ISAR) cross-range scaling (CRS). The scale-invariant feature transform (SIFT) is applied to two sequentially generated ISAR images to extract non-fluctuating scatterers. Using the fact that the distance between the target's rotation center (RC) and a given SIFT feature is the same in both images, a criterion for estimating the RV can be defined. The criterion is then optimized by the proposed method, which combines particle swarm optimization (PSO) with an exhaustive search. Simulation results show that the proposed algorithm precisely estimates the RV of a scenario-based maneuvering target without RC information, and that the estimated RV allows the ISAR image to be correctly re-scaled along the cross-range direction.
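The following is a minimal particle swarm optimization sketch for a one-dimensional search over rotation velocity, illustrating the optimization step described above. The cost function, search range, and PSO coefficients are placeholders; the paper's actual criterion is built from the SIFT scatterer distances to the rotation center and is combined with an exhaustive search.

```python
# Sketch only: generic 1-D PSO; replace `criterion` with the ISAR-based cost.
import numpy as np

def criterion(rv):
    """Placeholder cost: stand-in for the scatterer-distance criterion."""
    return (rv - 0.8) ** 2               # assumed ground-truth RV of 0.8 rad/s

rng = np.random.default_rng(0)
n_particles, n_iter = 30, 100
lo, hi = 0.0, 2.0                         # assumed RV search range (rad/s)

pos = rng.uniform(lo, hi, n_particles)
vel = np.zeros(n_particles)
pbest, pbest_cost = pos.copy(), np.array([criterion(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)]

for _ in range(n_iter):
    r1, r2 = rng.random(n_particles), rng.random(n_particles)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    cost = np.array([criterion(p) for p in pos])
    improved = cost < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], cost[improved]
    gbest = pbest[np.argmin(pbest_cost)]

print(f"estimated rotation velocity: {gbest:.3f} rad/s")
```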

Stitching for Panorama based on SURF and Multi-band Blending (SURF와 멀티밴드 블렌딩에 기반한 파노라마 스티칭)

  • Luo, Juan; Shin, Sung-Sik; Park, Hyun-Ju; Gwun, Ou-Bong
    • Journal of Korea Multimedia Society, v.14 no.2, pp.201-209, 2011
  • This paper suggests a panorama image stitching system that consists of an image matching algorithm, modified SURF (Speeded Up Robust Features), and an image blending algorithm, multi-band blending. First, modified SURF is described and compared with SIFT (Scale Invariant Feature Transform), explaining why modified SURF is chosen instead of SIFT. Then, multi-band blending is described. Lastly, the structure of the panorama image stitching system is presented and evaluated through experiments, including a stitching quality test and a time cost experiment. According to the experiments, the proposed system makes the stitching seam invisible, produces a seamless panorama for large image data, and is faster than a SIFT-based stitching system.
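For orientation, here is a minimal stitching sketch using OpenCV's high-level Stitcher, whose default pipeline (feature matching, homography estimation, seam finding, and multi-band blending) mirrors the structure described above. The paper's modified SURF detector is not part of stock OpenCV, so the default features are used here, and the file names are placeholders.

```python
# Sketch only: high-level panorama stitching; not the paper's modified-SURF pipeline.
import cv2

images = [cv2.imread(p) for p in ("left.jpg", "middle.jpg", "right.jpg")]  # placeholders

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, pano = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", pano)
else:
    print("stitching failed, status code:", status)
```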

Feature-based Image Analysis for Object Recognition on Satellite Photograph (인공위성 영상의 객체인식을 위한 영상 특징 분석)

  • Lee, Seok-Jun; Jung, Soon-Ki
    • Journal of the HCI Society of Korea, v.2 no.2, pp.35-43, 2007
  • This paper presents a system for image matching and recognition based on feature detection and description techniques applied to artificial satellite photographs. We identify a set of parameters arising from the various environmental factors introduced by the image handling process, and the core of the experiment is to analyze how changing the state of each parameter affects the match rate and recognition accuracy. The proposed system is basically inspired by Lowe's SIFT (Scale-Invariant Feature Transform) algorithm. Descriptors extracted from local affine-invariant regions are stored in a database defined by k-means clustering performed on the 128-dimensional descriptor vectors of satellite photographs from Google Earth. A label is then attached to each cluster of the feature database and serves as guidance for information about buildings appearing in the camera scene. The experiments vary these parameters and compare the resulting effects on image matching and recognition, and the implementation and experimental results for several queries are shown.
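Here is a minimal sketch of the descriptor-database step described above: SIFT descriptors are extracted from reference satellite images and clustered with k-means, after which each cluster can carry a label (for example, a building identity). File names, the number of clusters, and the labelling scheme are assumptions, not values from the paper.

```python
# Sketch only: build a clustered SIFT descriptor database and query it.
import cv2
import numpy as np
from sklearn.cluster import KMeans

sift = cv2.SIFT_create()
reference_images = ["site_a.png", "site_b.png"]          # placeholder paths

descriptors = []
for path in reference_images:
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, des = sift.detectAndCompute(img, None)
    if des is not None:
        descriptors.append(des)
descriptors = np.vstack(descriptors)                     # N x 128 matrix

# Cluster the 128-dimensional descriptors into a visual vocabulary.
kmeans = KMeans(n_clusters=50, n_init=10, random_state=0).fit(descriptors)

# A query descriptor is assigned to its nearest cluster, whose label can then
# be looked up (the label assignment itself is application-specific).
query_des = descriptors[:5]                              # placeholder query
cluster_ids = kmeans.predict(query_des)
print("query descriptors fell into clusters:", cluster_ids)
```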


A Study on Automatic Coregistration and Band Selection of Hyperion Hyperspectral Images for Change Detection (변화탐지를 위한 Hyperion 초분광 영상의 자동 기하보정과 밴드선택에 관한 연구)

  • Kim, Dae-Sung; Kim, Yong-Il; Eo, Yang-Dam
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.25 no.5, pp.383-392, 2007
  • This study focuses on co-registration and band selection, which are pre-processing steps for applying change detection techniques to hyperspectral images. We carried out automatic co-registration using the SIFT algorithm, whose performance is already established in the computer vision field, and selected bands for change detection by estimating image noise through PIFs reflecting radiometric consistency. The EM algorithm was also applied to make the band selection objective. Hyperion images were used for the proposed techniques, with non-calibrated bands and striping noise removed. The results show that the proposed procedure achieves co-registration accuracy within 0.2 pixels (RMSE), which is reliable for change detection, and that band selection, which otherwise depends on visual inspection, can be made objective by extracting the PIFs.
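The following is a minimal sketch of objective band selection with an EM-fitted mixture model, in the spirit of the step described above: a per-band noise estimate is computed, and a two-component Gaussian mixture fitted by EM separates low-noise from high-noise bands. The noise proxy and the data cube below are placeholders and do not reproduce the paper's PIF-based procedure.

```python
# Sketch only: EM (Gaussian mixture) split of bands by a crude noise estimate.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
cube = rng.normal(size=(100, 100, 150))        # placeholder hyperspectral cube

# Crude per-band noise proxy: std. dev. of the horizontal first difference.
noise = np.array([np.std(np.diff(cube[:, :, b], axis=1)) for b in range(cube.shape[2])])

gmm = GaussianMixture(n_components=2, random_state=0).fit(noise.reshape(-1, 1))
labels = gmm.predict(noise.reshape(-1, 1))

# Keep the component with the lower mean noise.
good_component = np.argmin(gmm.means_.ravel())
selected_bands = np.where(labels == good_component)[0]
print(f"{len(selected_bands)} of {len(noise)} bands selected")
```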

Affine Invariant Local Descriptors for Face Recognition (얼굴인식을 위한 어파인 불변 지역 서술자)

  • Gao, Yongbin; Lee, Hyo Jong
    • KIPS Transactions on Software and Data Engineering, v.3 no.9, pp.375-380, 2014
  • Under a controlled environment, such as fixed viewpoints or consistent illumination, face recognition performance is now usually high enough to be acceptable; in the real world, however, it remains a challenging task. The SIFT (Scale Invariant Feature Transform) algorithm is scale- and rotation-invariant, but it is effective only for small viewpoint changes and often fails when the viewpoint of a face changes over a wide range. In this paper, we use Affine SIFT (ASIFT), an extension of SIFT designed to address this weakness, to detect affine-invariant local descriptors for face recognition under wide viewpoint changes. In our scheme, ASIFT is applied only to the gallery face, while the standard SIFT algorithm is applied to the probe face. ASIFT generates a series of different viewpoints using affine transformations, thereby tolerating viewpoint differences between the gallery and probe faces. Experimental results show that our framework achieves higher recognition accuracy than the original SIFT algorithm on the FERET database.
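Below is a minimal sketch of the gallery-side idea described above: viewpoints are simulated by affine-warping the gallery face before running SIFT, while the probe face gets plain SIFT. The warp set is a small, illustrative subset of full ASIFT tilt/rotation sampling, and the file names and thresholds are placeholders.

```python
# Sketch only: ASIFT-like viewpoint simulation on the gallery, plain SIFT on the probe.
import cv2
import numpy as np

sift = cv2.SIFT_create()
gallery = cv2.imread("gallery_face.png", cv2.IMREAD_GRAYSCALE)  # placeholder
probe = cv2.imread("probe_face.png", cv2.IMREAD_GRAYSCALE)      # placeholder

def simulated_views(img):
    """Yield the original image plus a few affine-tilted versions."""
    h, w = img.shape
    yield img
    for tilt in (0.8, 0.6):                 # assumed tilt factors (x-axis squeeze)
        for angle in (-30, 0, 30):          # assumed in-plane rotations (degrees)
            M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
            M[0, :] *= tilt                 # squeeze along x to mimic out-of-plane tilt
            yield cv2.warpAffine(img, M, (w, h))

# Pool SIFT descriptors over all simulated gallery views.
gallery_des = []
for view in simulated_views(gallery):
    _, des = sift.detectAndCompute(view, None)
    if des is not None:
        gallery_des.append(des)
gallery_des = np.vstack(gallery_des)

_, probe_des = sift.detectAndCompute(probe, None)

# Ratio-test matching of probe descriptors against the pooled gallery set;
# the match count can serve as a simple recognition score.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(probe_des, gallery_des, k=2)
        if m.distance < 0.75 * n.distance]
print("matches to this gallery identity:", len(good))
```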

Relative Localization for Mobile Robot using 3D Reconstruction of Scale-Invariant Features (스케일불변 특징의 삼차원 재구성을 통한 이동 로봇의 상대위치추정)

  • Kil, Se-Kee; Lee, Jong-Shill; Ryu, Je-Goon; Lee, Eung-Hyuk; Hong, Seung-Hong; Shen, Dong-Fan
    • The Transactions of the Korean Institute of Electrical Engineers D, v.55 no.4, pp.173-180, 2006
  • A key component of autonomous navigation for an intelligent home robot is localization and map building with features recognized from the environment, which requires accurate measurement of the relative location between the robot and those features. In this paper, we propose a relative localization algorithm based on 3D reconstruction of scale-invariant features from two images captured by two parallel cameras. The two cameras are mounted on the front of the robot, and scale-invariant features are detected in each image using SIFT (scale-invariant feature transform). The feature points of the two images are then matched, and the relative location is obtained by 3D reconstruction of the matched points. A conventional stereo camera requires highly precise extrinsic calibration and pixel-level correspondence between the two camera images; because our setup uses two ordinary cameras and scale-invariant feature points, the extrinsic parameters are easy to set up, and the 3D reconstruction needs no additional sensor. The results can simultaneously be used for obstacle avoidance, map building, and localization. With the two cameras 20 cm apart and images captured at 3 frames per second, the experiments show a maximum error of ±6 cm at ranges below 2 m and ±15 cm at ranges between 2 m and 4 m.
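Here is a minimal sketch of the 3D-reconstruction step described above: SIFT features are matched between the two parallel-camera images and triangulated. The camera intrinsics are assumed values, the 20 cm baseline follows the abstract, and the file names are placeholders.

```python
# Sketch only: match SIFT features between two parallel cameras and triangulate.
import cv2
import numpy as np

sift = cv2.SIFT_create()
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # placeholder
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)  # placeholder

kp1, des1 = sift.detectAndCompute(left, None)
kp2, des2 = sift.detectAndCompute(right, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]
pts1 = np.float32([kp1[m.queryIdx].pt for m in good]).T     # 2 x N
pts2 = np.float32([kp2[m.trainIdx].pt for m in good]).T

# Assumed pinhole intrinsics (identical cameras) and a pure 0.2 m x-baseline.
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])

# Triangulate and convert from homogeneous to Euclidean coordinates (metres).
pts4d = cv2.triangulatePoints(P1, P2, pts1, pts2)
pts3d = (pts4d[:3] / pts4d[3]).T
print("first reconstructed point (x, y, z) in metres:", pts3d[0])
```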

Study of Methodology for Recognizing Multiple Objects (다중물체 인식 방법론에 관한 연구)

  • Lee, Hyun-Chang; Koh, Jin-Kwang
    • Journal of the Korea Society of Computer and Information, v.13 no.7, pp.51-57, 2008
  • In recent computer vision and robotics research, object recognition from images captured with low-cost web cameras or other video devices has been actively studied, and various methodologies for retrieving objects have been proposed. Robots are designed and built to behave like human beings: a person recognizes an apple on sight because the knowledge that it is an apple is already stored in the mind, and in the same way a robot needs to store information about the objects it sees. In this paper, we therefore propose a methodology that rapidly recognizes objects stored in an object database by using the SIFT (scale-invariant feature transform) algorithm to obtain information about each object, and we implement it so that multiple objects in a single image can be recognized simultaneously.
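The following is a minimal sketch of recognizing multiple database objects in one image by SIFT matching, in the spirit of the methodology described above. Object names, file paths, and the match-count threshold are illustrative assumptions.

```python
# Sketch only: match a scene against per-object SIFT descriptor records.
import cv2

sift = cv2.SIFT_create()
matcher = cv2.BFMatcher(cv2.NORM_L2)

# Offline: store SIFT descriptors for each known object (placeholder entries).
database = {}
for name, path in {"apple": "apple.png", "cup": "cup.png"}.items():
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, des = sift.detectAndCompute(img, None)
    database[name] = des

# Online: match the scene against every stored object and report those with
# enough ratio-test matches.
scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)   # placeholder
_, scene_des = sift.detectAndCompute(scene, None)

for name, des in database.items():
    good = [m for m, n in matcher.knnMatch(des, scene_des, k=2)
            if m.distance < 0.75 * n.distance]
    if len(good) >= 10:                      # assumed detection threshold
        print(f"recognized '{name}' with {len(good)} matches")
```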


Classification of Feature Points Required for Multi-Frame Based Building Recognition (멀티 프레임 기반 건물 인식에 필요한 특징점 분류)

  • Park, Si-young; An, Ha-eun; Lee, Gyu-cheol; Yoo, Ji-sang
    • The Journal of Korean Institute of Communications and Information Sciences, v.41 no.3, pp.317-327, 2016
  • The extraction of significant feature points from a video directly affects the performance of the method that uses them. In particular, feature points in regions occluded by trees or people, or extracted from the background (such as the sky or mountains) rather than from objects, are insignificant and can degrade matching or recognition performance. This paper classifies the feature points needed for building recognition by using multiple frames in order to improve recognition performance. First, primary feature points are extracted by SIFT (scale-invariant feature transform) and mismatched feature points are removed; RANSAC (random sample consensus) is then applied to classify the feature points belonging to occluded regions. Because the classified feature points are obtained through matching, a single feature point can have multiple descriptors, so a procedure that consolidates them is also proposed. Experiments verify the effectiveness of the proposed method.
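To close, here is a minimal sketch of the RANSAC classification step described above: feature points matched across two frames are split into homography inliers and outliers, the latter being candidates for occluded or background regions. Frame paths and thresholds are illustrative assumptions, and the paper's multi-frame descriptor consolidation is not reproduced.

```python
# Sketch only: split matched SIFT points into RANSAC inliers and outliers.
import cv2
import numpy as np

sift = cv2.SIFT_create()
f1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)  # placeholder
f2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)  # placeholder

kp1, des1 = sift.detectAndCompute(f1, None)
kp2, des2 = sift.detectAndCompute(f2, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]

src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

# Homography via RANSAC; inliers move consistently with the dominant scene
# motion, outliers are flagged as occlusion/background candidates.
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
mask = mask.ravel().astype(bool)
inliers = [m for m, keep in zip(good, mask) if keep]
outliers = [m for m, keep in zip(good, mask) if not keep]
print(f"{len(inliers)} consistent points, {len(outliers)} occlusion/background candidates")
```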