• Title/Abstract/Keyword: Feature Image Matching


Enhancement of Stereo Feature Matching using Feature Windows and Feature Links

  • 김창일;박순용
    • 정보처리학회논문지B / Vol. 19B, No. 2 / pp.113-122 / 2012
  • Stereo matching is the technique of determining the positional relationship between image points of the same object in two given images. This paper presents a new stereo feature matching method that determines this stereo correspondence for image feature points. The proposed method extracts feature points from the given stereo pair with the FAST detector and improves matching performance by defining a region called a feature window, which holds the feature-vector information of the points inside it. Feature windows are generated for the feature points extracted from the base image; for each of them, the most similar feature window in the reference image is searched for and selected, and the disparities of the feature points inside the two matched windows are then determined by generating feature links. If any feature points remain without a disparity after this step, their disparity values are interpolated from the disparities already determined within the feature window. Finally, to verify the proposed method, the resulting disparities were compared against ground-truth disparities in terms of matching accuracy and execution time, and the performance of the proposed method was also compared with and analyzed against existing feature-based stereo matching methods.
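A minimal sketch of the feature-window idea described above, assuming a rectified stereo pair. The window size, FAST threshold, and the use of mean ORB descriptors to compare windows are illustrative assumptions, not the authors' implementation.

```python
# Sketch only: FAST keypoints are grouped into fixed-size "feature windows",
# windows are matched along the same row band of the rectified pair, and each
# keypoint is then linked to the vertically closest keypoint in the matched
# window to obtain a disparity. Window size and thresholds are placeholders.
import cv2
import numpy as np

WIN = 32  # assumed feature-window size in pixels

def window_descriptors(gray):
    """Group FAST keypoints by grid cell; describe each cell by its mean ORB descriptor."""
    fast = cv2.FastFeatureDetector_create(threshold=20)
    orb = cv2.ORB_create()
    kps, des = orb.compute(gray, fast.detect(gray, None))
    cells = {}
    if des is None:
        return cells
    for kp, d in zip(kps, des):
        key = (int(kp.pt[1]) // WIN, int(kp.pt[0]) // WIN)   # (row, col) of the window
        cells.setdefault(key, []).append((kp, d.astype(np.float32)))
    return cells

def match_windows(left, right, max_disp_windows=4):
    """For each left-image window, pick the most similar right-image window on the same row."""
    lc, rc = window_descriptors(left), window_descriptors(right)
    links = []
    for (r, c), items in lc.items():
        ldesc = np.mean([d for _, d in items], axis=0)
        best, best_dist = None, np.inf
        for dc in range(max_disp_windows + 1):               # search leftwards in the right image
            cand = rc.get((r, c - dc))
            if not cand:
                continue
            dist = np.linalg.norm(ldesc - np.mean([d for _, d in cand], axis=0))
            if dist < best_dist:
                best, best_dist = cand, dist
        if best is None:
            continue
        for kp, _ in items:                                  # "feature link" to the closest-in-row keypoint
            rkp = min(best, key=lambda it: abs(it[0].pt[1] - kp.pt[1]))[0]
            links.append((kp.pt, kp.pt[0] - rkp.pt[0]))      # (point, disparity)
    return links

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)          # hypothetical rectified pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
print(len(match_windows(left, right)), "feature links with disparities")
```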

Automatic Registration between EO and IR Images of KOMPSAT-3A Using Block-based Image Matching

  • Kang, Hyungseok
    • 대한원격탐사학회지 / Vol. 36, No. 4 / pp.545-555 / 2020
  • This paper focuses on automatic image registration between EO (Electro-Optical) and IR (InfraRed) satellite images with different spectral properties, using a block-based approach and a simple preprocessing technique to enhance the performance of feature matching. When unpreprocessed EO and IR images from the KOMPSAT-3A satellite are fed to local feature matching algorithms (Scale Invariant Feature Transform, Speeded-Up Robust Features, etc.), registration generally fails, either because few feature points are detected or because many of the detected points are mismatched. We propose a new image registration method that improves feature matching by running a block-based registration process on the image divided into nine blocks and by applying a preprocessing technique based on adaptive histogram equalization. The proposed method showed better performance, in both visual inspection and I-RMSE, than registration without the proposed techniques. This study can be used for automatic registration between various images acquired from different sensors.
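A rough sketch of the pipeline the abstract describes, assuming the nine blocks form a 3x3 grid; the CLAHE settings, ratio test, RANSAC threshold, and file names are placeholder choices rather than the paper's configuration.

```python
# Illustrative only: CLAHE-style adaptive histogram equalization, SIFT matching
# run independently on a 3x3 block grid, and a global homography estimated from
# the pooled matches to register the IR image onto the EO image.
import cv2
import numpy as np

def preprocess(gray):
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))  # assumed parameters
    return clahe.apply(gray)

def block_matches(eo, ir, grid=3, ratio=0.75):
    sift, bf = cv2.SIFT_create(), cv2.BFMatcher(cv2.NORM_L2)
    h, w = eo.shape
    src, dst = [], []
    for i in range(grid):
        for j in range(grid):
            y0, y1 = i * h // grid, (i + 1) * h // grid
            x0, x1 = j * w // grid, (j + 1) * w // grid
            kp1, de1 = sift.detectAndCompute(eo[y0:y1, x0:x1], None)
            kp2, de2 = sift.detectAndCompute(ir[y0:y1, x0:x1], None)
            if de1 is None or de2 is None:
                continue
            for pair in bf.knnMatch(de1, de2, k=2):
                if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
                    m = pair[0]                              # Lowe's ratio test passed
                    src.append((kp1[m.queryIdx].pt[0] + x0, kp1[m.queryIdx].pt[1] + y0))
                    dst.append((kp2[m.trainIdx].pt[0] + x0, kp2[m.trainIdx].pt[1] + y0))
    return np.float32(src), np.float32(dst)

eo = preprocess(cv2.imread("kompsat3a_eo.tif", cv2.IMREAD_GRAYSCALE))   # hypothetical inputs
ir = preprocess(cv2.imread("kompsat3a_ir.tif", cv2.IMREAD_GRAYSCALE))
src, dst = block_matches(eo, ir)
H, _ = cv2.findHomography(dst, src, cv2.RANSAC, 3.0)         # IR -> EO mapping
registered_ir = cv2.warpPerspective(ir, H, eo.shape[::-1])
```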

Panoramic Image Stitching Using Feature Extracting and Matching on Embedded System

  • Lee, June-Hwan
    • Transactions on Electrical and Electronic Materials / Vol. 18, No. 5 / pp.273-278 / 2017
  • The Internet of Things (IoT) is currently one of the most actively researched areas, and its applications are growing together with a remarkable increase in the use of cameras. However, the general-purpose cameras used in IoT systems have limited viewing angles compared to the human eye, which restricts what can be observed and how well. In this paper, we therefore propose a panoramic image stitching method based on feature extraction and matching for an embedded system. After image features are extracted, the stitching speed is improved by reducing the amount of computation to only the necessary information, so that the method can run on the embedded system. Experimental results show that the speed of feature matching and panoramic image stitching can be improved while still generating a smooth image.
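A minimal two-frame stitching sketch in the spirit of the abstract: a lightweight detector/descriptor (ORB, with a capped feature count) keeps the computation small enough for an embedded target, and a RANSAC homography aligns the second frame onto the first. File names and parameters are illustrative, not the paper's.

```python
# Sketch only: ORB features, brute-force Hamming matching, RANSAC homography,
# and a simple right-extension canvas for the two-image panorama.
import cv2
import numpy as np

def stitch_pair(img1, img2, n_features=1000):
    g1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=n_features)               # capped feature count bounds the work
    kp1, de1 = orb.detectAndCompute(g1, None)
    kp2, de2 = orb.detectAndCompute(g2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(de1, de2)
    matches = sorted(matches, key=lambda m: m.distance)[:200]  # keep the strongest matches
    src = np.float32([kp2[m.trainIdx].pt for m in matches])
    dst = np.float32([kp1[m.queryIdx].pt for m in matches])
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)     # maps image 2 into image 1 coordinates
    h1, w1 = img1.shape[:2]
    h2, w2 = img2.shape[:2]
    pano = cv2.warpPerspective(img2, H, (w1 + w2, max(h1, h2)))
    pano[:h1, :w1] = img1                                    # paste the reference frame on top
    return pano

pano = stitch_pair(cv2.imread("frame1.jpg"), cv2.imread("frame2.jpg"))  # hypothetical frames
cv2.imwrite("panorama.jpg", pano)
```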

GPU-Based Optimization of Self-Organizing Map Feature Matching for Real-Time Stereo Vision

  • Sharma, Kajal;Saifullah, Saifullah;Moon, Inkyu
    • Journal of information and communication convergence engineering / Vol. 12, No. 2 / pp.128-134 / 2014
  • In this paper, we present a graphics processing unit (GPU)-based technique for fast feature matching between different images. The scale-invariant feature transform (SIFT) algorithm developed by Lowe for feature matching applications such as stereo vision and object recognition is computationally intensive. To address this problem, we propose a matching technique optimized for GPUs that performs the computations in less time, offloading keypoint computation to the GPU to make the system quick and efficient. The proposed method uses a self-organizing map to perform efficient matching between the different images. Experiments on various image sets examine the performance of the system under varying conditions such as image rotation, scaling, and blurring. The results show that the proposed algorithm outperforms existing feature matching methods, achieving fast feature matching thanks to the GPU optimization.
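A CPU-only NumPy sketch of the underlying idea: SIFT descriptors from one image train a small self-organizing map (SOM), and matching is restricted to descriptors that fall on the same SOM node. The GPU kernels and the authors' exact network are not reproduced; the node count, iteration count, and the omission of a neighbourhood function are simplifications.

```python
# Sketch: a tiny SOM clusters SIFT descriptors; candidate matches are only
# searched among descriptors assigned to the same node.
import cv2
import numpy as np

def train_som(data, nodes=64, iters=2000, lr0=0.5):
    rng = np.random.default_rng(0)
    w = data[rng.choice(len(data), nodes)]                   # initialise node weights from samples
    for t in range(iters):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(np.linalg.norm(w - x, axis=1))       # best-matching unit
        lr = lr0 * (1.0 - t / iters)
        w[bmu] += lr * (x - w[bmu])                          # no neighbourhood update, for brevity
    return w

def som_matches(des1, des2, w):
    lab1 = np.argmin(np.linalg.norm(des1[:, None] - w[None], axis=2), axis=1)
    lab2 = np.argmin(np.linalg.norm(des2[:, None] - w[None], axis=2), axis=1)
    pairs = []
    for node in np.unique(lab1):
        i1, i2 = np.where(lab1 == node)[0], np.where(lab2 == node)[0]
        if len(i2) == 0:
            continue
        for i in i1:                                         # nearest neighbour inside the node
            j = i2[np.argmin(np.linalg.norm(des2[i2] - des1[i], axis=1))]
            pairs.append((i, j))
    return pairs

sift = cv2.SIFT_create()
img1 = cv2.imread("scene1.png", cv2.IMREAD_GRAYSCALE)        # hypothetical test images
img2 = cv2.imread("scene2.png", cv2.IMREAD_GRAYSCALE)
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
w = train_som(des1.astype(np.float32))
print(len(som_matches(des1, des2, w)), "candidate matches via SOM nodes")
```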

CNN-based Opti-Acoustic Transformation for Underwater Feature Matching

  • 장혜수;이영준;김기섭;김아영
    • 로봇학회논문지 / Vol. 15, No. 1 / pp.1-7 / 2020
  • In this paper, we introduce a methodology that utilizes a deep learning-based front-end to enhance underwater feature matching. Both optical cameras and sonar are widely used sensors in underwater research; however, each has its own weaknesses, such as lighting conditions and turbidity for the optical camera and noise for sonar. To overcome these problems, we propose an opti-acoustic transformation method. Since feature detection in sonar images is challenging, we convert the sonar image into an optical-style image: while maintaining the main content of the sonar image, a CNN-based style transfer method changes the style of the image in a way that facilitates feature detection. Finally, we verify the result using cosine similarity comparison and feature matching against the original optical image.
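A sketch of the verification step only (the CNN style-transfer front-end is not shown here): descriptors are extracted from the transformed sonar image and the original optical image, and cosine similarity scores the candidate correspondences. Using ORB descriptors and a 0.9 threshold are assumptions for illustration, not the paper's choices.

```python
# Sketch: pairwise cosine similarity between descriptor sets as a match score.
import cv2
import numpy as np

def cosine_match(desA, desB, thresh=0.9):
    a = desA.astype(np.float32)
    b = desB.astype(np.float32)
    a /= np.linalg.norm(a, axis=1, keepdims=True) + 1e-8
    b /= np.linalg.norm(b, axis=1, keepdims=True) + 1e-8
    sim = a @ b.T                                            # pairwise cosine similarity matrix
    best = sim.argmax(axis=1)
    keep = sim[np.arange(len(a)), best] > thresh             # accept only high-similarity pairs
    return [(i, int(best[i])) for i in np.where(keep)[0]]

orb = cv2.ORB_create()
opt = cv2.imread("optical.png", cv2.IMREAD_GRAYSCALE)        # hypothetical inputs
fake_opt = cv2.imread("sonar_transformed.png", cv2.IMREAD_GRAYSCALE)
_, des_opt = orb.detectAndCompute(opt, None)
_, des_fake = orb.detectAndCompute(fake_opt, None)
print(len(cosine_match(des_fake, des_opt)), "pairs above the similarity threshold")
```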

Study of Feature Based Algorithm Performance Comparison for Image Matching between Virtual Texture Image and Real Image

  • 이유진;이수암
    • 대한원격탐사학회지 / Vol. 38, No. 6-1 / pp.1057-1068 / 2022
  • Aiming at the development of mobile, real-time image-based positioning, this paper examines the feasibility of matching user-captured photographs against virtual texture images by comparing the performance of combinations of feature-based matching algorithms. A feature-based matching algorithm consists of a step that extracts feature points, a step that computes descriptors for the extracted features, and a final step that matches the descriptors extracted from the two images and removes incorrectly matched features. To build the algorithm combinations, the feature extraction step and the descriptor computation step were paired either with the same method or with different methods, and the matching performance of each pairing was compared. The V-World 3D desktop was used to obtain the virtual indoor texture images; it has recently been enhanced with details such as vertical and horizontal protrusions and recesses and is built at a level of detail that carries real image textures. The virtual indoor texture data were therefore used as reference images, and photographs taken at the same locations formed the experimental dataset. After constructing the dataset, the matching success rate and processing time of the matching algorithms were measured, and on this basis an algorithm combination was chosen to improve matching performance. The combinations, built from the strengths of each matching technique, were applied to the constructed dataset to check their applicability, and performance was additionally compared when rotation was introduced. The results show that the combination of the Scale Invariant Feature Transform (SIFT) feature and descriptor achieved the highest matching success rate but also the longest processing time, whereas the combination of the Features from Accelerated Segment Test (FAST) feature and the Oriented FAST and Rotated BRIEF (ORB) descriptor achieved a success rate similar to the SIFT-SIFT combination with much shorter processing time. Furthermore, FAST-ORB remained superior even when a 10° rotation was applied to the dataset. Overall, the FAST-ORB combination is therefore suitable for matching between virtual texture images and real images.
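A short sketch of the detector/descriptor mixing described above: FAST supplies the keypoints and ORB computes the descriptors (the FAST-ORB pairing the study found favourable), with a SIFT-SIFT pipeline alongside for comparison. Default parameters and hypothetical file names are used, not the study's settings.

```python
# Sketch: mixing a detector from one method with a descriptor from another in OpenCV.
import cv2

ref = cv2.imread("vworld_texture.png", cv2.IMREAD_GRAYSCALE)   # hypothetical reference render
qry = cv2.imread("phone_photo.png", cv2.IMREAD_GRAYSCALE)      # hypothetical user photo

# FAST detector + ORB descriptor
fast, orb = cv2.FastFeatureDetector_create(), cv2.ORB_create()
kp1, de1 = orb.compute(ref, fast.detect(ref, None))
kp2, de2 = orb.compute(qry, fast.detect(qry, None))
fast_orb = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(de1, de2)

# SIFT detector + SIFT descriptor
sift = cv2.SIFT_create()
kp3, de3 = sift.detectAndCompute(ref, None)
kp4, de4 = sift.detectAndCompute(qry, None)
sift_sift = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(de3, de4)

print("FAST-ORB matches:", len(fast_orb), " SIFT-SIFT matches:", len(sift_sift))
```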

CLASSIFIED EIGEN BLOCK: LOCAL FEATURE EXTRACTION AND IMAGE MATCHING ALGORITHM

  • Hochul Shin;Kim, Seong-Dae
    • 대한전자공학회:학술대회논문집 / Proceedings of the 2003 Summer Conference IV / pp.2108-2111 / 2003
  • This paper introduces a new local feature extraction method and image matching method for the localization and classification of targets. The proposed method is based on block-by-block projection associated with the directional pattern of each block. Each pattern has its own eigen-vectors, called CEBs (Classified Eigen-Blocks). The proposed block-based image matching method is also robust to translation and occlusion. The performance of the proposed feature extraction and matching method is verified by face localization and FLIR-vehicle-image classification tests.

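A rough interpretation of the classified-eigen-block idea, not the authors' implementation: 8x8 blocks are classified by their dominant gradient direction, a separate PCA basis (the "eigen-blocks") is learned per direction class from a training image, and each block is described by its projection onto the basis of its class. Block size, class count, and basis size are assumptions.

```python
# Sketch: per-direction-class PCA bases for image blocks, and block descriptors
# obtained by projecting each block onto the basis of its class.
import cv2
import numpy as np

B, N_CLASSES, N_EIG = 8, 4, 8   # assumed block size, direction classes, eigen-blocks kept

def blocks_and_classes(gray):
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    blocks, classes = [], []
    for y in range(0, gray.shape[0] - B + 1, B):
        for x in range(0, gray.shape[1] - B + 1, B):
            patch = gray[y:y + B, x:x + B].astype(np.float32).ravel()
            ang = np.arctan2(gy[y:y + B, x:x + B].sum(), gx[y:y + B, x:x + B].sum())
            classes.append(int(((ang + np.pi) / (2 * np.pi)) * N_CLASSES) % N_CLASSES)
            blocks.append(patch - patch.mean())               # zero-mean block
    return np.array(blocks), np.array(classes)

def train_cebs(gray):
    """One PCA basis (the 'classified eigen-blocks') per direction class."""
    blocks, classes = blocks_and_classes(gray)
    bases = {}
    for c in range(N_CLASSES):
        sub = blocks[classes == c]
        if len(sub) < N_EIG:
            continue
        _, _, vt = np.linalg.svd(sub, full_matrices=False)
        bases[c] = vt[:N_EIG]                                 # top eigen-blocks for this class
    return bases

def describe(gray, bases):
    blocks, classes = blocks_and_classes(gray)
    return [(c, bases[c] @ b) for b, c in zip(blocks, classes) if c in bases]

train = cv2.imread("training_image.png", cv2.IMREAD_GRAYSCALE)   # hypothetical images
query = cv2.imread("query_image.png", cv2.IMREAD_GRAYSCALE)
bases = train_cebs(train)
print(len(describe(query, bases)), "block descriptors projected onto class bases")
```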

Feature Extraction for Endoscopic Images Using the Scale Invariant Feature Transform (SIFT)

  • 오장석;김호철;김형률;구자민;김민기
    • 대한전기학회:학술대회논문집 / Proceedings of the 2005 Conference, Information and Control Section / pp.6-8 / 2005
  • Research that uses geometric information in computer vision is active, and the matching problem must be solved before such studies can proceed. Feature points must be extracted for good matching, and many feature extraction methods have been studied in the past; however, no single algorithm applies to all images, which remains a difficulty. In particular, it is not easy to find feature points in endoscopic images: even when an endoscopic image is inspected by eye, it is hard to decide which points should be feature points. Moreover, matching accuracy can only be judged once a sufficient number of feature points are obtained and they are distributed over the whole image. This paper studies an algorithm that can be applied to endoscopic images. The SIFT method shows excellent performance compared with alternative approaches (affine-invariant point detectors, etc.) on general images, but the SIFT parameters used for general images cannot be applied directly to endoscopic images. The goal of this paper is to extract feature points from endoscopic images by controlling the contrast threshold and the curvature threshold among the SIFT parameters. The experimental results show that, by controlling these parameters, the feature points can be better distributed and their number better controlled than with traditional alternatives.

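A small sketch of the parameter control the abstract describes, using OpenCV's SIFT, which exposes the contrast threshold and the edge (curvature) threshold: lowering the contrast threshold and relaxing the edge threshold yields more keypoints on low-contrast endoscopic frames. The specific values below are illustrative, not the paper's.

```python
# Sketch: compare SIFT keypoint counts with default and tuned thresholds.
import cv2

img = cv2.imread("endoscopy_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frame

default_sift = cv2.SIFT_create()                                # contrastThreshold=0.04, edgeThreshold=10
tuned_sift = cv2.SIFT_create(contrastThreshold=0.01, edgeThreshold=20)  # illustrative values

kp_default = default_sift.detect(img, None)
kp_tuned = tuned_sift.detect(img, None)
print("default:", len(kp_default), "keypoints, tuned:", len(kp_tuned), "keypoints")
```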

Image Description and Matching Scheme Using Synthetic Features for Recommendation Service

  • Yang, Won-Keun;Cho, A-Young;Oh, Weon-Geun;Jeong, Dong-Seok
    • ETRI Journal / Vol. 33, No. 4 / pp.589-599 / 2011
  • This paper presents an image description and matching scheme using synthetic features for a recommendation service. The recommendation service is an example of smart search because it offers results before a user's request. In the proposed extraction scheme, an image is described by synthesized spatial and statistical features. The spatial feature is designed to increase discriminability by reflecting delicate variations, while the statistical feature is designed to increase robustness by absorbing small variations. To extract the spatial feature, we partition the image into concentric circles and extract four characteristics using spatial relations. To extract the statistical feature, we apply three transforms to the image and compose a 3D histogram as the final statistical feature. The matching scheme is designed hierarchically using the proposed spatial and statistical features. The results show that each proposed feature outperforms the compared algorithms that use spatial or statistical features, and when the whole proposed extraction and matching scheme is applied, the overall performance reaches 98.44% in terms of correct search ratio.
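A loose sketch of the spatial-feature idea only: the image is partitioned into concentric rings around its centre and a simple per-ring statistic is taken. The ring count and the mean-intensity statistic are assumptions; the paper's four spatial characteristics, three transforms, and 3-D histogram are not reproduced here.

```python
# Sketch: concentric-ring partition and a per-ring mean as a toy spatial descriptor.
import cv2
import numpy as np

def ring_feature(gray, n_rings=5):
    h, w = gray.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2.0, xx - w / 2.0)
    r_max = r.max()
    feat = []
    for i in range(n_rings):
        lo, hi = i * r_max / n_rings, (i + 1) * r_max / n_rings
        mask = (r >= lo) & (r <= hi) if i == n_rings - 1 else (r >= lo) & (r < hi)
        feat.append(float(gray[mask].mean()))                # per-ring mean intensity
    return np.array(feat, dtype=np.float32)

def match_score(f1, f2):
    return float(np.linalg.norm(f1 - f2))                    # smaller is more similar

a = cv2.imread("query.jpg", cv2.IMREAD_GRAYSCALE)            # hypothetical images
b = cv2.imread("candidate.jpg", cv2.IMREAD_GRAYSCALE)
print("ring-feature distance:", match_score(ring_feature(a), ring_feature(b)))
```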

A Fast Image Matching Method for Oblique Video Captured with UAV Platform

  • Byun, Young Gi;Kim, Dae Sung
    • 한국측량학회지 / Vol. 38, No. 2 / pp.165-172 / 2020
  • There is growing interest in vision-based video image matching owing to the constantly developing technology of unmanned systems. The purpose of this paper is the development of a fast and effective matching technique for UAV oblique video images. We first extract initial matching points using the NCC (Normalized Cross-Correlation) algorithm and improve its computational efficiency using integral images. Furthermore, we developed a triangulation-based outlier removal algorithm to extract more robust matching points from the initial matches. To evaluate the performance of the proposed method, it was quantitatively compared with existing image matching approaches. Experimental results demonstrate that the proposed method can process 2.57 frames per second for video image matching and is up to four times faster than existing methods. The proposed method therefore has good potential for various video-based applications that require image matching as a pre-processing step.
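A sketch of the two stages the abstract outlines, assuming OpenCV and SciPy are available: NCC template matching around corner points gives the initial correspondences (cv2.matchTemplate with TM_CCOEFF_NORMED; the integral-image acceleration is left to the library), and a Delaunay-triangulation check drops matches whose triangles flip orientation between frames. Patch and search-window sizes are placeholders, and the outlier test is a simplified stand-in for the paper's algorithm.

```python
# Sketch: NCC-based initial matching followed by a triangulation consistency filter.
import cv2
import numpy as np
from scipy.spatial import Delaunay

def ncc_matches(prev, curr, patch=15, search=41):
    pts = cv2.goodFeaturesToTrack(prev, maxCorners=300, qualityLevel=0.01, minDistance=10)
    matches = []
    r, s = patch // 2, search // 2
    for (x, y) in pts.reshape(-1, 2).astype(int):
        tpl = prev[y - r:y + r + 1, x - r:x + r + 1]
        win = curr[y - s:y + s + 1, x - s:x + s + 1]
        if tpl.shape != (patch, patch) or win.shape[0] <= patch or win.shape[1] <= patch:
            continue                                          # skip points too close to the border
        res = cv2.matchTemplate(win, tpl, cv2.TM_CCOEFF_NORMED)
        _, score, _, loc = cv2.minMaxLoc(res)
        if score > 0.8:                                       # keep only confident correlations
            matches.append(((x, y), (x - s + loc[0] + r, y - s + loc[1] + r)))
    return matches

def triangle_filter(matches):
    """Keep matches whose Delaunay triangles keep the same orientation in both frames."""
    p1 = np.float32([m[0] for m in matches])
    p2 = np.float32([m[1] for m in matches])
    tri = Delaunay(p1)
    bad = set()
    for a, b, c in tri.simplices:
        s1 = np.cross(p1[b] - p1[a], p1[c] - p1[a])           # signed area in the first frame
        s2 = np.cross(p2[b] - p2[a], p2[c] - p2[a])           # signed area in the second frame
        if s1 * s2 < 0:                                       # orientation flipped: suspicious match
            bad.update((a, b, c))
    return [m for i, m in enumerate(matches) if i not in bad]

prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)      # hypothetical video frames
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
good = triangle_filter(ncc_matches(prev, curr))
print(len(good), "matches after triangulation filtering")
```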