• Title/Summary/Keyword: Scale-Invariant Features


Illumination invariant image matching using histogram equalization (히스토그램 평활화를 이용한 조명변화에 강인한 영상 매칭)

  • Oh, Changbeom;Kang, Minsung;Sohn, Kwanghoon
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2011.11a / pp.161-164 / 2011
  • Image matching is a fundamental technique in computer vision and is widely used in areas such as visual tracking and object recognition. However, finding matching points that are robust to changes in scale, viewpoint, and illumination is difficult. Algorithms such as SIFT (Scale Invariant Feature Transform) and SURF (Speeded Up Robust Features) have been proposed to address this, but they still perform unreliably and inaccurately under illumination changes. In this paper, to solve the illumination problem, images are first corrected with histogram equalization and then matched using SURF. Histogram equalization was applied to overcome the problem that few feature points are extracted when generating SURF descriptors from images captured under poor lighting, and we confirmed that the number of feature points increases substantially after correction. The superiority of the proposed algorithm is demonstrated by comparing the matching performance of the original SURF and the improved SURF on image pairs with different illumination.

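The preprocessing idea above is easy to prototype: equalize each image's histogram before detecting and matching local features, so that features survive poor or uneven lighting. Below is a minimal OpenCV sketch of that pipeline; because SURF needs an opencv-contrib build, ORB stands in as the detector here, and the parameter values are illustrative rather than the authors' settings.

```python
import cv2

def equalize_and_match(path_ref, path_query, max_matches=50):
    img1 = cv2.imread(path_ref, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(path_query, cv2.IMREAD_GRAYSCALE)

    # Histogram equalization compensates for global illumination differences
    # before any features are extracted.
    img1, img2 = cv2.equalizeHist(img1), cv2.equalizeHist(img2)

    detector = cv2.ORB_create(nfeatures=2000)      # stand-in for SURF
    kp1, des1 = detector.detectAndCompute(img1, None)
    kp2, des2 = detector.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    return kp1, kp2, matches[:max_matches]
```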

Aerial scene matching using linear features (선형특징을 사용한 항공영상의 정합)

  • 정재훈;박영태
    • Proceedings of the IEEK Conference / 1998.06a / pp.689-692 / 1998
  • Matching two images is an essential step for many computer vision applications. A new approach to scale- and rotation-invariant scene matching is presented. A set of candidate parameters is hypothesized by mapping the angular difference and a new distance measure into the Hough space and by detecting maximally consistent points. The proposed method is shown to be much faster than the conventional one, in which the relaxation process is repeated until convergence, while providing robust matching performance without a priori information on the geometric transformation parameters.


A New Three-dimensional Integrated Multi-index Method for CBIR System

  • Zhang, Mingzhu
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.3 / pp.993-1014 / 2021
  • This paper proposes a new image retrieval method called the 3D integrated multi-index, which fuses SIFT (Scale Invariant Feature Transform) visual words with other features at the indexing level. The advantage of the 3D integrated multi-index is that it produces finer subdivisions of the search space. Compared with the inverted index of a medium-sized codebook, the proposed method only slightly increases preprocessing and query time. In particular, SIFT, contour, and colour features are fused into the integrated multi-index, and the joint cooperation of complementary features significantly reduces the impact of false positive matches, so that effective image retrieval can be achieved. Extensive experiments on five benchmark datasets show that the 3D integrated multi-index significantly improves retrieval accuracy while requiring acceptable memory usage and query time compared with other methods. Importantly, we show that the 3D integrated multi-index is complementary to many prior techniques, which makes our method compare favorably with the state of the art.
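
As a rough illustration of the multi-index idea (not the paper's exact formulation), the sketch below quantizes each local descriptor against three separate codebooks and uses the resulting triple of visual words as the key of an inverted file, so that a candidate image must agree on all three cues at once. Codebook construction, weighting, and scoring are all omitted.

```python
from collections import defaultdict
import numpy as np

def quantize(vec, codebook):
    """Index of the nearest codeword (hard assignment)."""
    return int(np.argmin(np.linalg.norm(codebook - vec, axis=1)))

class ToyMultiIndex:
    """Inverted file keyed by a (SIFT word, contour word, colour word) triple."""

    def __init__(self, sift_cb, contour_cb, colour_cb):
        self.codebooks = (sift_cb, contour_cb, colour_cb)
        self.inverted = defaultdict(list)           # key -> list of image ids

    def _key(self, sift, contour, colour):
        feats = (sift, contour, colour)
        return tuple(quantize(f, cb) for f, cb in zip(feats, self.codebooks))

    def add(self, image_id, sift, contour, colour):
        self.inverted[self._key(sift, contour, colour)].append(image_id)

    def query(self, sift, contour, colour):
        # Only images indexed under the same triple of words are candidates.
        return self.inverted.get(self._key(sift, contour, colour), [])
```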

Pedestrian Classification using CNN's Deep Features and Transfer Learning (CNN의 깊은 특징과 전이학습을 사용한 보행자 분류)

  • Chung, Soyoung;Chung, Min Gyo
    • Journal of Internet Computing and Services / v.20 no.4 / pp.91-102 / 2019
  • In autonomous driving systems, the ability to classify pedestrians in images captured by cameras is very important for pedestrian safety. In the past, pedestrian features were extracted with HOG (Histogram of Oriented Gradients) or SIFT (Scale-Invariant Feature Transform) and then classified with an SVM (Support Vector Machine). However, extracting pedestrian characteristics in such a handcrafted manner has many limitations. Therefore, this paper proposes a method to classify pedestrians reliably and effectively using a CNN's (Convolutional Neural Network) deep features and transfer learning. We experimented with both the fixed feature extractor and the fine-tuning methods, the two representative transfer learning techniques. In particular, for the fine-tuning method we added a new scheme, called M-Fine (Modified Fine-tuning), which divides the layers into transferred and non-transferred parts in three different sizes and adjusts weights only for layers belonging to the non-transferred parts. Experiments on the INRIA Person dataset with five CNN models (VGGNet, DenseNet, Inception V3, Xception, and MobileNet) showed that the CNN's deep features perform better than handcrafted features such as HOG and SIFT, and that the accuracy of Xception (threshold = 0.5) is the highest at 99.61%. MobileNet, which achieved performance similar to Xception while learning 80% fewer parameters, was the best in terms of efficiency. Among the three transfer learning schemes tested, the fine-tuning method performed best. The performance of the M-Fine method was comparable to or slightly lower than that of the fine-tuning method, but higher than that of the fixed feature extractor method.
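
The two transfer-learning settings compared above can be sketched in a few lines of PyTorch: freeze the whole backbone for the fixed-feature-extractor case, or freeze only part of it for a partially fine-tuned variant. The M-Fine scheme in the paper uses three specific layer splits; the single trainable_fraction parameter below is only a placeholder, and MobileNetV2 is chosen simply because MobileNet appears in the experiments.

```python
import torch.nn as nn
from torchvision import models

def build_pedestrian_classifier(mode="fixed", trainable_fraction=0.2):
    # Pretrained backbone; MobileNet is one of the five models in the paper.
    model = models.mobilenet_v2(weights="IMAGENET1K_V1")
    # Replace the 1000-way ImageNet head with a pedestrian / non-pedestrian head.
    model.classifier[1] = nn.Linear(model.last_channel, 2)

    if mode == "fixed":                       # fixed feature extractor
        for p in model.features.parameters():
            p.requires_grad = False
    elif mode == "partial":                   # M-Fine-style partial freezing
        layers = list(model.features.children())
        cut = int(len(layers) * (1 - trainable_fraction))
        for layer in layers[:cut]:            # transferred (frozen) part
            for p in layer.parameters():
                p.requires_grad = False
    # mode == "fine": leave every layer trainable (full fine-tuning)
    return model
```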

Arctic Sea Ice Motion Measurement Using Time-Series High-Resolution Optical Satellite Images and Feature Tracking Techniques (고해상도 시계열 광학 위성 영상과 특징점 추적 기법을 이용한 북극해 해빙 이동 탐지)

  • Hyun, Chang-Uk;Kim, Hyun-cheol
    • Korean Journal of Remote Sensing / v.34 no.6_2 / pp.1215-1227 / 2018
  • Sea ice motion is an important factor in assessing sea ice change because motion affects not only the regional distribution of sea ice but also new ice growth and ice thickness. This study applies multi-temporal high-resolution optical satellite images from Korea Multi-Purpose Satellite-2 (KOMPSAT-2) and Korea Multi-Purpose Satellite-3 (KOMPSAT-3) to measure sea ice motion using the SIFT (Scale-Invariant Feature Transform), SURF (Speeded Up Robust Features), and ORB (Oriented FAST and Rotated BRIEF) feature tracking techniques. To use satellite images from the two different sensors, spatial and radiometric resolutions were adjusted in pre-processing, and the feature tracking techniques were then applied to the pre-processed images. The matched features extracted by SIFT were evenly distributed across the whole image, whereas those extracted by SURF were concentrated around the boundary between ice and ocean, and this regionally biased distribution was even more pronounced for ORB. The feature-tracking processing time decreased in the order SIFT, SURF, ORB. Although the number of matched features from ORB decreased to 59.8% of that from SIFT, the processing time decreased to 8.7% of that of SIFT; the ORB technique is therefore more suitable for fast measurement of sea ice motion.
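
A minimal sketch of the feature-tracking step described above: detect keypoints in two co-registered acquisitions, match them, and convert the matched pairs into pixel displacement vectors (which, combined with the ground sample distance and acquisition interval, yield drift speed). The detector choice mirrors the paper's comparison; all thresholds are defaults, not the authors' settings.

```python
import cv2
import numpy as np

def ice_motion_vectors(img_t0, img_t1, detector_name="ORB"):
    """Pixel displacements of matched features between two acquisitions."""
    make = {"SIFT": cv2.SIFT_create, "ORB": cv2.ORB_create}[detector_name]
    det = make()
    kp0, des0 = det.detectAndCompute(img_t0, None)
    kp1, des1 = det.detectAndCompute(img_t1, None)

    norm = cv2.NORM_L2 if detector_name == "SIFT" else cv2.NORM_HAMMING
    matches = cv2.BFMatcher(norm, crossCheck=True).match(des0, des1)

    p0 = np.float32([kp0[m.queryIdx].pt for m in matches])
    p1 = np.float32([kp1[m.trainIdx].pt for m in matches])
    return p1 - p0          # multiply by GSD / time gap to get drift speed
```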

Automatic Target Recognition by selecting similarity-transform-invariant local and global features (유사변환에 불변인 국부적 특징과 광역적 특징 선택에 의한 자동 표적인식)

  • Sun, Sun-Gu;Park, Hyun-Wook
    • Journal of the Institute of Electronics Engineers of Korea SP / v.39 no.4 / pp.370-380 / 2002
  • This paper proposes an ATR (Automatic Target Recognition) algorithm for identifying non-occluded and occluded military vehicles in natural FLIR (Forward Looking InfraRed) images. After segmenting a target, a radial function is defined from the target boundary to extract global shape features. To extract local shape features of the upper region of a target, a distance function is defined from the boundary points and the line between two extreme points. From these two functions and the target contour, four global and four local shape features are proposed. They are much more invariant to translation, rotation, and scale transforms than traditional feature sets. In the experiments, we show that the proposed feature set is superior to traditional feature sets with respect to similarity-transform invariance and recognition performance.
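
A hedged sketch of the radial-function idea: given a segmented target contour, measure the distance from the centroid to the boundary as a function of angle and normalize by the mean radius. The resulting profile is insensitive to translation and rotation and scale-normalized; the paper's four specific global features are not reproduced here.

```python
import numpy as np

def radial_profile(contour, n_bins=64):
    contour = np.asarray(contour, dtype=float)     # (N, 2) boundary points
    centroid = contour.mean(axis=0)
    d = contour - centroid
    radius = np.hypot(d[:, 0], d[:, 1])
    angle = np.arctan2(d[:, 1], d[:, 0])

    # Resample onto a fixed angular grid so different contours are comparable.
    idx = np.digitize(angle, np.linspace(-np.pi, np.pi, n_bins + 1)) - 1
    profile = np.array([radius[idx == b].mean() if np.any(idx == b) else 0.0
                        for b in range(n_bins)])
    return profile / (profile.mean() + 1e-9)       # scale normalization
```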

Invariant Image Matching using Linear Features (선형특징을 사용한 불변 영상정합 기법)

  • Park, Se-Je;Park, Young-Tae
    • Journal of the Korean Institute of Telematics and Electronics S / v.35S no.12 / pp.55-62 / 1998
  • Matching two images is an essential step for many computer vision applications. A new approach to the scale and rotation invariant scene matching, using linear features, is presented. Scene or model images are described by a set of linear features approximating edge information, which can be obtained by the conventional edge detection, thinning, and piecewise linear approximation. A set of candidate parameters are hypothesized by mapping the angular difference and a new distance measure to the Hough space and by detecting maximally consistent points. These hypotheses are verified by a fast linear feature matching algorithm composed of a single-step relaxation and a Hough technique. The proposed method is shown to be much faster than the conventional one where the relaxation process is repeated until convergence, while providing matching performance robust to the random alteration of the linear features, without a priori information on the geometrical transformation parameters.

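The hypothesis-generation step shared by this entry and the earlier conference version can be sketched as a Hough vote: every pairing of a model line with a scene line votes for a (rotation, scale) cell, and the most consistent cell becomes the transform hypothesis passed on to verification. Bin sizes, the scale range, and the distance measure below are placeholders, not the authors' choices.

```python
import numpy as np

def hough_rotation_scale(model_lines, scene_lines, angle_bins=72,
                         scale_bins=40, max_scale=4.0):
    """Vote every model/scene line pairing into a (rotation, scale) accumulator."""
    def angle_length(line):                  # line = (x1, y1, x2, y2)
        dx, dy = line[2] - line[0], line[3] - line[1]
        return np.arctan2(dy, dx), np.hypot(dx, dy)

    acc = np.zeros((angle_bins, scale_bins))
    for ml in model_lines:
        a_m, len_m = angle_length(np.asarray(ml, float))
        for sl in scene_lines:
            a_s, len_s = angle_length(np.asarray(sl, float))
            dtheta = (a_s - a_m) % np.pi                  # undirected angular difference
            scale = len_s / (len_m + 1e-9)                # length ratio
            i = int(dtheta / np.pi * angle_bins) % angle_bins
            j = min(int(scale / max_scale * scale_bins), scale_bins - 1)
            acc[i, j] += 1

    # The maximally consistent cell is the (rotation, scale) hypothesis.
    i, j = np.unravel_index(np.argmax(acc), acc.shape)
    return (i + 0.5) * np.pi / angle_bins, (j + 0.5) * max_scale / scale_bins
```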

Performance Comparison and Analysis between Keypoints Extraction Algorithms using Drone Images (드론 영상을 이용한 특징점 추출 알고리즘 간의 성능 비교)

  • Lee, Chung Ho;Kim, Eui Myoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.40 no.2 / pp.79-89 / 2022
  • Images taken by drones can quickly provide high-quality 3D spatial information for small regions and have therefore been applied to fields that require rapid decision-making. To construct spatial information from drone images, it is necessary to determine the relationship between images by extracting keypoints from adjacent drone images and performing image matching. In this study, three regions photographed by drone were selected: a region where parking lots and a lake coexist, a downtown region with buildings, and a field region of natural terrain. The performance of the AKAZE (Accelerated-KAZE), BRISK (Binary Robust Invariant Scalable Keypoints), KAZE, ORB (Oriented FAST and Rotated BRIEF), SIFT (Scale Invariant Feature Transform), and SURF (Speeded Up Robust Features) algorithms was analyzed and compared in terms of the distribution of extracted keypoints, the distribution of matched points, processing time, and matching accuracy. In the region where the parking lot and lake coexist, the BRISK algorithm was fast, and the SURF algorithm performed best in keypoint distribution, matched-point distribution, and matching accuracy. In the downtown region with buildings, the AKAZE algorithm was fast, and the SURF algorithm again performed best in keypoint distribution, matched-point distribution, and matching accuracy. In the field region of natural terrain, the keypoints and matched points of the SURF algorithm were evenly distributed throughout the drone image, but the AKAZE algorithm showed the highest matching accuracy and processing speed.
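
A sketch of the kind of benchmark reported above: run several detectors on the same drone image and record keypoint counts and detection times. AKAZE, BRISK, KAZE, ORB, and SIFT ship with opencv-python; SURF additionally requires an opencv-contrib build and is therefore left out of this sketch.

```python
import time
import cv2

DETECTORS = {
    "AKAZE": cv2.AKAZE_create,
    "BRISK": cv2.BRISK_create,
    "KAZE":  cv2.KAZE_create,
    "ORB":   cv2.ORB_create,
    "SIFT":  cv2.SIFT_create,
}

def benchmark_detectors(gray_image):
    """Keypoint count and detection time per algorithm on one image."""
    results = {}
    for name, make in DETECTORS.items():
        det = make()
        t0 = time.perf_counter()
        keypoints = det.detect(gray_image, None)
        results[name] = {"keypoints": len(keypoints),
                         "seconds": time.perf_counter() - t0}
    return results
```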

Feature Matching Algorithm Robust To Viewpoint Change (시점 변화에 강인한 특징점 정합 기법)

  • Jung, Hyun-jo;Yoo, Ji-sang
    • The Journal of Korean Institute of Communications and Information Sciences / v.40 no.12 / pp.2363-2371 / 2015
  • In this paper, we propose a new feature matching algorithm that is robust to viewpoint change by combining the FAST (Features from Accelerated Segment Test) feature detector with the SIFT (Scale Invariant Feature Transform) feature descriptor. The original FAST algorithm unnecessarily produces many feature points along edges in the image; to solve this problem, we refine the detections using principal curvatures. We use the SIFT descriptor to describe the extracted feature points and calculate the homography matrix through RANSAC (RANdom SAmple Consensus) using the matching pairs obtained from the two different viewpoint images. To make feature matching robust to viewpoint change, we classify the matching pairs by calculating the Euclidean distance between the reference-image feature points transformed by the homography and the corresponding feature points in the other viewpoint image. Experimental results show that the proposed algorithm performs better than conventional feature matching algorithms while requiring much less computation.
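
The pipeline described above maps closely onto standard OpenCV calls: FAST corners, SIFT descriptors computed at those corners, RANSAC homography estimation, and classification of matches by reprojection error. The principal-curvature refinement of the FAST detections is omitted here, and the 3-pixel threshold is a placeholder.

```python
import cv2
import numpy as np

def match_and_classify(img_ref, img_view, reproj_thresh=3.0):
    # FAST supplies the keypoints, SIFT supplies the descriptors.
    fast, sift = cv2.FastFeatureDetector_create(), cv2.SIFT_create()
    kp1, des1 = sift.compute(img_ref, fast.detect(img_ref, None))
    kp2, des2 = sift.compute(img_view, fast.detect(img_view, None))

    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Homography from RANSAC; assumes at least four matches survive.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, reproj_thresh)

    # Keep pairs whose reprojection error under H is small.
    proj = cv2.perspectiveTransform(src, H)
    err = np.linalg.norm(proj - dst, axis=2).ravel()
    return [m for m, e in zip(matches, err) if e < reproj_thresh]
```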

Automatic Registration Method for EO/IR Satellite Image Using Modified SIFT and Block-Processing (Modified SIFT와 블록프로세싱을 이용한 적외선과 광학 위성영상의 자동정합기법)

  • Lee, Kang-Hoon;Choi, Tae-Sun
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.4 no.3 / pp.174-181 / 2011
  • A new registration method for IR and EO images is proposed in this paper. IR sensors are applicable to many areas because, unlike EO sensors, they capture thermal radiation energy. However, it is difficult to extract and match features in IR images because of their low contrast compared with EO images. To register the two image types, we used a modified SIFT (Scale Invariant Feature Transform) and block processing to increase feature distinctiveness. To remove outliers, we applied RANSAC (RANdom SAmple Consensus) to each block. Finally, we unified the matched features into a single coordinate system and removed outliers again. We used IR images in the 3-5 μm range, and our experimental results showed robust registration with IR images.
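
A sketch of the block-processing idea under stated assumptions: split the (already roughly aligned) EO/IR pair into tiles, match features tile by tile, discard per-tile outliers with RANSAC, then pool the survivors and fit one global transform. Plain SIFT stands in for the paper's modified SIFT, and the grid size and threshold are placeholders.

```python
import cv2
import numpy as np

def blockwise_register(eo_img, ir_img, grid=(4, 4), reproj_thresh=3.0):
    sift = cv2.SIFT_create()                      # stand-in for the modified SIFT
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    h, w = eo_img.shape[:2]
    bh, bw = h // grid[0], w // grid[1]
    pooled_src, pooled_dst = [], []

    for r in range(grid[0]):
        for c in range(grid[1]):
            y0, x0 = r * bh, c * bw
            kp1, des1 = sift.detectAndCompute(eo_img[y0:y0 + bh, x0:x0 + bw], None)
            kp2, des2 = sift.detectAndCompute(ir_img[y0:y0 + bh, x0:x0 + bw], None)
            if des1 is None or des2 is None:
                continue
            matches = matcher.match(des1, des2)
            if len(matches) < 4:
                continue
            # Shift block-local coordinates into the unified image frame.
            src = np.float32([kp1[m.queryIdx].pt for m in matches]) + (x0, y0)
            dst = np.float32([kp2[m.trainIdx].pt for m in matches]) + (x0, y0)
            _, mask = cv2.findHomography(src, dst, cv2.RANSAC, reproj_thresh)
            if mask is None:
                continue
            keep = mask.ravel().astype(bool)
            pooled_src.append(src[keep])
            pooled_dst.append(dst[keep])

    # Second outlier removal over the pooled matches, then the global transform.
    src_all, dst_all = np.vstack(pooled_src), np.vstack(pooled_dst)
    H, _ = cv2.findHomography(src_all, dst_all, cv2.RANSAC, reproj_thresh)
    return H
```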