• Title/Summary/Keyword: Local Invariant Feature

Affine Invariant Local Descriptors for Face Recognition (얼굴인식을 위한 어파인 불변 지역 서술자)

  • Gao, Yongbin; Lee, Hyo Jong
    • KIPS Transactions on Software and Data Engineering, v.3 no.9, pp.375-380, 2014
  • Under controlled environments, such as fixed viewpoints or consistent illumination, face recognition performance is usually high enough to be acceptable. In real-world conditions, however, face recognition remains a challenging task. The SIFT (Scale-Invariant Feature Transform) algorithm is scale and rotation invariant, but it is effective only for small viewpoint changes and often fails when the viewpoint of a face changes over a wide range. In this paper, we use Affine SIFT (ASIFT) to detect affine-invariant local descriptors for face recognition under wide viewpoint changes; ASIFT is an extension of the SIFT algorithm designed to address this weakness. In our scheme, ASIFT is applied only to the gallery face, while standard SIFT is applied to the probe face. ASIFT generates a series of different viewpoints using affine transformations, so it tolerates viewpoint differences between the gallery and probe faces. Experimental results show that our framework achieves higher recognition accuracy than the original SIFT algorithm on the FERET database.
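
A minimal sketch of the asymmetric scheme summarized above: affine viewpoint simulation plus SIFT on the gallery face, plain SIFT on the probe face, with matches scored by the ratio test. OpenCV (>= 4.4) is assumed, and the tilt/angle grid, ratio threshold, and helper names are illustrative choices, not the authors' code.

    # Illustrative only: ASIFT-style gallery views matched against a plain-SIFT probe face.
    import cv2
    import numpy as np

    def simulate_viewpoints(img, tilts=(1.0, 1.5, 2.0), angles=(0, 30, 60, 90, 120, 150)):
        # Crude stand-in for ASIFT's viewpoint simulation: rotate, then squeeze along x.
        h, w = img.shape[:2]
        for t in tilts:
            for a in angles:
                if t == 1.0 and a != 0:
                    continue  # tilt 1.0 is the original view
                M = cv2.getRotationMatrix2D((w / 2, h / 2), a, 1.0)
                M[0, :] /= t  # approximate an out-of-plane tilt
                yield cv2.warpAffine(img, M, (w, h))

    def match_score(gallery_gray, probe_gray, ratio=0.75):
        sift = cv2.SIFT_create()
        bf = cv2.BFMatcher(cv2.NORM_L2)
        _, probe_desc = sift.detectAndCompute(probe_gray, None)   # plain SIFT on the probe face
        score = 0
        for view in simulate_viewpoints(gallery_gray):            # ASIFT-style views of the gallery face only
            _, gal_desc = sift.detectAndCompute(view, None)
            if gal_desc is None or probe_desc is None:
                continue
            for pair in bf.knnMatch(gal_desc, probe_desc, k=2):
                if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
                    score += 1                                     # Lowe's ratio test
        return score  # the gallery identity with the highest score is reported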

Object Recognition for Markerless Augmented Reality Embodiment (마커 없는 증강 현실 구현을 위한 물체인식)

  • Paul, Anjan Kumar; Lee, Hyung-Jin; Kim, Young-Bum; Islam, Mohammad Khairul; Baek, Joong-Hwan
    • Journal of Advanced Navigation Technology, v.13 no.1, pp.126-133, 2009
  • In this paper, we propose an object recognition technique for implementing markerless augmented reality. The Scale-Invariant Feature Transform (SIFT) is used to find local features in object images. These features are invariant to scale, rotation, and translation, and partially invariant to illumination changes. The extracted features are distinctive and are matched against features in the scene image; if the trained image matches properly, the object is expected to be found in the scene. In this paper, an object is located in a scene by matching template images generated from the first frame of the scene. Experimental results for four kinds of objects show that the proposed technique performs well.
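
A minimal sketch of the template-to-scene matching step described above, using SIFT with a ratio test and a RANSAC homography check in OpenCV; the function name, thresholds, and the homography verification are assumptions on my part, not details from the paper.

    import cv2
    import numpy as np

    def find_object(template_gray, scene_gray, ratio=0.75, min_inliers=10):
        sift = cv2.SIFT_create()
        kp_t, des_t = sift.detectAndCompute(template_gray, None)
        kp_s, des_s = sift.detectAndCompute(scene_gray, None)
        if des_t is None or des_s is None:
            return None
        matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_t, des_s, k=2)
        good = [m[0] for m in matches if len(m) == 2 and m[0].distance < ratio * m[1].distance]
        if len(good) < min_inliers:
            return None
        src = np.float32([kp_t[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kp_s[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)   # reject geometrically inconsistent matches
        if H is None or int(mask.sum()) < min_inliers:
            return None
        h, w = template_gray.shape[:2]
        corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
        return cv2.perspectiveTransform(corners, H)               # object outline in the scene, usable for AR overlay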

Rotation-Invariant Texture Classification Using Gabor Wavelet (Gabor 웨이블릿을 이용한 회전 변화에 무관한 질감 분류 기법)

  • Kim, Won-Hee; Yin, Qingbo; Moon, Kwang-Seok; Kim, Jong-Nam
    • Journal of Korea Multimedia Society, v.10 no.9, pp.1125-1134, 2007
  • In this paper, we propose a new approach to rotation-invariant texture classification based on Gabor wavelets. Conventional methods suffer from low classification rates on large texture databases. In the proposed method, we define two feature groups, a global feature vector and a local feature matrix, both obtained from Gabor wavelet filtering. Using these feature groups, we define an improved discriminant and obtain high classification rates on a large texture database in our experiments. By exploiting the spectral symmetry of texture images, the number of tests is reduced by nearly 50%. Consequently, the correct classification rate improves by 2.3% to 15.6%, depending on the method compared against, over 112 Brodatz texture classes.
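
A rough sketch of Gabor filter-bank feature extraction in the spirit of the two feature groups mentioned above; the scale/orientation grid and filter parameters are my own illustrative choices, and the paper's improved discriminant is not reproduced.

    import cv2
    import numpy as np

    def gabor_features(img_gray, scales=4, orientations=6, ksize=31):
        img = np.float32(img_gray)
        means = np.zeros((scales, orientations))
        stds = np.zeros((scales, orientations))
        for s in range(scales):
            sigma = 2.0 * (s + 1)   # assumed scale progression
            lambd = 4.0 * (s + 1)   # wavelength grows with scale
            for o in range(orientations):
                theta = np.pi * o / orientations
                kern = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, 0.5)
                resp = cv2.filter2D(img, cv2.CV_32F, kern)
                means[s, o] = np.abs(resp).mean()
                stds[s, o] = resp.std()
        global_vec = np.hstack([means.ravel(), stds.ravel()])  # "global feature vector"
        local_mat = means                                      # "local feature matrix" (scale x orientation)
        return global_vec, local_mat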

A Lightweight Real-Time Small IR Target Detection Algorithm to Reduce Scale-Invariant Computational Overhead (스케일 불변적인 연산량 감소를 위한 경량 실시간 소형 적외선 표적 검출 알고리즘)

  • Ban, Jong-Hee; Yoo, Joonhyuk
    • IEMEK Journal of Embedded Systems and Applications, v.12 no.4, pp.231-238, 2017
  • Detecting small infrared targets at long range in low-SCR images is very difficult. The earlier Local Contrast Method (LCM), based on the human visual system, detects small targets well by suppressing the background through a local contrast measure, but its slow processing speed, caused by heavy multi-scale processing overhead, makes it unsuitable for many real-time applications. This paper presents a lightweight real-time small target detection algorithm, the Improved Selective Local Contrast Method (ISLCM), designed to reduce this scale-invariant computational overhead. The proposed ISLCM applies an improved local contrast measure only to a predicted selective region, so it achieves detection performance comparable to the original LCM while keeping the scale-invariant computational load low by exploiting both adaptive scale estimation and small-target feature feasibility. Experimental results show that the proposed algorithm reduces computational overhead considerably while maintaining detection performance comparable to the original LCM.
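
For orientation only, a sketch of the single-scale local contrast measure underlying the LCM family; this is the basic idea, not the paper's ISLCM with its selective-region prediction and adaptive scale estimation, and the cell size and threshold are placeholders.

    import cv2
    import numpy as np

    def local_contrast_map(ir_gray, cell=9):
        # Contrast of each center cell against the brightest mean of its 8 neighbor cells.
        img = np.float32(ir_gray)
        center_max = cv2.dilate(img, np.ones((cell, cell), np.uint8))   # peak value inside the center cell
        cell_mean = cv2.boxFilter(img, cv2.CV_32F, (cell, cell))        # mean of each cell
        neighbor_best = np.zeros_like(img)
        for dy in (-cell, 0, cell):
            for dx in (-cell, 0, cell):
                if dy == 0 and dx == 0:
                    continue
                shifted = np.roll(np.roll(cell_mean, dy, axis=0), dx, axis=1)
                neighbor_best = np.maximum(neighbor_best, shifted)
        return (center_max ** 2) / (neighbor_best + 1e-6)               # small bright targets stand out

    # usage: detections = local_contrast_map(frame) > threshold   (threshold tuned per sensor)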

Affine Local Descriptors for Viewpoint Invariant Face Recognition

  • Gao, Yongbin; Lee, Hyo Jong
    • Proceedings of the Korea Information Processing Society Conference, 2014.04a, pp.781-784, 2014
  • Face recognition under controlled settings, such as limited viewpoint and illumination changes, can now achieve good performance, but real-world face recognition remains challenging. In this paper, we use Affine SIFT (ASIFT) to detect affine-invariant local descriptors for face recognition under large viewpoint changes. Affine SIFT is an extension of the SIFT algorithm: SIFT is scale and rotation invariant and works well for small viewpoint changes in face recognition, but it fails when large viewpoint changes exist. In our scheme, Affine SIFT is applied to both the gallery face and the probe face, generating a series of different viewpoints using affine transformations; it therefore tolerates viewpoint differences between the gallery and probe faces. Experimental results show that our framework achieves better recognition accuracy than the SIFT algorithm on the FERET database.
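
As a small illustration of the viewpoint-simulation step, the (tilt, rotation) sampling grid commonly used in the ASIFT literature is sketched below; the exact grid used in this paper is not stated here, so the values are assumptions.

    import numpy as np

    def asift_view_grid(max_k=5, b=72.0):
        # Tilts t = sqrt(2)^k with rotation step b/t degrees, as in the original ASIFT sampling.
        views = [(1.0, 0.0)]                  # the untilted original view
        for k in range(1, max_k + 1):
            t = np.sqrt(2.0) ** k
            views += [(t, a) for a in np.arange(0.0, 180.0, b / t)]
        return views

    # In the scheme above this grid is applied to both gallery and probe faces; each simulated
    # view is processed with standard SIFT and matches are accumulated over view pairs.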

Robust Features and Accurate Inliers Detection Framework: Application to Stereo Ego-motion Estimation

  • Min, Haigen; Zhao, Xiangmo; Xu, Zhigang; Zhang, Licheng
    • KSII Transactions on Internet and Information Systems (TIIS), v.11 no.1, pp.302-320, 2017
  • In this paper, a robust feature detection and matching strategy for visual odometry based on stereo image sequences is proposed. First, AKAZE, a sparse multiscale 2D local invariant feature detection and description algorithm, is adopted to extract interest points, and a robust matching strategy is introduced to match the AKAZE descriptors. To remove outliers, which are either mismatched features or features on dynamic objects, an improved random sample consensus (RANSAC) outlier rejection scheme is presented, which allows the proposed method to be applied in dynamic environments. Geometric constraints are then incorporated into the motion estimation without time-consuming 3D scene reconstruction. Finally, an iterated sigma-point Kalman filter is adopted to refine the motion estimates. The presented ego-motion scheme is evaluated on benchmark datasets and compared with state-of-the-art approaches on data captured on campus in a considerably cluttered environment, where its advantages are demonstrated.
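
A hedged sketch of the feature front end only (AKAZE detection, ratio-test matching, RANSAC outlier rejection, essential-matrix pose), using standard OpenCV calls; the paper's improved RANSAC, geometric constraints, and iterated sigma-point Kalman filter are not reproduced here.

    import cv2
    import numpy as np

    def relative_pose(img_prev, img_curr, K, ratio=0.8):
        akaze = cv2.AKAZE_create()
        kp1, des1 = akaze.detectAndCompute(img_prev, None)
        kp2, des2 = akaze.detectAndCompute(img_curr, None)
        if des1 is None or des2 is None:
            return None
        matches = cv2.BFMatcher(cv2.NORM_HAMMING).knnMatch(des1, des2, k=2)   # binary AKAZE descriptors
        good = [m[0] for m in matches if len(m) == 2 and m[0].distance < ratio * m[1].distance]
        pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
        # Plain RANSAC here; the paper uses an improved variant that also rejects features on dynamic objects.
        E, inlier_mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                              prob=0.999, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inlier_mask)
        return R, t   # rotation and (unit-scale) translation between the two frames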

Automatic Registration between EO and IR Images of KOMPSAT-3A Using Block-based Image Matching

  • Kang, Hyungseok
    • Korean Journal of Remote Sensing, v.36 no.4, pp.545-555, 2020
  • This paper focuses on automatic registration between EO (Electro-Optical) and IR (InfraRed) satellite images with different spectral properties, using a block-based approach and a simple preprocessing technique to enhance feature matching. When unpreprocessed EO and IR images from the KOMPSAT-3A satellite are fed to local feature matching algorithms (Scale-Invariant Feature Transform, Speeded-Up Robust Features, etc.), registration generally fails, either because few feature points are detected or because many are detected but the pairs are mismatched. We propose a new registration method that improves feature matching by combining a block-based registration process over an image divided into nine blocks with a preprocessing step based on adaptive histogram equalization. The proposed method performs better, both on visual inspection and in terms of I-RMSE, than registration without these techniques. This study can be applied to automatic registration between various images acquired from different sensors.
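
A sketch of the idea under assumed parameters (CLAHE settings, a 3x3 grid, SIFT with a ratio test): adaptive histogram equalization followed by per-block matching between the EO and IR images. This is not the authors' implementation and assumes the two input images have the same size.

    import cv2
    import numpy as np

    def blockwise_tie_points(eo_gray, ir_gray, grid=3, ratio=0.8):
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))   # adaptive histogram equalization
        eo, ir = clahe.apply(eo_gray), clahe.apply(ir_gray)
        sift = cv2.SIFT_create()
        bf = cv2.BFMatcher(cv2.NORM_L2)
        h, w = eo.shape[:2]
        bh, bw = h // grid, w // grid
        tie_points = []
        for i in range(grid):
            for j in range(grid):                                     # image divided into nine blocks
                ys, xs = slice(i * bh, (i + 1) * bh), slice(j * bw, (j + 1) * bw)
                kp1, d1 = sift.detectAndCompute(eo[ys, xs], None)
                kp2, d2 = sift.detectAndCompute(ir[ys, xs], None)
                if d1 is None or d2 is None:
                    continue
                for m in bf.knnMatch(d1, d2, k=2):
                    if len(m) == 2 and m[0].distance < ratio * m[1].distance:
                        p1 = np.array(kp1[m[0].queryIdx].pt) + [j * bw, i * bh]
                        p2 = np.array(kp2[m[0].trainIdx].pt) + [j * bw, i * bh]
                        tie_points.append((p1, p2))                   # EO/IR tie point in full-image coordinates
        return tie_points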

Automatic Registration of High Resolution Satellite Images using Local Properties of Tie Points (지역적 매칭쌍 특성에 기반한 고해상도영상의 자동기하보정)

  • Han, You-Kyung; Byun, Young-Gi; Choi, Jae-Wan; Han, Dong-Yeob; Kim, Yong-Il
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.28 no.3, pp.353-359, 2010
  • In this paper, we propose automatic image-to-image registration of high-resolution satellite images that uses local properties of tie points to improve registration accuracy. In addition to descriptor matching, the spatial distance between interest points extracted from the reference and sensed images by the Scale-Invariant Feature Transform (SIFT) is used to extract tie points. Coefficients of an affine transform between the images are estimated from invariant-descriptor-based matching, and the interest points of the sensed image are transformed into the reference coordinate system using these coefficients. The spatial distance between the transformed interest points of the sensed image and the interest points of the reference image is then computed for a secondary matching step. Finally, a piecewise linear function is applied to the matched tie points for automatic registration of the high-resolution images. Compared with the SIFT-based method, the proposed method extracts spatially well-distributed tie points.
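
A sketch of the two-stage tie-point extraction described above (descriptor matching, affine estimation, then a spatial-distance check in the reference coordinates); the thresholds are assumptions, and the final piecewise linear warping is left out.

    import cv2
    import numpy as np

    def refine_tie_points(ref_gray, sen_gray, ratio=0.75, max_dist=5.0):
        sift = cv2.SIFT_create()
        kp_r, d_r = sift.detectAndCompute(ref_gray, None)
        kp_s, d_s = sift.detectAndCompute(sen_gray, None)
        matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d_s, d_r, k=2)
        good = [m[0] for m in matches if len(m) == 2 and m[0].distance < ratio * m[1].distance]
        src = np.float32([kp_s[m.queryIdx].pt for m in good])            # sensed-image points
        dst = np.float32([kp_r[m.trainIdx].pt for m in good])            # reference-image points
        A, _ = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)         # affine from descriptor matches
        warped = cv2.transform(src.reshape(-1, 1, 2), A).reshape(-1, 2)  # sensed points in reference coords
        # Secondary matching: keep pairs whose transformed sensed point lies close to its reference
        # counterpart; a piecewise linear model would then be fitted to the surviving tie points.
        dists = np.linalg.norm(warped - dst, axis=1)
        return src[dists < max_dist], dst[dists < max_dist]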

Object Detection and Classification Using Extended Descriptors for Video Surveillance Applications (비디오 감시 응용에서 확장된 기술자를 이용한 물체 검출과 분류)

  • Islam, Mohammad Khairul; Jahan, Farah; Min, Jae-Hong; Baek, Joong-Hwan
    • Journal of the Institute of Electronics Engineers of Korea SP, v.48 no.4, pp.12-20, 2011
  • In this paper, we propose an efficient object detection and classification algorithm for video surveillance applications. Previous research has mainly concentrated on either object detection or classification using a particular type of feature, e.g., the Scale-Invariant Feature Transform (SIFT) or Speeded-Up Robust Features (SURF). Here we propose an algorithm that performs object detection and classification jointly. We combine heterogeneous types of features, such as texture and color distributions from local patches, to increase detection and classification rates. Object detection is performed by spatial clustering of interest points, while a Bag-of-Words model and a Naive Bayes classifier are used for image representation and classification, respectively. Experimental results show that the combined feature yields a better object classification rate than the individual local descriptors.
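
A toy sketch of the Bag-of-Words plus Naive Bayes pipeline mentioned above; SIFT descriptors stand in for the paper's combined texture/color patch features, and the vocabulary size and classifier variant are assumptions.

    import cv2
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.naive_bayes import GaussianNB

    def bow_histogram(img_gray, kmeans):
        _, des = cv2.SIFT_create().detectAndCompute(img_gray, None)
        if des is None:
            return np.zeros(kmeans.n_clusters)
        words = kmeans.predict(des)                                      # assign each descriptor to a visual word
        hist, _ = np.histogram(words, bins=np.arange(kmeans.n_clusters + 1))
        return hist / max(hist.sum(), 1)                                 # normalized word histogram

    def train(images, labels, vocab_size=100):
        all_des = np.vstack([cv2.SIFT_create().detectAndCompute(im, None)[1] for im in images])
        kmeans = KMeans(n_clusters=vocab_size, n_init=10).fit(all_des)   # visual vocabulary
        X = np.array([bow_histogram(im, kmeans) for im in images])
        clf = GaussianNB().fit(X, labels)                                # Naive Bayes classifier
        return kmeans, clf

    # predict: clf.predict([bow_histogram(test_img, kmeans)])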

A Rotation Invariant Image Retrieval with Local Features

  • You, Hee-Jun; Shin, Dae-Kyu; Kim, Dong-Hoon; Kim, Hyun-Sool; Park, Sang-Hui
    • International Journal of Control, Automation, and Systems, v.1 no.3, pp.332-338, 2003
  • Content-based image retrieval is the retrieval of images from a database that are visually similar to given example images. Gabor functions and Gabor filters are regarded as excellent tools for feature extraction and texture segmentation, but they have the disadvantage of performing poorly on rotated images because the filters are orientation-specific. This paper proposes a method that extracts local texture features, using a rotation-invariant Gabor wavelet filter, from blocks centered on interest points detected in an image. We also propose a method of comparing images by the pattern histograms of features classified with VQ (Vector Quantization).
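
An informal sketch of the retrieval idea (rotation-tolerant Gabor block features around interest points, then VQ-histogram comparison); the circular-shift trick, the corner detector, and all parameters are illustrative choices rather than the paper's exact design.

    import cv2
    import numpy as np

    def block_gabor_feature(block, orientations=8, sigma=4.0, lambd=8.0, ksize=21):
        resp = []
        for o in range(orientations):
            kern = cv2.getGaborKernel((ksize, ksize), sigma, np.pi * o / orientations, lambd, 0.5)
            resp.append(np.abs(cv2.filter2D(np.float32(block), cv2.CV_32F, kern)).mean())
        resp = np.array(resp)
        return np.roll(resp, -int(resp.argmax()))   # shift so the dominant orientation comes first

    def image_histogram(img_gray, codebook, block=32):
        # codebook: a KMeans model fitted beforehand on block features from the image database
        kps = cv2.goodFeaturesToTrack(img_gray, maxCorners=200, qualityLevel=0.01, minDistance=block)
        feats = []
        if kps is not None:
            for (x, y) in kps.reshape(-1, 2).astype(int):
                patch = img_gray[max(y - block // 2, 0): y + block // 2,
                                 max(x - block // 2, 0): x + block // 2]
                if patch.size:
                    feats.append(block_gabor_feature(patch))
        if not feats:
            return np.zeros(codebook.n_clusters)
        words = codebook.predict(np.float64(feats))                   # VQ: assign each block to a codeword
        hist, _ = np.histogram(words, bins=np.arange(codebook.n_clusters + 1))
        return hist / hist.sum()                                      # compare with e.g. L1 distance for retrieval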