• Title/Abstract/Keyword: affine invariant feature

Search results: 24

A New Shape Adaptation Scheme to Affine Invariant Detector

  • Liu, Congxin; Yang, Jie; Zhou, Yue; Feng, Deying
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 4, No. 6 / pp.1253-1272 / 2010
  • In this paper, we propose a new affine shape adaptation scheme for affine invariant feature detectors, for which convergence stability is still an open problem. The paper examines the relation between the integration scale matrix of the next iteration and the current second moment matrix, and finds that convergence stability can be improved by adjusting the relation between the two matrices instead of keeping them always proportional, as in previous methods. By estimating and updating the shape of the integration kernel and differentiation kernel in each iteration based on the anisotropy of the current second moment matrix, we propose a coarse-to-fine affine shape adaptation scheme that adjusts the pace of convergence and enables the process to converge smoothly. Feature matching experiments demonstrate that the proposed approach improves the convergence ratio and repeatability compared with current schemes that use a relatively fixed integration kernel.
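
For readers who want a concrete picture of the iteration the abstract describes, the following is a minimal sketch of generic affine shape adaptation driven by the second moment matrix and its anisotropy. It is not the authors' coarse-to-fine scheme; the fixed scales, the convergence tolerance, and the `warp_patch` callback are illustrative assumptions.

```python
# Generic iterative affine shape adaptation around one keypoint (sketch).
import numpy as np
from scipy.ndimage import gaussian_filter

def second_moment_matrix(patch, sigma_d=1.0, sigma_i=2.0):
    """Second moment matrix (structure tensor) of a patch, averaged over the patch."""
    Lx = gaussian_filter(patch, sigma_d, order=(0, 1))   # derivative along x
    Ly = gaussian_filter(patch, sigma_d, order=(1, 0))   # derivative along y
    Ixx = gaussian_filter(Lx * Lx, sigma_i).mean()
    Ixy = gaussian_filter(Lx * Ly, sigma_i).mean()
    Iyy = gaussian_filter(Ly * Ly, sigma_i).mean()
    return np.array([[Ixx, Ixy], [Ixy, Iyy]])

def adapt_shape(warp_patch, max_iter=20, eps=0.05):
    """Iterate until the second moment matrix of the normalized patch is nearly
    isotropic.  `warp_patch(U)` is a hypothetical callback that returns the image
    patch sampled through the 2x2 normalization transform U."""
    U = np.eye(2)
    for _ in range(max_iter):
        mu = second_moment_matrix(warp_patch(U))
        w, V = np.linalg.eigh(mu)
        w = np.maximum(w, 1e-12)                     # guard against flat patches
        mu_isqrt = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
        U = mu_isqrt @ U                             # normalize so mu -> identity
        U /= np.sqrt(np.linalg.det(U))               # keep det(U) = 1
        anisotropy = np.sqrt(w.max() / w.min())      # eigenvalue ratio of mu
        if anisotropy - 1.0 < eps:                   # converged: nearly isotropic
            return U, True
    return U, False
```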

Viewpoint Unconstrained Face Recognition Based on Affine Local Descriptors and Probabilistic Similarity

  • Gao, Yongbin; Lee, Hyo Jong
    • Journal of Information Processing Systems / Vol. 11, No. 4 / pp.643-654 / 2015
  • Face recognition under controlled settings, such as limited viewpoint and illumination change, can achieve good performance nowadays. However, real-world face recognition is still challenging. In this paper, we propose combining Affine Scale Invariant Feature Transform (SIFT) and probabilistic similarity for face recognition under large viewpoint change. Affine SIFT is an extension of the SIFT algorithm that detects affine invariant local descriptors; it generates a series of different viewpoints using affine transformations and thereby tolerates a viewpoint difference between the gallery face and the probe face. However, the human face is not planar, as it contains significant 3D depth, so Affine SIFT does not work well for large changes in pose. To complement this, we combine it with a probabilistic similarity, which computes the log likelihood between the probe and gallery faces based on a sum of squared differences (SSD) distribution learned in an offline process. Our experimental results show that our framework achieves considerably better recognition accuracy than the other algorithms compared on the FERET database.
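
As a rough illustration of the probabilistic-similarity step, the sketch below models the SSD between descriptors of genuine pairs with a single Gaussian learned offline and scores a probe/gallery pair by its log likelihood under that model. The Gaussian form and the function names are assumptions, not the authors' exact formulation.

```python
# SSD-based log-likelihood scoring (sketch of the probabilistic similarity idea).
import numpy as np

def fit_ssd_model(ssd_samples):
    """Offline learning: fit a Gaussian to SSD values of known genuine pairs."""
    mu = float(np.mean(ssd_samples))
    var = float(np.var(ssd_samples)) + 1e-12
    return mu, var

def log_likelihood(probe_desc, gallery_desc, model):
    """Log likelihood that probe and gallery descriptors belong to the same
    identity, based on their SSD and the learned Gaussian model."""
    mu, var = model
    ssd = float(np.sum((probe_desc - gallery_desc) ** 2))
    return -0.5 * (np.log(2 * np.pi * var) + (ssd - mu) ** 2 / var)

# Usage idea: pick the gallery identity with the highest total log likelihood
# over its matched descriptors (the matching itself is done with Affine SIFT).
```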

An Algorithm for Pose Estimation of a Robot Using Scale-Invariant Feature Transform

  • 이재광; 허욱열; 김학일
    • 대한전기학회:학술대회논문집 / 대한전기학회 2004년도 학술대회 논문집 정보 및 제어부문 / pp.517-519 / 2004
  • This paper describes an approach to estimating a robot pose from an image. The pose estimation algorithm can be broken down into three stages: extracting scale-invariant features, matching these features, and calculating an affine invariant. In the first step, a mono camera mounted on the robot captures an image of the environment, feature extraction is performed on the captured image, and the extracted features are recorded in a database. In the matching stage, a Random Sample Consensus (RANSAC) method is employed to match the features. After matching, the robot pose is estimated from the positions of the features by calculating an affine invariant. The algorithm is implemented and demonstrated in a Matlab program.
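
The three stages listed in the abstract map naturally onto standard OpenCV calls; the following is a rough sketch under that assumption. It is not the authors' Matlab implementation, and the image paths and RANSAC threshold are placeholders.

```python
# SIFT extraction, ratio-test matching, and RANSAC affine estimation (sketch).
import cv2
import numpy as np

db_img = cv2.imread("database_view.png", cv2.IMREAD_GRAYSCALE)
cur_img = cv2.imread("current_view.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(db_img, None)    # features stored in the database
kp2, des2 = sift.detectAndCompute(cur_img, None)   # features from the captured image

# Nearest-neighbour matching with Lowe's ratio test
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]

src = np.float32([kp1[m.queryIdx].pt for m in good])
dst = np.float32([kp2[m.trainIdx].pt for m in good])

# RANSAC rejects outlier matches while estimating the affine transform;
# the in-plane rotation/translation of the robot can then be read off A.
A, inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC, ransacReprojThreshold=3.0)
print("estimated 2x3 affine transform:\n", A)
```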

Affine Local Descriptors for Viewpoint Invariant Face Recognition

  • Gao, Yongbin; Lee, Hyo Jong
    • 한국정보처리학회:학술대회논문집 / 한국정보처리학회 2014년도 춘계학술발표대회 / pp.781-784 / 2014
  • Face recognition under controlled settings, such as limited viewpoint and illumination change, can achieve good performance nowadays. However, real-world face recognition is still challenging. In this paper, we use Affine SIFT to detect affine invariant local descriptors for face recognition under large viewpoint changes. Affine SIFT is an extension of the SIFT algorithm. SIFT is scale and rotation invariant, which is effective for small viewpoint changes in face recognition, but it fails when a large viewpoint change exists. In our scheme, Affine SIFT is applied to both the gallery face and the probe face, generating a series of different viewpoints using affine transformations. Therefore, Affine SIFT tolerates a viewpoint difference between the gallery face and the probe face. Experimental results show that our framework achieves better recognition accuracy than the SIFT algorithm on the FERET database.
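
The viewpoint-simulation idea behind Affine SIFT can be sketched as below: warp the input face with a small set of affine "tilt" transforms and collect SIFT keypoints from every simulated view. The tilt/angle grid is a coarse illustration, not the full ASIFT sampling of latitude and longitude.

```python
# Simulating viewpoints with affine warps and pooling SIFT features (sketch).
import cv2
import numpy as np

def simulated_views_sift(gray):
    sift = cv2.SIFT_create()
    h, w = gray.shape
    all_kps, all_des = [], []
    for tilt in (1.0, 1.4, 2.0):                 # 1.0 = original view
        for angle in (0, 45, 90, 135):           # in-plane rotation in degrees
            A = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
            A[0, :] /= tilt                      # squeeze the x-axis to mimic a camera tilt
            warped = cv2.warpAffine(gray, A, (w, h))
            kps, des = sift.detectAndCompute(warped, None)
            if des is not None:
                all_kps.extend(kps)              # note: coordinates refer to the warped view
                all_des.append(des)
    return all_kps, np.vstack(all_des)

# Both the gallery face and the probe face are expanded this way, so a match
# can still be found when their viewpoints differ substantially.
```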

MEGH: A New Affine Invariant Descriptor

  • Dong, Xiaojie; Liu, Erqi; Yang, Jie; Wu, Qiang
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 7, No. 7 / pp.1690-1704 / 2013
  • An affine invariant descriptor is proposed that is able to represent affine covariant regions well. Estimating the main orientation is still problematic in many existing methods, such as SIFT (scale invariant feature transform) and SURF (speeded up robust features). Instead of aligning an estimated main orientation, this paper uses the ellipse orientation directly. According to the ellipse orientation, affine covariant regions are first divided into four sub-regions with equal angles. Since the sub-regions are divided from the ellipse orientation, they are rotation invariant regardless of any rotation of the ellipse. Meanwhile, the affine covariant regions are normalized into a circular region. Finally, the gradients of pixels in the circular region are calculated and the partition-based descriptor is created from these gradients. Compared with existing descriptors, including MROGH, SIFT, GLOH, PCA-SIFT, and spin images, the proposed descriptor demonstrates superior performance in extensive experiments.
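
A minimal sketch of the partitioning idea follows: a region already normalized from its ellipse to a circle is split into four equal angular sub-regions measured from the ellipse orientation, and one gradient-orientation histogram is built per sub-region. The bin count and normalization are assumptions; this is not the complete MEGH construction.

```python
# Ellipse-orientation-based partitioning and gradient histograms (sketch).
import numpy as np

def partition_descriptor(circ_patch, ellipse_theta, n_bins=8):
    """circ_patch: square patch already normalized from the ellipse to a circle.
    ellipse_theta: orientation of the original ellipse (radians)."""
    h, w = circ_patch.shape
    gy, gx = np.gradient(circ_patch.astype(float))
    mag = np.hypot(gx, gy)
    ori = np.arctan2(gy, gx)
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    inside = np.hypot(xx - cx, yy - cy) <= min(cx, cy)   # keep the circular region
    # Angle of each pixel measured from the ellipse orientation
    phi = (np.arctan2(yy - cy, xx - cx) - ellipse_theta) % (2 * np.pi)
    sector = (phi // (np.pi / 2)).astype(int)            # four 90-degree sub-regions
    desc = []
    for s in range(4):
        mask = inside & (sector == s)
        hist, _ = np.histogram(ori[mask], bins=n_bins,
                               range=(-np.pi, np.pi), weights=mag[mask])
        desc.append(hist)
    desc = np.concatenate(desc)
    return desc / (np.linalg.norm(desc) + 1e-12)
```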

Improvement of ASIFT for Object Matching Based on Optimized Random Sampling

  • Phan, Dung; Kim, Soo Hyung; Na, In Seop
    • International Journal of Contents / Vol. 9, No. 2 / pp.1-7 / 2013
  • This paper proposes an efficient matching algorithm based on ASIFT (Affine Scale-Invariant Feature Transform), which is fully invariant to affine transformation. In our approach, we propose a method that reduces the cost of the similarity-measure matching and the number of outliers. First, we replace the Euclidean metric with a linear combination of the Manhattan and Chessboard metrics for measuring the similarity of keypoints. These two metrics are simple but very efficient. Using our method, the computation time of the matching step is reduced and the number of correct matches is increased. By applying an Optimized Random Sampling Algorithm (ORSA), we can remove most of the outlier matches to make the result meaningful. The method was tested on various combinations of affine transformations. The experimental results show that our method is superior to SIFT and ASIFT.
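
The distance used for keypoint matching can be written in a few lines; the sketch below combines the Manhattan (L1) and Chessboard (L∞) metrics linearly in place of the Euclidean distance. The weighting coefficient is an assumption, since the abstract does not give the paper's exact values.

```python
# Linear combination of Manhattan and Chessboard distances (sketch).
import numpy as np

def combined_distance(d1, d2, alpha=0.5):
    """Combined distance between two descriptor vectors d1 and d2."""
    diff = np.abs(np.asarray(d1, dtype=float) - np.asarray(d2, dtype=float))
    manhattan = diff.sum()       # L1: sum of absolute differences
    chessboard = diff.max()      # L-infinity: largest absolute difference
    return alpha * manhattan + (1.0 - alpha) * chessboard

# Both terms avoid the square and square-root operations of the Euclidean
# metric, which is where the savings in matching time come from.
```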

얼굴인식을 위한 어파인 불변 지역 서술자 (Affine Invariant Local Descriptors for Face Recognition)

  • 고용빈; 이효종
    • 정보처리학회논문지:소프트웨어 및 데이터공학 / Vol. 3, No. 9 / pp.375-380 / 2014
  • Nowadays, face recognition achieves a reliably high level of performance in environments where the capture conditions can be controlled, i.e., with a fixed viewing angle and consistent illumination. Face recognition in complex real-world conditions, however, remains a difficult problem. The SIFT algorithm performs well regardless of scale and rotation changes, but only when the change of viewing angle is small. In this paper, we apply ASIFT (Affine SIFT), an algorithm that detects affine invariant local descriptors so that faces can be recognized even under widely varying viewing angles. The ASIFT algorithm, built as an extension of SIFT, overcomes SIFT's weakness to viewpoint changes. In the proposed method, the ASIFT algorithm is applied to the gallery images and the SIFT algorithm to the probe images. Because ASIFT can generate images for various viewpoints using affine transformations, it resolves the problem caused by the viewpoint difference between the stored images and the test images. Experiments on the FERET data show that the proposed method achieves a higher recognition rate than the conventional SIFT algorithm when the change of viewing angle is large.

Interest Point Detection Using Hough Transform and Invariant Patch Feature for Image Retrieval

  • 안영은; 박종안
    • 한국ITS학회 논문지 / Vol. 8, No. 1 / pp.127-135 / 2009
  • This paper presents a new technique for corner-shape-based object retrieval from a database. The proposed feature matrix consists of values obtained through a neighborhood operation on the detected corners. This results in a significantly smaller feature matrix compared with algorithms using color features, and it is therefore computationally very efficient. The corners are extracted by finding the intersections of lines detected using the Hough transform. As affine transformations preserve the collinearity of points on a line and their intersection properties, the resulting corner features are robust to affine transformations for image retrieval. Furthermore, the corner features are invariant to noise. The proposed algorithm is expected to produce good results in combination with other algorithms as a form of incremental verification of similarity.
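
A rough OpenCV sketch of the corner-detection step follows: detect lines with the Hough transform and take pairwise line intersections inside the image as corner candidates. The Canny and Hough thresholds are placeholders, and the neighborhood operation that builds the final feature matrix is omitted.

```python
# Corner candidates as intersections of Hough lines (sketch).
import cv2
import numpy as np

def hough_corners(gray, canny_lo=50, canny_hi=150, hough_thresh=120):
    h, w = gray.shape
    edges = cv2.Canny(gray, canny_lo, canny_hi)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, hough_thresh)  # (rho, theta) pairs
    if lines is None:
        return []
    lines = lines[:, 0, :]
    corners = []
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            (r1, t1), (r2, t2) = lines[i], lines[j]
            # Each line satisfies x*cos(theta) + y*sin(theta) = rho
            A = np.array([[np.cos(t1), np.sin(t1)],
                          [np.cos(t2), np.sin(t2)]])
            if abs(np.linalg.det(A)) < 1e-6:          # nearly parallel lines
                continue
            x, y = np.linalg.solve(A, [r1, r2])       # intersection of the two lines
            if 0 <= x < w and 0 <= y < h:             # keep intersections inside the image
                corners.append((float(x), float(y)))
    return corners
```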

SIFT 와 SURF 알고리즘의 성능적 비교 분석 (Comparative Analysis of the Performance of SIFT and SURF)

  • 이용환; 박제호; 김영섭
    • 반도체디스플레이기술학회지 / Vol. 12, No. 3 / pp.59-64 / 2013
  • Accurate and robust image registration is an important task in many applications, such as image retrieval and computer vision. Image registration requires several essential steps: feature detection, extraction, matching, and image reconstruction. Among these steps, feature extraction not only plays a key role but also has a major effect on overall performance. Two representative algorithms for extracting image features are the scale invariant feature transform (SIFT) and speeded up robust features (SURF). In this paper, we present and evaluate the two methods, focusing on a comparative analysis of their performance. Experiments on accurate and robust feature detection are carried out under various conditions such as scale change, rotation, and affine transformation. The experiments reveal that the SURF algorithm shows significantly better results than SIFT in both feature-point extraction and matching time.
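
A comparison of this kind can be reproduced with a short script like the one below, which times SIFT and SURF feature extraction on the same image. Note that SURF is provided by the opencv-contrib xfeatures2d module and may require a build with non-free algorithms enabled; the image path is a placeholder.

```python
# Timing SIFT vs SURF feature extraction on one image (sketch).
import time
import cv2

gray = cv2.imread("test_image.png", cv2.IMREAD_GRAYSCALE)

detectors = {
    "SIFT": cv2.SIFT_create(),
    "SURF": cv2.xfeatures2d.SURF_create(hessianThreshold=400),
}

for name, det in detectors.items():
    t0 = time.perf_counter()
    kps, des = det.detectAndCompute(gray, None)
    dt = time.perf_counter() - t0
    print(f"{name}: {len(kps)} keypoints in {dt * 1000:.1f} ms")
```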

지역적 매칭쌍 특성에 기반한 고해상도영상의 자동기하보정 (Automatic Registration of High Resolution Satellite Images using Local Properties of Tie Points)

  • 한유경; 번영기; 최재완; 한동엽; 김용일
    • 한국측량학회지 / Vol. 28, No. 3 / pp.353-359 / 2010
  • This paper aims to improve the results of automatic registration of high-resolution satellite images by improving a matching method based on the Scale Invariant Feature Transform (SIFT) descriptor so that more tie points can be extracted from high-resolution imagery. To this end, tie points are extracted by additionally using the positional relationship between the interest points of the reference image and those of the sensed image. After estimating affine transformation coefficients with the SIFT descriptors, the coordinates of the interest points in the sensed image are transformed into the coordinate system of the reference image. The final tie points are then selected using the spatial distance between the transformed interest points of the sensed image and the interest points of the reference image. A piecewise linear function is constructed from the extracted tie points to perform automatic registration between the high-resolution images. With the proposed technique, a larger number of tie points, evenly distributed across the whole image, could be extracted compared with the conventional SIFT approach.
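
The tie-point refinement step described above can be sketched as follows: map the sensed-image interest points into the reference frame with the affine transform estimated from the initial SIFT matches, and keep point pairs whose spatial distance after mapping is small. The distance threshold and array shapes are assumptions, not the paper's parameters.

```python
# Spatial-distance filtering of candidate tie points (sketch).
import numpy as np

def refine_tie_points(A, ref_pts, sensed_pts, max_dist=5.0):
    """A: 2x3 affine matrix estimated from SIFT descriptor matches.
    ref_pts, sensed_pts: (N, 2) candidate tie-point coordinates."""
    sensed_h = np.hstack([sensed_pts, np.ones((len(sensed_pts), 1))])  # homogeneous coords
    mapped = sensed_h @ A.T                    # sensed points in reference coordinates
    dists = np.linalg.norm(mapped - ref_pts, axis=1)
    keep = dists <= max_dist                   # spatial-distance check
    return ref_pts[keep], sensed_pts[keep]

# The retained tie points are then used to build the piecewise linear
# function that warps the sensed image onto the reference image.
```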