• Title/Summary/Keyword: rotation-invariant

Search Results: 256

Image Similarity Retrieval using a Scale and Rotation Invariant Region Feature (크기 및 회전 불변 영역 특징을 이용한 이미지 유사성 검색)

  • Yu, Seung-Hoon;Kim, Hyun-Soo;Lee, Seok-Lyong;Lim, Myung-Kwan;Kim, Deok-Hwan
    • Journal of KIISE:Databases
    • /
    • v.36 no.6
    • /
    • pp.446-454
    • /
    • 2009
  • Among various region detectors and shape feature extraction methods, MSER (Maximally Stable Extremal Regions), SIFT, and its variants are popular in computer vision applications. However, since SIFT is sensitive to illumination change and MSER is sensitive to scale change, neither is easy to apply directly to image similarity retrieval. In this paper, we present a Scale and Rotation Invariant Region Feature (SRIRF) descriptor based on a scale pyramid, MSER, and affine normalization. The proposed SRIRF method is robust to scale, rotation, and illumination changes of the image since it uses affine normalization and a scale pyramid. We have tested the SRIRF method on various images. Experimental results demonstrate that its retrieval performance is about 20%, 38%, 11%, and 24% better than that of traditional SIFT, PCA-SIFT, CE-SIFT, and SURF, respectively.
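
The scale-pyramid ingredient the abstract mentions can be illustrated with a minimal numpy sketch. This is a generic Gaussian pyramid, not the authors' SRIRF pipeline; the blur helper and level count are assumptions for illustration:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur with reflect padding (numpy only)."""
    radius = int(3 * sigma)
    t = np.arange(-radius, radius + 1)
    k = np.exp(-t ** 2 / (2.0 * sigma ** 2))
    k /= k.sum()
    blur1d = lambda v: np.convolve(np.pad(v, radius, mode="reflect"), k, mode="valid")
    out = np.apply_along_axis(blur1d, 1, img)   # blur rows
    return np.apply_along_axis(blur1d, 0, out)  # then columns

def scale_pyramid(img, levels=3, sigma=1.0):
    """Blur, then subsample by 2, repeatedly: features detected at coarser
    levels correspond to larger structures in the original image."""
    pyramid = [img]
    for _ in range(levels - 1):
        pyramid.append(gaussian_blur(pyramid[-1], sigma)[::2, ::2])
    return pyramid

pyr = scale_pyramid(np.random.rand(64, 64), levels=3)
print([level.shape for level in pyr])   # [(64, 64), (32, 32), (16, 16)]
```

Running the detector on every level of such a pyramid is what makes the resulting features comparable across image scales.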

Rotation-Invariant Iris Recognition Method Based on Zernike Moments (Zernike 모멘트 기반의 회전 불변 홍채 인식)

  • Choi, Chang-Soo;Seo, Jeong-Man;Jun, Byoung-Min
    • Journal of the Korea Society of Computer and Information
    • /
    • v.17 no.2
    • /
    • pp.31-40
    • /
    • 2012
  • Iris recognition is a biometric technology that identifies a person from the iris pattern. It is important for an iris recognition system to extract features that are invariant to changes in iris patterns; such changes can be caused by lighting, changes in pupil size, and head tilting. In this paper, we propose a novel method based on Zernike moments that is robust to rotations of iris patterns. For fast and effective recognition, we select globally optimal moments together with locally optimal moments for matching each iris class. The proposed method enables high-speed feature extraction and comparison because it requires no additional processing to obtain rotation invariance, and it shows performance comparable to well-known previous methods.
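
The rotation invariance that motivates the Zernike-moment choice can be checked directly: under a rotation of the image, a Zernike moment only picks up a phase factor, so its magnitude is unchanged. Below is a generic numpy sketch of Zernike moments on the unit disk (not the paper's moment-selection scheme; the test pattern and order n=4, m=2 are illustrative assumptions):

```python
import numpy as np
from math import factorial

def zernike_moment(img, n, m):
    """Zernike moment Z_nm of a square image mapped onto the unit disk.
    Rotating the image by alpha multiplies Z_nm by e^{-i m alpha},
    so |Z_nm| is a rotation-invariant feature."""
    N = img.shape[0]
    ys, xs = np.mgrid[0:N, 0:N]
    x = (2 * xs - N + 1) / (N - 1)          # map pixel centres to [-1, 1]
    y = (2 * ys - N + 1) / (N - 1)
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    mask = rho <= 1.0
    R = np.zeros_like(rho)                  # radial polynomial R_nm(rho)
    for s in range((n - abs(m)) // 2 + 1):
        c = ((-1) ** s * factorial(n - s)
             / (factorial(s)
                * factorial((n + abs(m)) // 2 - s)
                * factorial((n - abs(m)) // 2 - s)))
        R = R + c * rho ** (n - 2 * s)
    basis = R * np.exp(-1j * m * theta)
    return (n + 1) / np.pi * np.sum(img[mask] * basis[mask])

img = np.zeros((65, 65))
img[20:40, 25:45] = 1.0                     # an off-centre rectangular patch
z = zernike_moment(img, 4, 2)
z90 = zernike_moment(np.rot90(img), 4, 2)   # the same patch rotated 90 degrees
print(abs(z), abs(z90))                     # magnitudes agree
```

This is why no extra alignment step is needed at matching time: the magnitude feature is the same for any in-plane rotation of the iris.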

Rotation Invariant Face Detection Using HOG and Polar Coordinate Transform

  • Jang, Kyung-Shik
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.11
    • /
    • pp.85-92
    • /
    • 2021
  • In this paper, a method for effectively detecting rotated faces and their rotation angles, regardless of the rotation angle, is proposed. Rotated face detection is a challenging task due to the large variation in facial appearance. In the proposed polar coordinate transformation, the spatial information of the facial components is maintained regardless of the rotation angle, so there is no variation in facial appearance due to rotation. Accordingly, features such as HOG, which are used for frontal face detection but are sensitive to rotation, can be effectively applied to detecting rotated faces, and only frontal-face training data are needed. The HOG features obtained from the polar coordinate transformed images are learned using an SVM, and rotated faces are detected. Experiments on 3600 rotated face images show a rotation-angle detection rate of 97.94%. Furthermore, the positions and rotation angles of rotated faces are accurately detected in images with a background containing multiple rotated faces.
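
The key observation, that a polar resampling turns image rotation into a circular shift along the angle axis, can be sketched with numpy. The nearest-neighbour sampling and the bin counts below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def to_polar(img, n_r=24, n_theta=72):
    """Resample a square image onto an (angle, radius) grid by nearest neighbour.
    Rotating the input then corresponds to a circular shift along the angle
    axis, so rotation-sensitive features (e.g. HOG) computed on the polar
    image remain usable for rotated inputs."""
    n = img.shape[0]
    c = (n - 1) / 2.0
    radii = np.linspace(0.0, c, n_r)
    angles = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    polar = np.zeros((n_theta, n_r))
    for i, ang in enumerate(angles):
        xs = np.round(c + radii * np.cos(ang)).astype(int)
        ys = np.round(c + radii * np.sin(ang)).astype(int)
        polar[i] = img[ys, xs]
    return polar

img = np.random.rand(65, 65)
p = to_polar(img)
p_rot = to_polar(np.rot90(img, k=-1))  # quarter-turn of the same image
# 90 degrees = 18 of the 72 angular bins: the two polar images differ by a shift.
print((np.roll(p, 18, axis=0) == p_rot).mean())
```

In a pipeline like the paper's, the rotation angle can then be recovered from the shift that best aligns the polar representation with the frontal template.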

Extended SURF Algorithm with Color Invariant Feature and Global Feature (컬러 불변 특징과 광역 특징을 갖는 확장 SURF(Speeded Up Robust Features) 알고리즘)

  • Yoon, Hyun-Sup;Han, Young-Joon;Hahn, Hern-Soo
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.46 no.6
    • /
    • pp.58-67
    • /
    • 2009
  • Correspondence matching is one of the important tasks in computer vision, and it is not easy to find corresponding points in variable environments where scale, rotation, viewpoint, and illumination change. The SURF (Speeded Up Robust Features) algorithm has been widely used for correspondence matching because it is faster than SIFT (Scale Invariant Feature Transform) while closely maintaining its matching performance. However, because SURF considers only gray values and local geometric information, it is difficult to match corresponding points in images where similar local patterns are scattered. To solve this problem, this paper proposes an extended SURF algorithm that uses invariant color and global geometric information. The proposed algorithm improves matching performance since the color information and global geometric information are used to discriminate similar patterns. Experiments on images with illumination and viewpoint changes and with repeated similar patterns show that the proposed algorithm outperforms conventional methods.
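
The simplest color cue that is invariant to a global illumination scaling, in the spirit of the invariant color information the abstract describes (the paper's actual color invariants may differ), is normalized rg chromaticity:

```python
import numpy as np

def chromaticity(rgb):
    """Normalised rg chromaticity: dividing each channel by R+G+B cancels any
    global intensity scaling, giving a simple illumination-invariant colour cue."""
    s = rgb.sum(axis=-1, keepdims=True).astype(float)
    s[s == 0] = 1.0                       # avoid division by zero on black pixels
    return (rgb / s)[..., :2]             # keep (r, g); b = 1 - r - g is redundant

pixels = np.array([[[120.0, 60.0, 30.0], [10.0, 200.0, 40.0]]])
# Dimming the image by 60% leaves the chromaticity untouched.
print(np.allclose(chromaticity(pixels), chromaticity(0.4 * pixels)))
```

Appending such a color component to a gray-value descriptor is one way to disambiguate locally similar patterns that differ in color.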

Wavelet Transform Technology for Translation-invariant Iris Recognition (위치 이동에 무관한 홍채 인식을 위한 웨이블렛 변환 기술)

  • Lim, Cheol-Su
    • The KIPS Transactions:PartB
    • /
    • v.10B no.4
    • /
    • pp.459-464
    • /
    • 2003
  • This paper proposes a wavelet-based image transform algorithm for human iris recognition. In preprocessing, the iris image is extracted from the user's eye as captured by an imaging device such as a CCD camera, and the method resolves the translation and dilation of the iris pattern caused by head tilt and torsional rotation of the eye. Feature values are obtained through the proposed translation-invariant wavelet transform algorithm rather than the conventional wavelet transform. We extract the best-matching iris feature values and compare the stored feature codes with the incoming data to identify the user. In our experiments, the technique demonstrates a significant advantage in verification over other general wavelet algorithms in terms of FAR and FRR.
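
One standard route to translation invariance in wavelet features, shown here only as a generic illustration of the property (not the paper's algorithm), is to drop the downsampling: with an undecimated (à trous) Haar transform, the per-level detail energy is unchanged by a circular shift of the input:

```python
import numpy as np

def undecimated_haar_energy(x, levels=3):
    """Per-level detail energy of an undecimated (a trous) Haar transform.
    Because nothing is downsampled and all filtering is circular, a circular
    shift of the input merely permutes the detail coefficients, so each
    level's total energy is a translation-invariant feature."""
    energies = []
    approx = x.astype(float)
    step = 1
    for _ in range(levels):
        shifted = np.roll(approx, step)
        detail = (approx - shifted) / np.sqrt(2.0)
        approx = (approx + shifted) / np.sqrt(2.0)
        energies.append(float(np.sum(detail ** 2)))
        step *= 2                         # dilate the filter instead of decimating
    return np.array(energies)

x = np.random.rand(64)
print(np.allclose(undecimated_haar_energy(x),
                  undecimated_haar_energy(np.roll(x, 7))))
```

A decimated (standard) DWT does not have this property, which is why translation-invariant variants matter for alignment-sensitive biometrics.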

A Performance Analysis of the SIFT Matching on Simulated Geospatial Image Differences (공간 영상 처리를 위한 SIFT 매칭 기법의 성능 분석)

  • Oh, Jae-Hong;Lee, Hyo-Seong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.29 no.5
    • /
    • pp.449-457
    • /
    • 2011
  • As automated image processing techniques are required in multi-temporal/multi-sensor geospatial image applications, an automated but highly invariant image matching technique has been a critical ingredient. There is a high possibility of geometric and spectral differences between multi-temporal/multi-sensor geospatial images due to differences in sensor, acquisition geometry, season, and weather. Among many image matching techniques, SIFT (Scale Invariant Feature Transform) is popular since it is recognized to be very robust to diverse imaging conditions; it therefore has high potential for geospatial image processing. This paper presents performance test results of SIFT on geospatial imagery, simulating various image differences such as shear, scale, rotation, intensity, noise, and spectral differences. Since geospatial applications often require a number of good matching points over the images, the number of matching points was analyzed along with matching positional accuracy. The test results show that SIFT is highly invariant but cannot overcome significant image differences. In addition, it does not guarantee outlier-free matching, so outlier removal techniques such as RANSAC (RANdom SAmple Consensus) are highly recommended.
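
The RANSAC step recommended above can be sketched in a few lines. For brevity this assumes a pure-translation model between the matched points; real geospatial pipelines would fit affine or homography models instead:

```python
import numpy as np

def ransac_translation(src, dst, iters=200, tol=1.0, rng=None):
    """RANSAC for a pure-translation model between putative feature matches:
    repeatedly hypothesise a shift from one random correspondence, count how
    many matches agree, and refit on the largest consensus set."""
    rng = np.random.default_rng(rng)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(src))                 # minimal sample: 1 match
        t = dst[i] - src[i]
        inliers = np.linalg.norm(src + t - dst, axis=1) < tol
        if inliers.sum() > best.sum():
            best = inliers
    t = (dst[best] - src[best]).mean(axis=0)       # refit on consensus set
    return t, best

rng = np.random.default_rng(0)
src = rng.uniform(0, 100, size=(50, 2))            # keypoints in image 1
dst = src + np.array([5.0, -3.0])                  # true shift to image 2
dst[:10] = rng.uniform(0, 100, size=(10, 2))       # 10 gross mismatches
t, inliers = ransac_translation(src, dst, rng=1)
print(t, int(inliers.sum()))
```

Because a single contaminated minimal sample only wins a tiny consensus set, the gross mismatches are rejected and the recovered shift is estimated from inliers only.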

Deformation Invariant Optical Correlator Using Photorefractive Medium (광굴절 매질을 이용한 공간계 불변 광상관기에 관한 연구)

  • Kim, Ran-Sook;Ihm, Jong-Tae;Son, Hyon;Park, Han-Kyu
    • Journal of the Korean Institute of Telematics and Electronics
    • /
    • v.26 no.6
    • /
    • pp.97-101
    • /
    • 1989
  • A scale- and rotation-invariant polar-logarithmic coordinate transformation is used to achieve deformation-invariant pattern recognition. The coordinate transformation is produced by a computer-generated hologram (CGH). The mask for the ln r-θ coordinate transformation, fabricated by a photo (UV light) pattern generator, is made from the CGH, whose transmission function is derived using Lee's method. The optically coordinate-transformed input pattern is interfaced with real-time holography. Variations of the autocorrelation for scaled and rotated input patterns are demonstrated experimentally using the implemented optical correlator.
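
The polar-logarithmic property the correlator relies on, namely that scale and rotation become pure shifts of the (ln r, θ) samples, can be verified numerically. The smooth test pattern and grid sizes below are illustrative assumptions:

```python
import numpy as np

def log_polar_samples(f, n_rho=32, n_theta=32, rho_min=-2.0, rho_max=2.0):
    """Sample a continuous image f(x, y) on a log-polar (rho = ln r, theta) grid."""
    rhos = np.linspace(rho_min, rho_max, n_rho)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    R, T = np.meshgrid(rhos, thetas, indexing="ij")
    return f(np.exp(R) * np.cos(T), np.exp(R) * np.sin(T))

f = lambda x, y: np.sin(3.0 * x) * np.cos(2.0 * y)  # arbitrary smooth test pattern

d = 4.0 / 31                # one log-radius bin
a = 2.0 * np.pi / 32        # one angular bin
ca, sa = np.cos(a), np.sin(a)
# The same pattern scaled by e^d and rotated by a: f2(p) = f(R(-a) p / e^d).
f2 = lambda x, y: f((ca * x + sa * y) * np.exp(-d), (-sa * x + ca * y) * np.exp(-d))

g, g2 = log_polar_samples(f), log_polar_samples(f2)
# Scaling and rotation together become a pure shift of the log-polar samples
# (one bin along log-radius, one bin along angle).
print(np.allclose(g2[1:], np.roll(g, 1, axis=1)[:-1]))
```

A correlator operating on these coordinates therefore only sees translations, which an ordinary matched filter can handle.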


MPEG-7 Homogeneous Texture Descriptor

  • Ro, Yong-Man;Kim, Mun-Churl;Kang, Ho-Kyung;Manjunath, B.S.;Kim, Jin-Woong
    • ETRI Journal
    • /
    • v.23 no.2
    • /
    • pp.41-51
    • /
    • 2001
  • MPEG-7 standardization work started with the aim of providing fundamental tools for describing multimedia content. MPEG-7 defines the syntax and semantics of descriptors and description schemes so that they may be used as fundamental tools for multimedia content description. In this paper, we introduce a texture-based image description and retrieval method, which is adopted as the homogeneous texture descriptor in the visual part of the MPEG-7 final committee draft. The current MPEG-7 homogeneous texture descriptor consists of the mean and standard deviation of the image, together with energy and energy deviation values of the Fourier transform of the image, extracted from frequency channels partitioned according to the human visual system (HVS). For reliable extraction of the texture descriptor, Radon transformation, which suits HVS behavior, is employed. We also introduce various matching methods, for example intensity-invariant, rotation-invariant, and/or scale-invariant matching, which retrieve relevant texture images when the user supplies a query texture image. To show the performance of the texture descriptor, we present experimental results on the MPEG-7 test sets. The results show that the MPEG-7 texture descriptor gives an efficient and effective retrieval rate as well as fast feature extraction time.
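
A toy descriptor in the spirit of the HTD, the image mean and standard deviation plus energy and energy-deviation values over radial-angular partitions of the Fourier spectrum, can be sketched as follows. Note this partitioning is a simplification: the standardized HTD uses Gabor-shaped HVS channels and Radon-based extraction:

```python
import numpy as np

def texture_descriptor(img, n_radial=5, n_angular=6):
    """Toy HTD-like descriptor: image mean and standard deviation, plus log
    energy and energy deviation over radial-angular partitions of the
    Fourier power spectrum (a simplification of the MPEG-7 HTD channels)."""
    P = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    N = img.shape[0]
    c = N // 2
    ys, xs = np.mgrid[0:N, 0:N]
    r = np.hypot(xs - c, ys - c)
    ang = np.mod(np.arctan2(ys - c, xs - c), np.pi)  # opposite freqs are redundant
    feats = [img.mean(), img.std()]
    for i in range(n_radial):
        for j in range(n_angular):
            band = ((r >= i * c / n_radial) & (r < (i + 1) * c / n_radial)
                    & (ang >= j * np.pi / n_angular)
                    & (ang < (j + 1) * np.pi / n_angular))
            e = P[band]
            feats.append(np.log1p(e.mean()) if e.size else 0.0)  # channel energy
            feats.append(np.log1p(e.std()) if e.size else 0.0)   # energy deviation
    return np.array(feats)

img = np.random.rand(64, 64)
desc = texture_descriptor(img)
print(desc.shape)   # 2 + 5 * 6 * 2 = 62 features
```

Rotation-invariant matching can then be approximated by comparing descriptors under all cyclic shifts of the angular channels and keeping the best score.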


Identification System Based on Partial Face Feature Extraction (부분 얼굴 특징 추출에 기반한 신원 확인 시스템)

  • Choi, Sun-Hyung;Cho, Seong-Won;Chung, Sun-Tae
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.22 no.2
    • /
    • pp.168-173
    • /
    • 2012
  • This paper presents a new human identification algorithm using partial features of the uncovered portion of the face when a person wears a mask. After the face area is detected, features are extracted from the eye area above the mask. Identification is performed by comparing the acquired features with the registered ones. The SIFT (Scale Invariant Feature Transform) algorithm is used for feature extraction; the extracted features are invariant to brightness, size, and rotation of the image. The experimental results show the effectiveness of the suggested algorithm.

Model-based 3-D object recognition using Hopfield neural network (Hopfield 신경회로망을 이용한 모델 기반형 3차원 물체 인식)

  • 정우상;송호근;김태은;최종수
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.33B no.5
    • /
    • pp.60-72
    • /
    • 1996
  • In this paper, a new model-based three-dimensional (3-D) object recognition method using a Hopfield network is proposed. To minimize the deformation of feature values under 3-D rotation, we select 3-D shape features and 3-D relational features that are rotation invariant. These feature values are then normalized to be scale invariant as well. The input features are matched with model features through the optimization process of a Hopfield network in the form of a two-dimensional array of neurons. Experimental results on object classification and object matching with 3-D rotated, scale-changed, and partially occluded objects show the good performance of the proposed method.
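
The underlying Hopfield dynamics, Hebbian weights plus asynchronous sign updates that descend an energy function, can be sketched as follows. This is a generic associative-memory demo, not the paper's two-dimensional matching network; the pattern sizes and update counts are illustrative assumptions:

```python
import numpy as np

def hopfield_train(patterns):
    """Hebbian weight matrix for a Hopfield net storing +/-1 row patterns."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)              # no self-connections
    return W

def hopfield_recall(W, state, steps=400, rng=None):
    """Asynchronous sign updates; each update never increases the energy."""
    rng = np.random.default_rng(rng)
    state = state.copy()
    for _ in range(steps):
        i = rng.integers(len(state))      # pick one neuron at random
        state[i] = 1 if W[i] @ state >= 0 else -1
    return state

rng = np.random.default_rng(0)
patterns = np.sign(rng.standard_normal((2, 32))).astype(int)  # two stored patterns
W = hopfield_train(patterns)
noisy = patterns[0].copy()
noisy[:4] *= -1                           # corrupt a few neurons
recovered = hopfield_recall(W, noisy, rng=1)
print((recovered == patterns[0]).mean())  # fraction of neurons recovered
```

In the matching formulation the abstract describes, the neurons instead encode candidate input-to-model feature assignments, and the same energy descent selects a consistent set of correspondences.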
