• Title/Summary/Keyword: Rotation-invariant

Search Results: 256

A Iris Recognition Using Zernike Moment and Wavelet (Zernike 모멘트와 Wavelet을 이용한 홍채인식)

  • Choi, Chang-Soo;Park, Jong-Cheon;Jun, Byoung-Min
    • Journal of the Korea Academia-Industrial cooperation Society / v.11 no.11 / pp.4568-4575 / 2010
  • Iris recognition is a biometric technology that uses iris pattern information, which offers stability and security; for this reason it is especially appropriate in settings that require high security. Iris information is now widely used in access control and information security. When extracting iris features, it is desirable to obtain features that are invariant to size, illumination, and rotation. Size and illumination can be handled by preprocessing, but extracting iris features that are invariant to rotation remains a problem. To avoid the loss of recognition rate and speed caused by explicit rotation compensation, this paper proposes an iris recognition method using Zernike moments and the Daubechies wavelet. In the first step, the proposed method groups rotated irises into similar clusters using the rotation-invariant statistical features of Zernike moments, which shortens the processing time of iris recognition while matching the recognition performance of the established method. The proposed method therefore shows promise for effective application in large-scale iris recognition systems.
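
As a minimal illustration of the rotation-invariance property this abstract relies on, the NumPy sketch below computes Zernike moment magnitudes |Z_nm| of an image patch. The patch size, the moment orders, and the unit-disk sampling are illustrative assumptions rather than the authors' configuration, and the wavelet stage is omitted.

```python
import numpy as np
from math import factorial, pi

def zernike_radial(n, m, rho):
    """Radial polynomial R_nm(rho) of the Zernike basis."""
    m = abs(m)
    R = np.zeros_like(rho)
    for s in range((n - m) // 2 + 1):
        c = ((-1) ** s * factorial(n - s)
             / (factorial(s) * factorial((n + m) // 2 - s) * factorial((n - m) // 2 - s)))
        R += c * rho ** (n - 2 * s)
    return R

def zernike_magnitudes(img, orders=((2, 0), (2, 2), (3, 1), (4, 2))):
    """Rotation-invariant |Z_nm| of an image patch sampled on the unit disk."""
    h, w = img.shape
    y, x = np.mgrid[-1:1:h * 1j, -1:1:w * 1j]
    rho = np.hypot(x, y)
    theta = np.arctan2(y, x)
    mask = rho <= 1.0                      # restrict to the unit disk
    feats = []
    for n, m in orders:
        V = zernike_radial(n, m, rho) * np.exp(-1j * m * theta)
        Z = (n + 1) / pi * np.sum(img[mask] * np.conj(V[mask]))
        feats.append(abs(Z))               # the magnitude is rotation-invariant
    return np.array(feats)

# Quick check: a patch and its 90-degree rotation give (nearly) the same features.
patch = np.random.default_rng(0).random((64, 64))
print(zernike_magnitudes(patch))
print(zernike_magnitudes(np.rot90(patch)))
```

Because rotating the patch by an angle only multiplies each Z_nm by a phase factor, taking magnitudes is what makes the features usable for grouping rotated irises without explicit alignment.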

Fuzzy Classifier and Bispectrum for Invariant 2-D Shape Recognition (2차원 불변 영상 인식을 위한 퍼지 분류기와 바이스펙트럼)

  • 한수환;우영운
    • Journal of Korea Multimedia Society / v.3 no.3 / pp.241-252 / 2000
  • In this paper, a translation-, rotation-, and scale-invariant system for the recognition of closed 2-D images using the bispectrum of a contour sequence and a weighted fuzzy classifier is derived and compared with a recognition process based on a competitive neural algorithm, LVQ (Learning Vector Quantization). The bispectrum, based on third-order cumulants, is applied to the contour sequence of an image to extract fifteen features for each planar image. These bispectral features, which are invariant to shape translation, rotation, and scale transformation, can be used to represent two-dimensional planar images and are fed into a weighted fuzzy classifier. Experiments with eight different shapes of aircraft images are presented to illustrate the relatively high performance of the proposed recognition system.
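
A minimal sketch of the kind of bispectral contour feature described above: the boundary is turned into a centroid-distance sequence (translation invariance), scale-normalised, and a few bispectrum values B(f1, f2) = X(f1) X(f2) X*(f1 + f2) are taken from its DFT. The resampling length, normalisation, and frequency pairs are assumptions for illustration; the paper's fifteen features and the fuzzy classifier are not reproduced here.

```python
import numpy as np

def contour_sequence(points, n_samples=128):
    """Radial-distance sequence from the shape centroid to resampled boundary points."""
    centroid = points.mean(axis=0)
    d = np.linalg.norm(points - centroid, axis=1)
    idx = np.linspace(0, len(d) - 1, n_samples).astype(int)
    seq = d[idx]
    return seq / seq.mean()              # scale normalisation (assumption)

def bispectral_features(seq, pairs=((1, 1), (1, 2), (2, 2), (2, 3), (3, 3))):
    """A few bispectrum values B(f1, f2) = X(f1) X(f2) X*(f1 + f2).

    The bispectrum is unchanged by a circular shift of the sequence, so the
    feature does not depend on where the contour tracing starts (rotation)."""
    x = seq - seq.mean()                 # zero-mean sequence for the third-order statistics
    X = np.fft.fft(x)
    return np.array([X[f1] * X[f2] * np.conj(X[f1 + f2]) for f1, f2 in pairs])

# Circularly shifting the sequence (a rotated starting point) leaves the features unchanged.
rng = np.random.default_rng(1)
boundary = rng.random((400, 2))
s = contour_sequence(boundary)
print(np.allclose(bispectral_features(s), bispectral_features(np.roll(s, 37))))
```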

Direct RTI Fingerprint Identification Based on GCMs and Gabor Features Around Core point

  • Cho, Sang-Hyun;Sung, Hyo-Kyung;Park, Jin-Geun;Park, Heung-Moon
    • Proceedings of the IEEK Conference / 2000.07a / pp.446-449 / 2000
  • A direct RTI (rotation- and translation-invariant) fingerprint identification method is proposed using GCMs (generalized complex moments) and Gabor filter-based features extracted from the grey-level fingerprint around the core point. The core point is located as the reference point for translation-invariant matching, and its symmetry axis is detected from the neighboring region centered at the core point for rotation-invariant matching. The fingerprint is then divided into non-overlapping blocks with respect to the core point and, in contrast to minutiae-based methods that require many processing steps, features are extracted directly from the blocked grey-level fingerprint using Gabor filters, which capture information at particular orientations in the image. The proposed identification is based on the Euclidean distance between the corresponding Gabor features of the input and template fingerprints. Experiments are conducted on 300 × 300 fingerprints obtained from a CMOS sensor with 500 dpi resolution, and the proposed method achieves a 97% identification rate.
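
A rough sketch of the block-wise Gabor feature and Euclidean matching step described above, in plain NumPy. The kernel parameters, block layout, and orientations are illustrative assumptions (the GCM-based core-point and symmetry-axis detection is not shown), and blocks are assumed to lie inside the image.

```python
import numpy as np

def gabor_kernel(size=15, sigma=4.0, theta=0.0, freq=0.12):
    """Real Gabor kernel: a Gaussian envelope times a cosine at orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return env * np.cos(2 * np.pi * freq * xr)

def block_gabor_features(img, center, n_blocks=4, block=32,
                         thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Mean absolute Gabor response per block, for blocks laid out around 'center'."""
    cy, cx = center
    feats = []
    for theta in thetas:
        k = gabor_kernel(theta=theta)
        for by in range(-n_blocks // 2, n_blocks // 2):
            for bx in range(-n_blocks // 2, n_blocks // 2):
                y0, x0 = cy + by * block, cx + bx * block
                patch = img[y0:y0 + block, x0:x0 + block].astype(float)
                # FFT-based convolution of the block with the Gabor kernel
                resp = np.fft.ifft2(np.fft.fft2(patch) * np.fft.fft2(k, s=patch.shape)).real
                feats.append(np.abs(resp).mean())
    return np.array(feats)

def identify(probe_feats, template_feats):
    """Match by Euclidean distance between the Gabor feature vectors, as in the abstract."""
    return np.linalg.norm(probe_feats - template_feats)
```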

A Study on the Automatic Inspection System using Invariant Moments Algorithm with the Change of Size and Rotation

  • Lee, Yong-Jung;Lee, Yang-Beom;Jeong, Gi-Hwa
    • Proceedings of the Korean Institute of Illuminating and Electrical Installation Engineers Conference / 2004.05a / pp.479-485 / 2004
  • The purpose of this study is to develop a practical image inspection system that recognizes a workpiece correctly even when it is resized and rotated, giving flexibility to the production field. In the experiment, a fighter aircraft was selected as the inspection object and rotated from 30° to 45° while its size was varied from 1/4 to 1/16, without using dedicated image-processing hardware. The invariant moments suggested by Hu were used as the feature-vector moment descriptor. As a result, the image inspection system developed in this research operated in real time regardless of changes in size and rotation of the inspection object, and steadily maintained correspondence rates of 94% to 96%. Accordingly, considerable flexibility can be added to factory automation when the developed image inspection system is applied to the production field.
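
As a small sketch of the descriptor named above, the code below computes the first three of Hu's seven moment invariants from scale-normalised central moments with plain NumPy; the synthetic test image and the subset of invariants are illustrative, not the authors' setup.

```python
import numpy as np

def central_moment(img, p, q):
    """Central moment mu_pq of a grey-level image."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
    return ((x - xc) ** p * (y - yc) ** q * img).sum()

def hu_invariants(img):
    """First three Hu invariants from scale-normalised central moments eta_pq."""
    m00 = img.sum()
    eta = lambda p, q: central_moment(img, p, q) / m00 ** (1 + (p + q) / 2)
    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    phi1 = n20 + n02
    phi2 = (n20 - n02) ** 2 + 4 * n11 ** 2
    phi3 = (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2
    return np.array([phi1, phi2, phi3])

# A shape and its 90-degree rotation yield (nearly) identical invariants.
img = np.zeros((64, 64))
img[20:40, 10:50] = 1.0
print(hu_invariants(img))
print(hu_invariants(np.rot90(img)))
```

The normalisation by m00^(1+(p+q)/2) is what removes the dependence on object size, which is why the same descriptor handles both the 1/4-to-1/16 scaling and the rotation in the abstract.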

A Multimodal Fusion Method Based on a Rotation Invariant Hierarchical Model for Finger-based Recognition

  • Zhong, Zhen;Gao, Wanlin;Wang, Minjuan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.1 / pp.131-146 / 2021
  • Multimodal biometric recognition has been an active topic in recent years because of its convenience. Owing to the high user convenience of the finger, finger-based personal identification has been widely used in practice. Taking the Finger-Print (FP), Finger-Vein (FV), and Finger-Knuckle-Print (FKP) as the component characteristics, their joint feature representation helps improve universality and reliability in identification. To fuse the multimodal finger features effectively, a new robust representation algorithm based on a hierarchical model is proposed. First, to obtain more robust features, feature maps are computed by Gabor magnitude feature coding and then described by the Local Binary Pattern (LBP). Second, the LGBP-based feature maps are processed hierarchically in a bottom-up mode by variable rectangle and circle granules, respectively. Finally, the intensity of each granule is represented by Local-invariant Gray Features (LGFs), and the resulting descriptors are called Hierarchical Local-Gabor-based Gray Invariant Features (HLGGIFs). Experimental results reveal that the proposed algorithm mitigates rotation variation of finger pose and achieves a lower Equal Error Rate (EER) on our homemade database.
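
A small sketch of the first stage described above (Gabor magnitude coding followed by LBP description) using scikit-image. The filter frequencies, orientation count, and LBP neighbourhood are illustrative assumptions, and the hierarchical granule stage and multimodal fusion are not shown.

```python
import numpy as np
from skimage.filters import gabor
from skimage.feature import local_binary_pattern

def gabor_magnitude_lbp(img, frequencies=(0.1, 0.2), n_theta=4, P=8, R=1):
    """LBP histograms computed over Gabor magnitude maps (first stage only)."""
    hists = []
    for f in frequencies:
        for k in range(n_theta):
            real, imag = gabor(img, frequency=f, theta=k * np.pi / n_theta)
            mag = np.hypot(real, imag)                      # Gabor magnitude map
            codes = local_binary_pattern(mag, P, R, method="uniform")
            hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
            hists.append(hist)
    return np.concatenate(hists)

# Example with a synthetic finger-texture patch (illustrative only).
rng = np.random.default_rng(0)
feature = gabor_magnitude_lbp(rng.random((96, 96)))
print(feature.shape)
```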

Image Character Recognition using the Mellin Transform and BPEJTC (Mellin 변환 방식과 BPEJTC를 이용한 영상 문자 인식)

  • 서춘원;고성원;이병선
    • Journal of the Korean Institute of Illuminating and Electrical Installation Engineers / v.17 no.4 / pp.26-35 / 2003
  • A recognition system that must classify images as the same or different requires features that are invariant to rotation, scale, and translation. Many studies have sought such features, and the log-polar transform is commonly used to obtain features invariant to scale and rotation. In this paper, we propose a character recognition method that uses the centroid method and the log-polar transform with interpolation to obtain invariant features for the character recognition system, and we obtained a differential ratio above 50% for the character features. Using the BPEJTC with the Mellin-transform-based invariant feature as the reference image, the proposed system achieved a recognition rate of about 90% and was able to recognize scaled and rotated input characters. Therefore, the proposed image character recognition system using the Mellin transform and the BPEJTC can recognize characters with features invariant to rotation, scale, and translation.
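
A minimal sketch of the centroid plus log-polar step at the core of such a Fourier-Mellin-style pipeline: resampling the image on a log-polar grid about its intensity centroid turns rotation into a circular shift along the angle axis and scaling into a shift along the log-radius axis. The grid size and interpolation order are assumptions for illustration; the BPEJTC correlation stage is not shown.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def log_polar_about_centroid(img, n_rho=64, n_theta=64):
    """Resample img on a log-polar grid centred at its intensity centroid.

    Rotating the input becomes a circular shift along the theta axis of the
    output, and scaling becomes a shift along the log-rho axis."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m = img.sum()
    cy, cx = (y * img).sum() / m, (x * img).sum() / m       # intensity centroid
    r_max = min(cy, cx, img.shape[0] - cy, img.shape[1] - cx)
    rho = np.exp(np.linspace(0, np.log(r_max), n_rho))       # logarithmic radii
    theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    R, T = np.meshgrid(rho, theta, indexing="ij")
    coords = np.stack([cy + R * np.sin(T), cx + R * np.cos(T)])
    return map_coordinates(img, coords, order=1, mode="nearest")  # bilinear sampling

# Example: the transform of a simple shape has shape (n_rho, n_theta).
img = np.zeros((128, 128))
img[40:90, 50:80] = 1.0
print(log_polar_about_centroid(img).shape)
```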

Comparison of invariant pattern recognition algorithms (불변 패턴인식 알고리즘의 비교연구)

  • 강대성
    • Journal of the Korean Institute of Telematics and Electronics B / v.33B no.8 / pp.30-41 / 1996
  • This paper presents a comparative study of four pattern recognition algorithms that are invariant to translations, rotations, and scale changes of the input object: object shape features (OSF), the geometrical Fourier-Mellin transform (GFMT), moment invariants (MI), and the centered polar exponential transform (CPET). Pattern description is obviously one of the most important aspects of pattern recognition, since it allows the object shape to be described independently of translation, rotation, or size. We first discuss problems that arise in the conventional invariant pattern recognition algorithms, then analyze their performance using the same criterion. Computer simulations with several distorted images show that the CPET algorithm yields better performance than the other ones.
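
The comparison above evaluates different invariant descriptors under the same criterion. A minimal harness for that kind of experiment is sketched below: each distorted test image is classified by the nearest training feature vector, and the same harness is reused for every extractor. The extractor names and data in the usage comment are placeholders, not the paper's four algorithms.

```python
import numpy as np

def nearest_neighbour_accuracy(extract, train_imgs, train_labels, test_imgs, test_labels):
    """Accuracy of 1-NN classification using a given feature extractor 'extract'."""
    train_feats = np.stack([extract(im) for im in train_imgs])
    correct = 0
    for im, label in zip(test_imgs, test_labels):
        f = extract(im)
        nearest = np.argmin(np.linalg.norm(train_feats - f, axis=1))
        correct += int(train_labels[nearest] == label)
    return correct / len(test_imgs)

# Usage sketch (hypothetical extractors and data): compare two descriptors on
# the same distorted test set under the same criterion.
# for name, extractor in {"MI": hu_invariants, "CPET": cpet_features}.items():
#     print(name, nearest_neighbour_accuracy(extractor, Xtr, ytr, Xte, yte))
```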

Affine Local Descriptors for Viewpoint Invariant Face Recognition

  • Gao, Yongbin;Lee, Hyo Jong
    • Proceedings of the Korea Information Processing Society Conference / 2014.04a / pp.781-784 / 2014
  • Face recognition under controlled settings, such as limited viewpoint and illumination change, can achieve good performance nowadays. However, real-world application of face recognition is still challenging. In this paper, we use Affine SIFT to detect affine-invariant local descriptors for face recognition under large viewpoint change. Affine SIFT is an extension of the SIFT algorithm. SIFT is scale- and rotation-invariant, which is powerful for small viewpoint changes in face recognition, but it fails when a large viewpoint change exists. In our scheme, Affine SIFT is applied to both the gallery face and the probe face, generating a series of different viewpoints using affine transformations. Therefore, Affine SIFT tolerates viewpoint differences between the gallery face and the probe face. Experimental results show that our framework achieves better recognition accuracy than the SIFT algorithm on the FERET database.
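
A rough OpenCV sketch of the idea: simulate a few affine-tilted views of each face (assumed 8-bit grayscale) and accumulate ratio-test SIFT matches over all view pairs. The tilt factors, rotation angles, ratio threshold, and brute-force matcher are illustrative assumptions, not the ASIFT parameters used in the paper.

```python
import cv2
import numpy as np

def affine_views(img, tilts=(1.0, 1.5, 2.0), rotations=(0, 30, 60, 90, 120, 150)):
    """Simulate viewpoints by in-plane rotation plus a horizontal squeeze (tilt)."""
    h, w = img.shape[:2]
    views = []
    for t in tilts:
        for phi in rotations:
            M = cv2.getRotationMatrix2D((w / 2, h / 2), phi, 1.0)
            M[0, :] /= t                                  # squeeze models the tilt
            views.append(cv2.warpAffine(img, M, (w, h)))
    return views

def match_score(gallery, probe, ratio=0.75):
    """Count ratio-test SIFT matches accumulated over simulated views of both faces."""
    sift = cv2.SIFT_create()
    bf = cv2.BFMatcher()
    probe_desc = [sift.detectAndCompute(p, None)[1] for p in affine_views(probe)]
    good = 0
    for g in affine_views(gallery):
        dg = sift.detectAndCompute(g, None)[1]
        for dp in probe_desc:
            if dg is None or dp is None:
                continue
            for pair in bf.knnMatch(dg, dp, k=2):
                if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
                    good += 1
    return good
```

Identification would then pick the gallery face with the highest match score against the probe; that decision rule is an assumption consistent with, but not quoted from, the abstract.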

Fuzzy Mean Method with Bispectral Features for Robust 2D Shape Classification

  • Woo, Young-Woon;Han, Soo-Whan
    • Proceedings of the Korea Intelligent Information System Society Conference / 1999.10a / pp.313-320 / 1999
  • In this paper, a translation-, rotation-, and scale-invariant system for the classification of closed 2-D images using the bispectrum of a contour sequence and the weighted fuzzy mean method is derived and compared with a classification process based on a competitive neural algorithm, LVQ (Learning Vector Quantization). The bispectrum, based on third-order cumulants, is applied to the contour sequences of the images to extract fifteen features for each planar image. These bispectral features, which are invariant to shape translation, rotation, and scale transformation, can be used to represent two-dimensional planar images and are fed into a classifier using the weighted fuzzy mean method. Experiments with eight different shapes of aircraft images are presented to illustrate the high performance of the proposed classifier.
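
A sketch of the classification stage named above: each class is summarized by a fuzzy-weighted mean of its training feature vectors, and a test vector is assigned to the class with the nearest weighted mean. The membership function used here (a Gaussian of the distance to the ordinary class mean) and its width are assumptions; the paper's exact weighting scheme may differ.

```python
import numpy as np

def weighted_fuzzy_means(features, labels, sigma=1.0):
    """Per-class means in which each sample is weighted by a fuzzy membership
    value (here a Gaussian of its distance to the ordinary class mean)."""
    means = {}
    for c in np.unique(labels):
        X = features[labels == c]
        mu = X.mean(axis=0)
        w = np.exp(-np.linalg.norm(X - mu, axis=1) ** 2 / (2 * sigma ** 2))
        means[c] = (w[:, None] * X).sum(axis=0) / w.sum()
    return means

def classify(x, class_means):
    """Assign x to the class whose weighted fuzzy mean is nearest."""
    return min(class_means, key=lambda c: np.linalg.norm(x - class_means[c]))

# Usage sketch with bispectral feature vectors (hypothetical variable names):
# means = weighted_fuzzy_means(train_feats, train_labels)
# predicted = classify(test_feat, means)
```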

A PSRI Feature Extraction and Automatic Target Recognition Using a Cooperative Network and an MLP (Cooperative network와 MLP를 이용한 PSRI 특징추출 및 자동표적인식)

  • 전준형;김진호;최흥문
    • Journal of the Korean Institute of Telematics and Electronics B / v.33B no.6 / pp.198-207 / 1996
  • A PSRI (position, scale, and rotation invariant) feature extraction and automatic target recognition system using a cooperative network and an MLP is proposed. Position-invariant features are extracted by locating the target center with the projection and the moment in a preprocessing stage. The scale- and rotation-invariant features are extracted from the contour projection, i.e. the number of edge pixels on each of the concentric circles around the center, which is input to the cooperative network. By extracting representative PSRI features from these features and their derivatives using a max-net and a min-net, we can reduce the number of input neurons of the MLP and make the resulting automatic target recognition system less sensitive to input variance. Experiments are conducted on various complex images that are shifted, rotated, or scaled, and the results show that the proposed system is very efficient for PSRI feature extraction and automatic target recognition.
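
A minimal sketch of the contour-projection feature described above: after centering on the target, edge pixels are counted in concentric rings about the center. The edge detector, number of rings, and normalisation are illustrative assumptions; the cooperative max-net/min-net and MLP stages are not shown.

```python
import numpy as np

def concentric_ring_projection(edge_map, n_rings=32):
    """Count edge pixels in concentric rings about the edge-pixel centroid.

    Shifting the target does not change the counts (the centre follows it),
    and an in-plane rotation about the centre leaves each ring's count intact;
    letting the outermost ring track the maximum radius makes the normalised
    profile roughly scale-independent as well."""
    ys, xs = np.nonzero(edge_map)                 # coordinates of edge pixels
    cy, cx = ys.mean(), xs.mean()                 # centre from the preprocessing stage
    r = np.hypot(ys - cy, xs - cx)
    edges = np.linspace(0, r.max() + 1e-9, n_rings + 1)
    counts, _ = np.histogram(r, bins=edges)
    return counts / counts.sum()                  # normalise away the total edge count

# Example: a ring-count profile for a simple synthetic edge map.
edge_map = np.zeros((100, 100), dtype=bool)
edge_map[30, 20:80] = edge_map[70, 20:80] = True
edge_map[30:70, 20] = edge_map[30:70, 80] = True
print(concentric_ring_projection(edge_map)[:8])
```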
