• Title/Summary/Keyword: multimodal biometric

Search results: 24

Personal Biometric Identification based on ECG Features (ECG 특징추출 기반 개인 바이오 인식)

  • Yoon, Seok-Joo;Kim, Gwang-Jun
    • The Journal of the Korea institute of electronic communication sciences / v.10 no.4 / pp.521-526 / 2015
  • Research on using human biological characteristics to confirm individual identity is being actively conducted. Electrocardiogram (ECG) based biometric systems are difficult to counterfeit and do not cause skin irritation to the subject. They can easily be combined with conventional biometrics such as fingerprint and face recognition to build multimodal biometric systems. In this paper, a biometric identification method that analyses ECG waveform characteristics via Discrete Wavelet Transform (DWT) coefficients is proposed. Feature selection is performed on nine DWT coefficients using correlation analysis, and verification is carried out with an error back-propagation neural network. Applying the proposed approach to 24 subjects from the MIT-BIH QT Database, a verification rate of 98.88% was obtained.
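
The entry above describes a DWT-plus-neural-network pipeline. The sketch below is a minimal illustration assuming PyWavelets and NumPy: it extracts DWT coefficients from a segmented ECG beat and prunes them by correlation. The db4 wavelet, decomposition level, and the stand-in data are assumptions, not the authors' exact configuration.

```python
# Minimal sketch of DWT-based ECG feature extraction with correlation-guided
# selection; wavelet choice, level, and data are illustrative assumptions.
import numpy as np
import pywt

def dwt_features(ecg_beat, wavelet="db4", level=4):
    """Decompose one ECG beat and return a flat vector of DWT coefficients."""
    coeffs = pywt.wavedec(ecg_beat, wavelet, level=level)
    return np.concatenate(coeffs)

def select_by_correlation(features, labels, k=9):
    """Keep the k coefficients most correlated (in absolute value) with the labels."""
    corr = np.array([
        np.corrcoef(features[:, i], labels)[0, 1] for i in range(features.shape[1])
    ])
    idx = np.argsort(-np.abs(np.nan_to_num(corr)))[:k]
    return features[:, idx], idx

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    beats = rng.normal(size=(24, 256))      # stand-in for segmented ECG beats
    ids = rng.integers(0, 2, size=24)       # stand-in genuine/impostor labels
    feats = np.stack([dwt_features(b) for b in beats])
    selected, idx = select_by_correlation(feats, ids, k=9)
    print(selected.shape, idx)
```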

Performance Evaluation of Multimodal Biometric System for Normalization Methods and Classifiers (균등화 및 분류기에 따른 다중 생체 인식 시스템의 성능 평가)

  • Go, Hyoun-Ju;Woo, Na-Young;Shin, Yong-Nyuo;Kim, Jae-Sung;Kim, Hak-Il;Chun, Myung-Geun
    • Journal of KIISE:Software and Applications / v.34 no.4 / pp.377-388 / 2007
  • In this paper, we propose a multimodal biometric system based on face, iris, and fingerprint recognition. To aggregate the individual systems effectively, we use statistical distribution models of the matching scores for genuine and impostor cases, respectively. We then evaluate several fusion algorithms, including weighted summation, Support Vector Machine (SVM), Fisher discriminant analysis, and a Bayesian classifier. From the various experiments, we find that the performance of the multimodal biometric system is influenced by both the normalization methods and the classifiers.
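
As a concrete illustration of the normalization-plus-fusion step discussed above, here is a minimal sketch of min-max score normalization followed by weighted-sum fusion; the weights and score ranges are invented for the example and are not the values studied in the paper.

```python
# Minimal sketch of min-max score normalization and weighted-sum fusion;
# weights and native score scales are illustrative assumptions.
import numpy as np

def min_max_normalize(scores, lo=None, hi=None):
    lo = np.min(scores) if lo is None else lo
    hi = np.max(scores) if hi is None else hi
    return (scores - lo) / (hi - lo + 1e-12)

def weighted_sum_fusion(score_sets, weights):
    """Fuse normalized match scores from several matchers into one score."""
    norm = [min_max_normalize(s) for s in score_sets]
    w = np.asarray(weights, dtype=float) / np.sum(weights)
    return sum(wi * si for wi, si in zip(w, norm))

if __name__ == "__main__":
    face = np.array([0.62, 0.91, 0.15, 0.80])
    iris = np.array([310.0, 450.0, 120.0, 400.0])   # different native scale
    finger = np.array([55.0, 78.0, 20.0, 70.0])
    fused = weighted_sum_fusion([face, iris, finger], weights=[0.3, 0.4, 0.3])
    print(fused)
```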

A Robust Watermarking Algorithm using Wavelet for Biometric Information (웨이블렛을 이용한 생체정보의 강인한 워터마킹 알고리즘)

  • Lee, Wook-Jae;Lee, Dae-Jong;Moon, Ki-Young;Chun, Myung-Geun
    • Journal of the Korean Institute of Intelligent Systems / v.17 no.5 / pp.632-639 / 2007
  • This paper presents a wavelet-based watermarking algorithm that securely hides biometric features, such as face and fingerprint data, and extracts them with little distortion of the concealed data. To hide the biometric features, we propose a method for determining the insertion location based on the wavelet transform, together with an adaptive weighting method that reflects the image characteristics. The hidden features are extracted by applying the inverse wavelet transform to the watermarked image. To show its effectiveness, we analyze performance measures such as the PSNR and the correlation of the watermark features before and after watermarking. We also evaluate the effect of the watermarking algorithm on the biometric system in terms of recognition rate, which reaches 98.67% for a multimodal biometric system consisting of face and fingerprint. These results confirm that the proposed method can effectively hide and extract the biometric features without lowering the recognition rate.
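
To make the idea concrete, the following is a minimal sketch of embedding a small bit string in the wavelet detail coefficients of a cover image and recovering it with access to the original. The Haar wavelet, the fixed weight alpha, and the non-blind extraction are assumptions for illustration and do not reproduce the paper's adaptive-weight scheme.

```python
# Minimal sketch of hiding feature bits in wavelet detail coefficients and
# recovering them; band choice, alpha, and non-blind extraction are assumptions.
import numpy as np
import pywt

def embed(cover, watermark_bits, alpha=2.0):
    LL, (LH, HL, HH) = pywt.dwt2(cover, "haar")
    flat = HH.flatten()
    flat[: len(watermark_bits)] += alpha * (2 * watermark_bits - 1)  # +/- alpha per bit
    HH = flat.reshape(HH.shape)
    return pywt.idwt2((LL, (LH, HL, HH)), "haar")

def extract(watermarked, original, n_bits):
    _, (_, _, HH_w) = pywt.dwt2(watermarked, "haar")
    _, (_, _, HH_o) = pywt.dwt2(original, "haar")
    diff = (HH_w - HH_o).flatten()[:n_bits]
    return (diff > 0).astype(int)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    cover = rng.uniform(0, 255, size=(64, 64))     # stand-in face image
    bits = rng.integers(0, 2, size=32)             # stand-in fingerprint feature bits
    marked = embed(cover, bits)
    recovered = extract(marked, cover, len(bits))
    print("bit errors:", int(np.sum(recovered != bits)))
```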

A Multimodal Fusion Method Based on a Rotation Invariant Hierarchical Model for Finger-based Recognition

  • Zhong, Zhen;Gao, Wanlin;Wang, Minjuan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.1 / pp.131-146 / 2021
  • Multimodal biometric recognition has been an active topic in recent years because of its greater convenience. Since fingers are convenient for users, finger-based personal identification is widely used in practice. Hence, taking the Finger-Print (FP), Finger-Vein (FV), and Finger-Knuckle-Print (FKP) as the constituent traits, their feature representations help improve the universality and reliability of identification. To fuse the multimodal finger features effectively, a new robust representation algorithm based on a hierarchical model is proposed. First, to obtain more robust features, feature maps are computed by Gabor magnitude feature coding and then described with the Local Binary Pattern (LBP). Second, the LGBP-based feature maps are processed hierarchically in a bottom-up manner using variable rectangular and circular granules, respectively. Finally, the intensity of each granule is represented by Local-invariant Gray Features (LGFs), and the resulting descriptors are called Hierarchical Local-Gabor-based Gray Invariant Features (HLGGIFs). Experimental results reveal that the proposed algorithm is robust to rotational variation in finger pose and achieves a lower Equal Error Rate (EER) on the authors' in-house database.
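
The first stage of the pipeline above (Gabor magnitude coding followed by LBP description) can be sketched with scikit-image and SciPy as follows; the filter-bank frequencies and orientations are illustrative assumptions, and the later granule-based LGF/HLGGIF stages are not reproduced.

```python
# Minimal sketch of Gabor-magnitude coding followed by an LBP description;
# filter-bank parameters and the stand-in image are assumptions.
import numpy as np
from scipy.ndimage import convolve
from skimage.filters import gabor_kernel
from skimage.feature import local_binary_pattern

def gabor_magnitude_maps(image, frequencies=(0.1, 0.2), n_orientations=4):
    """Return Gabor magnitude responses for a small filter bank."""
    maps = []
    for f in frequencies:
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            kernel = gabor_kernel(frequency=f, theta=theta)
            real = convolve(image, np.real(kernel), mode="wrap")
            imag = convolve(image, np.imag(kernel), mode="wrap")
            maps.append(np.sqrt(real**2 + imag**2))
    return maps

def lbp_of_maps(maps, n_points=8, radius=1):
    """Encode each magnitude map with a uniform LBP (after scaling to 8-bit)."""
    out = []
    for m in maps:
        m8 = np.uint8(255 * (m - m.min()) / (m.max() - m.min() + 1e-12))
        out.append(local_binary_pattern(m8, n_points, radius, method="uniform"))
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    finger_img = rng.uniform(0, 1, size=(64, 64))   # stand-in FP/FV/FKP image
    lgbp_maps = lbp_of_maps(gabor_magnitude_maps(finger_img))
    print(len(lgbp_maps), lgbp_maps[0].shape)
```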

Multimodal Biometrics Recognition from Facial Video with Missing Modalities Using Deep Learning

  • Maity, Sayan;Abdel-Mottaleb, Mohamed;Asfour, Shihab S.
    • Journal of Information Processing Systems / v.16 no.1 / pp.6-29 / 2020
  • Biometric identification using multiple modalities has attracted the attention of many researchers because it produces more robust and trustworthy results than single-modality biometrics. In this paper, we present a novel multimodal recognition system that trains a deep learning network to automatically learn features after extracting multiple biometric modalities from a single data source, i.e., facial video clips. Using the different modalities present in the facial video clips, i.e., left ear, left profile face, frontal face, right profile face, and right ear, we train supervised denoising auto-encoders to automatically extract robust and non-redundant features. The automatically learned features are then used to train modality-specific sparse classifiers to perform multimodal recognition. Moreover, the proposed technique proves robust when some of the above modalities are missing during testing. The proposed system has three main components: detection, which consists of modality-specific detectors that automatically find images of the different modalities in the facial video clips; feature selection, which uses supervised denoising sparse auto-encoder networks to capture discriminative representations that are robust to illumination and pose variations; and classification, which consists of a set of modality-specific sparse representation classifiers for unimodal recognition, followed by score-level fusion of the recognition results of the available modalities. Experiments conducted on a constrained facial video dataset (WVU) and an unconstrained facial video dataset (HONDA/UCSD) resulted in Rank-1 recognition rates of 99.17% and 97.14%, respectively. The multimodal recognition accuracy demonstrates the superiority and robustness of the proposed approach irrespective of the illumination, non-planar movement, and pose variations present in the video clips, even when modalities are missing.
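
A denoising auto-encoder of the kind used above for per-modality feature learning can be sketched in a few lines; the PyTorch layer sizes, noise level, and tiny training loop below are illustrative assumptions rather than the architecture reported in the paper.

```python
# Minimal sketch of a denoising auto-encoder for per-modality feature learning;
# layer sizes, noise level, and training details are illustrative assumptions.
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    def __init__(self, in_dim=1024, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(hidden, in_dim), nn.Sigmoid())

    def forward(self, x, noise_std=0.1):
        noisy = x + noise_std * torch.randn_like(x)   # corrupt the input
        code = self.encoder(noisy)
        return self.decoder(code), code

if __name__ == "__main__":
    model = DenoisingAE()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.rand(32, 1024)                          # stand-in modality features
    for _ in range(5):                                # tiny demonstration loop
        recon, _ = model(x)
        loss = nn.functional.mse_loss(recon, x)       # reconstruct the clean input
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(float(loss))
```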

A study on the implementation of identification system using facial multi-modal (얼굴의 다중특징을 이용한 인증 시스템 구현)

  • 정택준;문용선
    • Journal of the Korea Institute of Information and Communication Engineering / v.6 no.5 / pp.777-782 / 2002
  • This study offers multimodal recognition, instead of the existing monomodal biometrics, by using multiple facial features in order to improve recognition accuracy and user convenience. Each biometric feature vector is obtained as follows. For the face, features are computed by principal component analysis on a wavelet multiresolution decomposition. For the lips, a filter is first used to detect the lip edges; then, using a thinned image and the least-squares method, the coefficients of a fitted curve are obtained. A further feature is derived from the distance ratios of facial parameters. A back-propagation neural network is trained on the inputs described above, and based on the experimental results we discuss the advantages and efficiency of the approach.
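
The face branch described above (wavelet multiresolution followed by PCA) might look roughly like the sketch below; the image size, Haar wavelet, decomposition level, and component count are assumptions, and stand-in arrays replace real face data.

```python
# Minimal sketch of the face branch: wavelet approximation followed by PCA;
# wavelet, level, image size, and component count are assumptions.
import numpy as np
import pywt
from sklearn.decomposition import PCA

def wavelet_approximation(image, wavelet="haar", level=2):
    """Keep only the low-frequency approximation of a 2-level 2D DWT."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    return coeffs[0].flatten()

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    faces = rng.uniform(0, 1, size=(40, 64, 64))        # stand-in face images
    approx = np.stack([wavelet_approximation(f) for f in faces])
    pca = PCA(n_components=10).fit(approx)
    face_features = pca.transform(approx)                # inputs to the classifier
    print(face_features.shape)
```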

Using Keystroke Dynamics for Implicit Authentication on Smartphone

  • Do, Son;Hoang, Thang;Luong, Chuyen;Choi, Seungchan;Lee, Dokyeong;Bang, Kihyun;Choi, Deokjai
    • Journal of Korea Multimedia Society / v.17 no.8 / pp.968-976 / 2014
  • Authentication methods on smartphones should be implicit to users, requiring minimal user interaction. Existing methods (e.g., PINs, passwords, and visual patterns) do not adequately address memorability and privacy issues. Behavioral biometrics such as keystroke dynamics and gait can be acquired easily and implicitly using the sensors integrated in a smartphone. We propose a biometric model based on keystroke dynamics for implicit authentication on smartphones. We first design a feature extraction method for keystroke dynamics, and then build a fusion model of keystroke dynamics and gait to improve on the authentication performance of a single behavioral biometric. Fusion is performed at both the feature level and the matching-score level. Experiments using a linear Support Vector Machine (SVM) classifier show that the best results are achieved with score fusion: a recognition rate of approximately 97.86% in identification mode and an error rate of approximately 1.11% in authentication mode.
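
Score-level fusion of two behavioral modalities with linear SVMs, as discussed above, can be sketched with scikit-learn as follows; the stand-in features, labels, and equal fusion weights are illustrative assumptions.

```python
# Minimal sketch of score-level fusion of keystroke and gait scores from two
# linear SVMs; data, split, and the 0.5/0.5 weights are assumptions.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)
keystroke = rng.normal(size=(200, 20))     # stand-in keystroke-timing features
gait = rng.normal(size=(200, 30))          # stand-in accelerometer gait features
y = rng.integers(0, 2, size=200)           # genuine (1) vs impostor (0)

svm_key = SVC(kernel="linear", probability=True).fit(keystroke[:150], y[:150])
svm_gait = SVC(kernel="linear", probability=True).fit(gait[:150], y[:150])

# Fuse the per-modality genuine probabilities with a simple weighted sum.
s_key = svm_key.predict_proba(keystroke[150:])[:, 1]
s_gait = svm_gait.predict_proba(gait[150:])[:, 1]
fused = 0.5 * s_key + 0.5 * s_gait
decision = (fused >= 0.5).astype(int)
print("accuracy:", float(np.mean(decision == y[150:])))
```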

A Study on Biometric Model for Information Security (정보보안을 위한 생체 인식 모델에 관한 연구)

  • Jun-Yeong Kim;Se-Hoon Jung;Chun-Bo Sim
    • The Journal of the Korea institute of electronic communication sciences / v.19 no.1 / pp.317-326 / 2024
  • Biometric recognition is a technology that verifies a person's identity by extracting information on physiological and behavioral characteristics with a specific device. Cyber threats such as forgery, duplication, and hacking of biometric traits are increasing in the field of biometrics. In response, security systems have been strengthened and made more complex, which makes them harder for individuals to use. To address this, multimodal biometric models are being studied. Existing studies have suggested feature fusion methods, but comparisons between these methods are insufficient. Therefore, in this paper, we compare and evaluate fusion methods for multimodal biometric models using fingerprint, face, and iris images. VGG-16, ResNet-50, EfficientNet-B1, EfficientNet-B4, EfficientNet-B7, and Inception-v3 were used for feature extraction, and the 'Sensor-Level', 'Feature-Level', 'Score-Level', and 'Rank-Level' fusion methods were compared and evaluated. In the comparative evaluation, the EfficientNet-B7 model showed 98.51% accuracy and high stability with 'Feature-Level' fusion. However, because the EfficientNet-B7 model is large, model light-weighting studies are needed for biometric feature fusion.
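
'Feature-Level' fusion as compared above amounts to concatenating per-modality embeddings before classification; the minimal sketch below uses random stand-in arrays in place of real EfficientNet-B7 outputs, and the classifier choice is an assumption.

```python
# Minimal sketch of 'Feature-Level' fusion: per-modality CNN embeddings are
# concatenated and classified; embeddings and classifier are stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n, n_ids = 300, 10
fingerprint_emb = rng.normal(size=(n, 128))   # stand-in CNN embeddings
face_emb = rng.normal(size=(n, 128))
iris_emb = rng.normal(size=(n, 128))
labels = rng.integers(0, n_ids, size=n)

# Feature-level fusion: concatenate the per-modality embeddings.
fused = np.concatenate([fingerprint_emb, face_emb, iris_emb], axis=1)
clf = LogisticRegression(max_iter=1000).fit(fused[:250], labels[:250])
print("held-out accuracy:", clf.score(fused[250:], labels[250:]))
```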

A Multiple Signature Authentication System Based on BioAPI for WWW (웹상의 BioAPI에 기반한 서명 다중 인증 시스템)

  • Yun Sung Keun;Kim Seong Hoon;Jun Byung Hwan
    • Journal of KIISE:Software and Applications / v.31 no.9 / pp.1226-1232 / 2004
  • Biometric authentication is a rising technology for the next-generation security market, but most biometric systems are developed using only one of various biological traits. Recently, there has been vigorous research on the standardization of biometric systems. In this paper, we propose a web-based authentication system that uses three different verifiers, based on functional, parametric, and structural approaches, for a single biometric, the handwritten signature; the system conforms to the BioAPI specification introduced by the BioAPI Consortium for the standardization of biometric technology. The system is developed with a client-server structure, and the clients and servers each consist of three layers following the BioAPI architecture. The proposed web-based multiple-verifier authentication system for a single biometric can greatly increase the confidence of authentication without requiring several additional biological measurements, although the rejection rate increases somewhat: when the three signature verifiers are combined, the false accept rate (FAR) decreases to about 1:40,000, while the false reject rate (FRR) increases by about 2.7 times. Thus, the proposed approach can serve as an effective identification method on an open network such as the Internet, and it can easily be extended to a security system using multimodal biometrics.
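
The FAR/FRR trade-off described above is characteristic of conjunctive (AND-rule) combination of verifiers; the toy calculation below illustrates that effect under an independence assumption, with invented per-verifier error rates rather than the paper's measured values.

```python
# Minimal sketch of AND-rule combination of three verifiers; the per-verifier
# FAR/FRR values are invented for illustration, not taken from the paper.
def and_rule(far_list, frr_list):
    """Accept only if every verifier accepts (verifiers assumed independent)."""
    fused_far = 1.0
    for far in far_list:
        fused_far *= far                      # an impostor must fool every verifier
    p_all_accept_genuine = 1.0
    for frr in frr_list:
        p_all_accept_genuine *= (1.0 - frr)   # a genuine user must pass every verifier
    return fused_far, 1.0 - p_all_accept_genuine

if __name__ == "__main__":
    fused_far, fused_frr = and_rule([0.03, 0.03, 0.03], [0.01, 0.01, 0.01])
    print(f"FAR={fused_far:.2e}, FRR={fused_frr:.4f}")
```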

Secure Hiding of Multimodal Biometric Information Using Watermarking Method (워터마킹 기법을 이용한 다중생체정보의 안전한 은닉)

  • Lee, Uk-Jae;Lee, Dae-Jong;Jeon, Myeong-Geun
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2007.04a / pp.103-106 / 2007
  • In this paper, we propose a wavelet-based watermarking technique that can securely hide biometric information, such as face and iris data, and effectively extract the hidden information. The face and iris feature data were extracted using Fuzzy-LDA (Fuzzy-Based Linear Discriminant Analysis). The performance of the wavelet-based watermarking algorithm was evaluated by comparing the biometric recognition rate before embedding the biometric features into the biometric image with the recognition rate after extracting the features through the watermarking algorithm. In addition, experiments on single-biometric and multimodal-biometric security were carried out and evaluated by embedding a single biometric feature and multiple biometric features, respectively.
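
The feature-extraction step above is based on Fuzzy-LDA; the sketch below shows a plain LDA projection with scikit-learn on stand-in data and does not reproduce the fuzzy-membership weighting.

```python
# Minimal sketch of an LDA projection for face/iris feature extraction; the
# fuzzy-membership weighting of Fuzzy-LDA is not reproduced, data are stand-ins.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(6)
face_vectors = rng.normal(size=(100, 64))      # stand-in face feature vectors
labels = rng.integers(0, 5, size=100)          # five enrolled subjects

lda = LinearDiscriminantAnalysis(n_components=4).fit(face_vectors, labels)
face_features = lda.transform(face_vectors)    # features to embed as the watermark
print(face_features.shape)
```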
