• Title/Summary/Keyword: multimodal biometrics


Multimodal Face Biometrics by Using Convolutional Neural Networks

  • Tiong, Leslie Ching Ow; Kim, Seong Tae; Ro, Yong Man
    • Journal of Korea Multimedia Society / v.20 no.2 / pp.170-178 / 2017
  • Biometric recognition is a challenging topic that demands high recognition accuracy. Most existing methods rely on a single biometric source, and recognition accuracy is affected by sources of variability such as illumination and appearance changes. In this paper, we propose a new multimodal biometric recognition method using convolutional neural networks, focusing on multimodal biometrics from the face and periocular regions. Through experiments, we demonstrate that a deep learning framework over multimodal facial biometric features helps achieve high recognition performance.
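
As a rough illustration of the feature-level fusion described above, the sketch below concatenates per-modality embeddings (plain Python lists stand in for CNN outputs; the function names are ours, not the authors') and compares fused vectors with cosine similarity:

```python
def fuse_features(face_feat, periocular_feat):
    # Feature-level fusion: concatenate the per-modality embeddings
    # (in the paper these would come from CNN branches).
    return list(face_feat) + list(periocular_feat)

def cosine_similarity(a, b):
    # Match score between two fused feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)
```

A real system would L2-normalize each branch before concatenation so one modality cannot dominate the fused vector.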

Development of Real-Time Verification System by Features Extraction of Multimodal Biometrics Using Hybrid Method (조합기법을 이용한 다중생체신호의 특징추출에 의한 실시간 인증시스템 개발)

  • Cho, Yong-Hyun
    • Journal of the Korean Society of Industry Convergence / v.9 no.4 / pp.263-268 / 2006
  • This paper presents a real-time verification system that extracts features from multimodal biometric signals using a hybrid method combining moment balance and independent component analysis (ICA). Moment balance reduces the computational load by extracting the valid signal and excluding needless background from the multimodal biometric data. ICA increases verification performance by removing overlapping signals through extraction of statistically independent signal bases. The multimodal biometrics are faces and fingerprints, acquired with a web camera and a fingerprint acquisition device, respectively. The proposed system was applied to fusion problems of 48 face and 48 fingerprint images (24 persons × 2 scenes) of 320×240 pixels. Experimental results show that the proposed system achieves superior verification performance in both speed and rate.
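
The abstract does not specify the moment-balance computation; a minimal sketch of one moment-based operation it could build on — locating the intensity centroid so a crop window can exclude background — is:

```python
def intensity_centroid(img):
    # Centroid (row, col) of a 2-D intensity grid; a crop window centered
    # here would keep the valid signal and drop needless background.
    # This is our illustrative guess, not the paper's exact method.
    total = sum(v for row in img for v in row)
    cy = sum(r * v for r, row in enumerate(img) for v in row) / total
    cx = sum(c * v for row in img for c, v in enumerate(row)) / total
    return cy, cx
```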


Enhancement of Authentication Performance based on Multimodal Biometrics for Android Platform (안드로이드 환경의 다중생체인식 기술을 응용한 인증 성능 개선 연구)

  • Choi, Sungpil; Jeong, Kanghun; Moon, Hyeonjoon
    • Journal of Korea Multimedia Society / v.16 no.3 / pp.302-308 / 2013
  • In this research, we explore a personal authentication system based on multimodal biometrics for a mobile computing environment, selecting face and speaker recognition for the implementation. For the face recognition part, we detect the face with the Modified Census Transform (MCT); the detected face is pre-processed by an eye detection module based on the k-means algorithm and then recognized with the Principal Component Analysis (PCA) algorithm. For the speaker recognition part, we extract features using voice end-point detection and Mel Frequency Cepstral Coefficients (MFCC), then verify the speaker with the Dynamic Time Warping (DTW) algorithm. The proposed multimodal biometric system shows an improved verification rate by combining the two biometrics described above. We implement the proposed system on the Android environment using a Galaxy S Hoppin. The proposed system presents a reduced false acceptance rate (FAR) of 1.8%, an improvement over the single-biometric systems using the face and the voice alone (4.6% and 6.7%, respectively).
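
The DTW step used for speaker verification can be sketched as follows (a textbook implementation, not the authors' code):

```python
def dtw_distance(a, b):
    # Dynamic Time Warping distance between two 1-D feature sequences
    # (e.g. per-frame MFCC summaries); 0.0 means a perfect alignment.
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]
```

DTW tolerates real-time tempo changes: a repeated frame in one sequence aligns to the same frame in the other at no extra cost.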

Multimodal biometrics system using PDA under ubiquitous environments (유비쿼터스 환경에서 PDA를 이용한 다중생체인식 시스템 구현)

  • Kwon, Man-Jun; Yang, Dong-Hwa; Kim, Yong-Sam; Lee, Dae-Jong; Chun, Myung-Geun
    • Journal of the Korean Institute of Intelligent Systems / v.16 no.4 / pp.430-435 / 2006
  • In this paper, we propose a multimodal biometric system using face and signature under ubiquitous computing environments. First, face and signature images are captured on a PDA and transmitted, together with the user ID and name, via WLAN (wireless LAN) to the server; finally, the PDA receives the verification result from the server. The multimodal biometric recognition system consists of two parts. In the client part, located on the PDA, a user interface program executes the user registration and verification processes. The server performs face recognition based on the PCA and LDA algorithms, which show excellent performance, and signature recognition based on Kernel PCA and LDA applied to signature images projected onto the vertical and horizontal axes by a grid partition method. The proposed algorithm is evaluated on several face and signature images and shows better recognition and verification results than previous unimodal biometric techniques.
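
The axis projection applied to signature images can be sketched as follows (assuming a binary image; the helper name is ours):

```python
def axis_profiles(img):
    # Project a binary signature image onto the vertical and horizontal
    # axes: row sums and column sums. In the paper these profiles feed
    # the Kernel PCA + LDA signature recognizer.
    row_profile = [sum(row) for row in img]
    col_profile = [sum(col) for col in zip(*img)]
    return row_profile, col_profile
```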

Multimodal Biometrics Recognition from Facial Video with Missing Modalities Using Deep Learning

  • Maity, Sayan; Abdel-Mottaleb, Mohamed; Asfour, Shihab S.
    • Journal of Information Processing Systems / v.16 no.1 / pp.6-29 / 2020
  • Biometric identification using multiple modalities has attracted the attention of many researchers, as it produces more robust and trustworthy results than single-modality biometrics. In this paper, we present a novel multimodal recognition system that trains a deep learning network to automatically learn features after extracting multiple biometric modalities from a single data source, i.e., facial video clips. Utilizing the different modalities present in the facial video clips, i.e., left ear, left profile face, frontal face, right profile face, and right ear, we train supervised denoising auto-encoders to automatically extract robust and non-redundant features. The automatically learned features are then used to train modality-specific sparse classifiers that perform the multimodal recognition. Moreover, the proposed technique proves robust when some of the above modalities are missing during testing. The proposed system has three main components: detection, which consists of modality-specific detectors that automatically detect images of the different modalities present in facial video clips; feature selection, which uses a supervised denoising sparse auto-encoder network to capture discriminative representations robust to illumination and pose variations; and classification, which consists of a set of modality-specific sparse representation classifiers for unimodal recognition, followed by score-level fusion of the recognition results of the available modalities. Experiments conducted on a constrained facial video dataset (WVU) and an unconstrained facial video dataset (HONDA/UCSD) resulted in Rank-1 recognition rates of 99.17% and 97.14%, respectively. The multimodal recognition accuracy demonstrates the superiority and robustness of the proposed approach regardless of the illumination, non-planar movement, and pose variations present in the video clips, even when modalities are missing.
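
The final fusion over whichever modalities survive detection could be sketched with a simple sum rule over the available scores (the paper's exact fusion rule may differ):

```python
def fuse_scores(modality_scores):
    # Score-level fusion over available modalities. Entries set to None
    # model modalities missing at test time; only present scores are fused.
    avail = [s for s in modality_scores.values() if s is not None]
    if not avail:
        raise ValueError("no modality available")
    return sum(avail) / len(avail)
```

Averaging only over present modalities is what lets the system degrade gracefully when, say, one ear is occluded in the video.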

A study of using quality for Radial Basis Function based score-level fusion in multimodal biometrics (RBF 기반 유사도 단계 융합 다중 생체 인식에서의 품질 활용 방안 연구)

  • Choi, Hyun-Soek; Shin, Mi-Young
    • Journal of the Institute of Electronics Engineers of Korea CI / v.45 no.5 / pp.192-200 / 2008
  • Multimodal biometrics performs personal authentication and verification using two or more types of biometric data. RBF-based score-level fusion applies a pattern recognition algorithm to multimodal biometrics, seeking the optimal decision boundary to classify score feature vectors, each of which consists of the matching scores obtained from several unimodal biometric systems for one sample. In this setting, all matching scores are assumed to be equally reliable. However, recent research reports that the quality of the input sample affects biometric results, and matching scores of low reliability caused by low-quality samples are currently not considered in pattern recognition modelling for multimodal biometrics. To solve this problem, we propose an RBF-based score-level fusion approach that employs the quality information of the input biometric data to adjust the decision boundary. As a result, the proposed method using quality information showed better recognition performance than both unimodal biometrics and the usual RBF-based score-level fusion without quality information.
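
The paper adjusts an RBF classifier's decision boundary with quality information; a much simpler quality-weighted sum conveys the same intuition (note this substitutes a weighted-sum rule for the paper's RBF approach):

```python
def quality_weighted_score(scores, qualities):
    # Weight each modality's matching score by its sample quality in [0, 1],
    # so scores from low-quality (less reliable) samples contribute less.
    if not scores or len(scores) != len(qualities):
        raise ValueError("scores and qualities must have the same non-empty length")
    return sum(s * q for s, q in zip(scores, qualities)) / sum(qualities)
```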

Authentication Performance Optimization for Smart-phone based Multimodal Biometrics (스마트폰 환경의 인증 성능 최적화를 위한 다중 생체인식 융합 기법 연구)

  • Moon, Hyeon-Joon; Lee, Min-Hyung; Jeong, Kang-Hun
    • Journal of Digital Convergence / v.13 no.6 / pp.151-156 / 2015
  • In this paper, we propose a personal multimodal biometric authentication system based on face detection, face recognition, and speaker verification for the smart-phone environment. The proposed system detects the face with the Modified Census Transform algorithm, then finds the eye positions in the face using a Gabor filter and the k-means algorithm. After preprocessing the detected face and eye positions, recognition is performed with the Linear Discriminant Analysis algorithm. In the subsequent speaker verification process, features are extracted from the end points of the speech data and the Mel Frequency Cepstral Coefficients, and the speaker is verified with the Dynamic Time Warping algorithm because speech features change in real time. The proposed multimodal biometric system fuses the face and speech features, optimizing the internal operations with an integer representation, for smart-phone based real-time face detection, recognition, and speaker verification. In this way the multimodal biometric system forms a reliable system with reasonable performance.
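
The abstract mentions optimizing internal operations with an integer representation; a hedged sketch of fixed-point score fusion follows (the scale and weights are illustrative choices, not values from the paper):

```python
SCALE = 1000  # fixed-point scale: 0.8 -> 800 (illustrative choice)

def to_fixed(score):
    # Convert a floating-point score in [0, 1] to fixed-point.
    return int(round(score * SCALE))

def fuse_fixed(face_fp, speech_fp, w_face=600, w_speech=400):
    # Integer-only weighted sum; the weights sum to SCALE so the result
    # stays in the same fixed-point range. No float math is needed,
    # which suits resource-limited phone hardware.
    return (w_face * face_fp + w_speech * speech_fp) // SCALE
```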

Person Authentication using Multi-Modal Biometrics (다중생체인식을 이용한 사용자 인증)

  • 이경희; 최우용; 지형근; 반성범; 정용화
    • Proceedings of the Korea Institutes of Information Security and Cryptology Conference / 2003.07a / pp.204-207 / 2003
  • Biometric technologies are preferred over traditional password- or token-based methods for their reliability, but they are highly sensitive to environmental conditions, which limits their performance. To overcome the limits of single-biometric technology, various studies on multimodal biometrics, which combine several kinds of biometric information, are underway. This paper briefly introduces multimodal biometric technology and confirms, through multimodal recognition experiments combining face and voice information using Support Vector Machines (SVM), that performance can be improved.
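
At verification time, SVM fusion of face and voice reduces to evaluating a decision function over the 2-D score vector; the weights and bias below are illustrative placeholders, not trained values:

```python
def svm_decide(face_score, voice_score, w=(1.2, 0.8), b=-1.0):
    # Decision function of a pre-trained linear SVM over the score vector
    # (face_score, voice_score); True accepts the claimed identity.
    # w and b are hypothetical; real values come from SVM training.
    return w[0] * face_score + w[1] * voice_score + b >= 0.0
```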


Fusion algorithm for Integrated Face and Gait Identification (얼굴과 발걸음을 결합한 인식)

  • Nizami, Imran Fareed; Hong, Sug-Jun; Lee, Hee-Sung; Ann, Toh-Kar; Kim, Eun-Tai; Park, Mig-Non
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2007.11a / pp.15-18 / 2007
  • Identification of humans from multiple viewpoints is an important task for surveillance and security purposes. For optimal performance, the system should use the maximum information available from its sensors. Multimodal biometric systems can utilize more than one physiological or behavioral characteristic for enrollment, verification, or identification. Since gait alone is not yet established as a highly distinctive feature, this paper presents an approach that fuses face and gait for identification. We consider the single-camera case, i.e., both face and gait recognition are performed on the same set of images captured by one camera, with the aim of exploiting the maximum amount of information available in those images. Fusion is performed at the decision level. The proposed algorithm is tested on the NLPR database.
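
Decision-level fusion combines each modality's binary accept/reject decision; a minimal sketch of the common AND/OR rules is below (the abstract does not specify which rule the paper adopts):

```python
def decision_fusion(face_accept, gait_accept, rule="OR"):
    # Decision-level fusion of per-modality decisions. "OR" favors
    # convenience (either modality suffices to accept); "AND" favors
    # security (both must agree).
    if rule == "AND":
        return face_accept and gait_accept
    if rule == "OR":
        return face_accept or gait_accept
    raise ValueError("unknown rule: " + rule)
```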


A Quality Assessment Method of Biometrics for Estimating Authentication Result in User Authentication System (사용자 인증시스템의 인증결과 예측을 위한 바이오정보의 품질평가기법)

  • Kim, Ae-Young; Lee, Sang-Ho
    • Journal of KIISE: Computing Practices and Letters / v.16 no.2 / pp.242-246 / 2010
  • In this paper, we propose a quality assessment method for biometric data that estimates the authentication result of a user authentication system. The proposed method computes a quality score called CIMR (Confidence Interval Matching Ratio) through small-sample analysis such as the T-test. We use the CIMR-based quality assessment method to test how well it distinguishes between the various biometrics in a multimodal biometric system, and we test the predictability of authentication results for the obtained biometrics using the sample mean and variance in the T-test-based CIMR. As a result, we achieved up to 88% accuracy in estimating user authentication results.
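
The abstract gives only the idea behind CIMR, not its formula; one plausible reading — the fraction of sample scores falling inside the t-based confidence interval of their mean — can be sketched as follows (the paper's exact definition may differ):

```python
import math

def cimr(scores, t_crit=2.262):
    # Confidence Interval Matching Ratio (our sketch): share of scores
    # inside the t-based confidence interval of the sample mean. t_crit
    # defaults to the two-sided 95% critical value for df = 9 (n = 10).
    n = len(scores)
    mean = sum(scores) / n
    var = sum((x - mean) ** 2 for x in scores) / (n - 1)  # sample variance
    half = t_crit * math.sqrt(var / n)
    return sum(1 for x in scores if mean - half <= x <= mean + half) / n
```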