• Title/Abstract/Keywords: unimodal biometric

Search results: 2 items

다중 생체인식 기반의 인증기술과 과제 (Technology Review on Multimodal Biometric Authentication)

  • 조병철;박종만
    • 한국통신학회논문지 / Vol.40 No.1 / pp.132-141 / 2015
  • Existing unimodal biometric authentication methods are used mainly for recognition and identification, and their real-time personal authentication security for individual service use cases is weak. Research and development of methods that improve security performance through real-time verification and authentication based on multimodal biometrics is therefore essential. This paper focuses on presenting domestic technology development strategies and tasks, based on an analysis of advanced technologies and patent trends in authentication that combines multiple biometric parameters.

Multimodal Biometrics Recognition from Facial Video with Missing Modalities Using Deep Learning

  • Maity, Sayan;Abdel-Mottaleb, Mohamed;Asfour, Shihab S.
    • Journal of Information Processing Systems / Vol.16 No.1 / pp.6-29 / 2020
  • Biometrics identification using multiple modalities has attracted the attention of many researchers, as it produces more robust and trustworthy results than single-modality biometrics. In this paper, we present a novel multimodal recognition system that trains a deep learning network to automatically learn features after extracting multiple biometric modalities from a single data source, i.e., facial video clips. Utilizing the different modalities present in the facial video clips, i.e., left ear, left profile face, frontal face, right profile face, and right ear, we train supervised denoising auto-encoders to automatically extract robust and non-redundant features. The automatically learned features are then used to train modality-specific sparse classifiers that perform the multimodal recognition. Moreover, the proposed technique has proven robust when some of the above modalities were missing during testing. The proposed system has three main components: detection, which consists of modality-specific detectors that automatically detect images of the different modalities present in facial video clips; feature selection, which uses a supervised denoising sparse auto-encoder network to capture discriminative representations that are robust to illumination and pose variations; and classification, which consists of a set of modality-specific sparse representation classifiers for unimodal recognition, followed by score-level fusion of the recognition results of the available modalities. Experiments conducted on the constrained facial video dataset (WVU) and the unconstrained facial video dataset (HONDA/UCSD) resulted in Rank-1 recognition rates of 99.17% and 97.14%, respectively. The multimodal recognition accuracy demonstrates the superiority and robustness of the proposed approach irrespective of the illumination, non-planar movement, and pose variations present in the video clips, even when modalities are missing.
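
The score-level fusion stage described in the second abstract can be illustrated with a minimal sketch: each available modality's classifier produces a vector of match scores over the gallery identities, the scores are normalized per modality, and the normalized vectors are averaged over only the modalities that are present, so a missing modality simply drops out. This is an illustrative interpretation using min-max normalization and the sum rule (common choices in score-level fusion); the paper's exact normalization and fusion rules, and the function/parameter names below, are assumptions, not the authors' implementation.

```python
import numpy as np

def score_level_fusion(modality_scores):
    """Fuse per-modality match scores by min-max normalization followed
    by an average over the available modalities.

    modality_scores: dict mapping a modality name (e.g. "frontal_face")
    to a 1-D array of match scores over the gallery identities, or to
    None if that modality was missing in the probe video.
    (Hypothetical interface for illustration only.)
    """
    fused, count = None, 0
    for name, scores in modality_scores.items():
        if scores is None:          # modality missing at test time: skip it
            continue
        s = np.asarray(scores, dtype=float)
        rng = s.max() - s.min()
        # Min-max normalize so modalities with different score scales
        # contribute comparably; degenerate (constant) scores contribute 0.
        norm = (s - s.min()) / rng if rng > 0 else np.zeros_like(s)
        fused = norm if fused is None else fused + norm
        count += 1
    if count == 0:
        raise ValueError("no modality available for fusion")
    return fused / count            # sum rule, averaged over present modalities

# Rank-1 decision: the gallery identity with the highest fused score.
scores = {
    "frontal_face": [0.2, 0.9, 0.4],
    "left_ear":     [0.1, 0.8, 0.3],
    "right_ear":    None,           # this modality was not detected
}
print(int(np.argmax(score_level_fusion(scores))))  # -> 1
```

Averaging (rather than summing) over the present modalities keeps the fused score on the same scale regardless of how many modalities were detected, which is what makes the decision rule degrade gracefully when modalities are missing.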