• Title/Summary/Keyword: Facial Identity

Vision-based Authentication and Registration of Facial Identity in Hospital Information System

  • Bae, Seok-Chan;Lee, Yon-Sik;Choi, Sun-Woong
    • Journal of the Korea Society of Computer and Information, v.24 no.12, pp.59-65, 2019
  • A Hospital Information System covers a wide range of information in the medical domain, from the hospital's overall administrative work to the clinical work of doctors. In this paper, we propose vision-based authentication and registration of facial identity in a Hospital Information System using OpenCV. Using the proposed security module program for vision-based authentication and registration of facial identity, the hospital information system is designed to enhance security by registering the faces of hospital personnel and to handle the reception, treatment, and prescription processes without secondary leakage of personal information. The implemented security module eliminates the need to print, expose, and read the existing sticker-type paper tags and wristband-type personal information that nurses check in the hospital information system. In contrast to the original workflow, an ID and password are entered into the security module instead, improving both privacy and the recognition rate.
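
The entry above names OpenCV as the underlying library. As an illustration only, and not the authors' security module, a minimal face registration and verification sketch with OpenCV's Haar cascade detector and LBPH recognizer (which requires the opencv-contrib-python build) might look like this; the staff IDs, threshold, and image sources are hypothetical placeholders.

```python
# Illustrative sketch only: face registration/verification with OpenCV
# (Haar cascade detection + LBPH recognition, requires opencv-contrib-python).
import cv2
import numpy as np

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
recognizer = cv2.face.LBPHFaceRecognizer_create()

def crop_face(bgr_image):
    """Return the largest detected face as a grayscale crop, or None."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    return cv2.resize(gray[y:y + h, x:x + w], (128, 128))

def register(staff_images):
    """staff_images: dict mapping integer staff ID -> list of BGR images."""
    samples, labels = [], []
    for staff_id, images in staff_images.items():
        for img in images:
            face = crop_face(img)
            if face is not None:
                samples.append(face)
                labels.append(staff_id)
    recognizer.train(samples, np.array(labels))

def authenticate(bgr_image, threshold=70.0):
    """Return the matched staff ID, or None if no confident match."""
    face = crop_face(bgr_image)
    if face is None:
        return None
    staff_id, distance = recognizer.predict(face)  # lower distance = better
    return staff_id if distance < threshold else None
```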

Face Recognition Using a Facial Recognition System

  • Almurayziq, Tariq S;Alazani, Abdullah
    • International Journal of Computer Science & Network Security, v.22 no.9, pp.280-286, 2022
  • A facial recognition system is a biometric technology. It is simpler to apply and has a broader operating range than fingerprints, iris scans, signatures, and similar modalities. The system combines two technologies: face detection and face recognition. This study aims to develop a facial recognition system that recognizes people's faces. A facial recognition system maps facial characteristics from photos or videos and compares that information with a facial database to find a match, which helps identify a face. The proposed system assists in face recognition: it records several images, processes the recorded images, checks for a match in the database, and returns the result. The developed system can also recognize multiple faces in live recordings.
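
The abstract describes matching recorded images against a database and handling multiple faces in live video. A minimal sketch of that pipeline, using the open-source face_recognition library as one possible toolkit (an assumption, not the paper's implementation), with placeholder enrolment photos:

```python
# Illustrative sketch only: matching multiple faces in a live recording
# against a small database, using the face_recognition library.
import cv2
import face_recognition

# Hypothetical database: name -> 128-d encoding; each enrolment photo is
# assumed to contain exactly one face.
known_names = ["alice", "bob"]
known_encodings = [
    face_recognition.face_encodings(face_recognition.load_image_file(p))[0]
    for p in ["alice.jpg", "bob.jpg"]  # placeholder file names
]

video = cv2.VideoCapture(0)  # built-in camera
while True:
    ok, frame = video.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    locations = face_recognition.face_locations(rgb)        # all faces in frame
    encodings = face_recognition.face_encodings(rgb, locations)
    for (top, right, bottom, left), enc in zip(locations, encodings):
        matches = face_recognition.compare_faces(known_encodings, enc, tolerance=0.6)
        name = known_names[matches.index(True)] if True in matches else "unknown"
        cv2.rectangle(frame, (left, top), (right, bottom), (0, 255, 0), 2)
        cv2.putText(frame, name, (left, top - 8),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 1)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
video.release()
cv2.destroyAllWindows()
```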

ISFRNet: A Deep Three-stage Identity and Structure Feature Refinement Network for Facial Image Inpainting

  • Yan Wang;Jitae Shin
    • KSII Transactions on Internet and Information Systems (TIIS), v.17 no.3, pp.881-895, 2023
  • Modern image inpainting techniques based on deep learning have achieved remarkable performance, and increasing effort is being devoted to repairing larger and more complex missing areas, although this remains challenging, especially for facial image inpainting. For a face image with a huge missing area, very few valid pixels are available; nevertheless, people can imagine the complete picture in their mind according to their subjective will. It is important to simulate this capability while preserving the identity features of the face as much as possible. To achieve this goal, we propose a three-stage network model, which we refer to as the identity and structure feature refinement network (ISFRNet). ISFRNet is based on 1) a pre-trained pSp-StyleGAN model that generates an extremely realistic face image with rich structural features; 2) a shallow structured network with a small receptive field; and 3) a modified U-Net with two encoders and a decoder, which has a large receptive field. We use peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), L1 loss and learned perceptual image patch similarity (LPIPS) to evaluate our model. When the missing region is 20%-40%, the four metric scores of our model are 28.12, 0.942, 0.015 and 0.090, respectively. When the missing area is between 40% and 60%, the scores are 23.31, 0.840, 0.053 and 0.177, respectively. Our inpainting network not only guarantees excellent recovery of face identity features but also exhibits state-of-the-art performance compared to other multi-stage refinement models.
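
The evaluation relies on four standard metrics (PSNR, SSIM, L1, LPIPS). A minimal sketch of how such scores can be computed for an inpainted image against its ground truth, assuming scikit-image, PyTorch, and the lpips package (this illustrates the metrics only, not ISFRNet itself):

```python
# Illustrative sketch: the four evaluation metrics reported above.
import numpy as np
import torch
import lpips
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def inpainting_metrics(gt, pred):
    """gt, pred: HxWx3 uint8 RGB images of the same size."""
    psnr = peak_signal_noise_ratio(gt, pred, data_range=255)
    ssim = structural_similarity(gt, pred, channel_axis=2, data_range=255)
    l1 = np.mean(np.abs(gt.astype(np.float32) - pred.astype(np.float32))) / 255.0

    # LPIPS expects NCHW tensors scaled to [-1, 1].
    to_tensor = lambda x: torch.from_numpy(x).permute(2, 0, 1).unsqueeze(0).float() / 127.5 - 1.0
    loss_fn = lpips.LPIPS(net="alex")
    lp = loss_fn(to_tensor(gt), to_tensor(pred)).item()
    return {"PSNR": psnr, "SSIM": ssim, "L1": l1, "LPIPS": lp}
```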

Conflict Resolution: Analysis of the Existing Theories and Resolution Strategies in Relation to Face Recognition

  • A. A. Alabi;B. S. Afolabi;B. I. Akhigbe;A. A. Ayoade
    • International Journal of Computer Science & Network Security, v.23 no.9, pp.166-176, 2023
  • A scenario known as conflict may arise in face recognition as a result of disparity-related issues (such as expression, distortion, occlusion, and others), leading to a compromise of someone's identity or a contradiction of the intended message. Addressing this requires determining and applying appropriate procedures drawn from the various conflict theories, both in terms of concepts and of resolution strategies. Theories such as Marxist theory, game theory (Prisoner's Dilemma, Matching Pennies, the Chicken problem), Lanchester theory, and information theory were analyzed in relation to conflict in facial images by answering selected questions about resolving facial conflict. It was observed that the scenarios presented in Marxist theory agree with the form of resolution expected when analyzing conflict and related issues in face recognition. The study found that conflict in facial images can be better analyzed using the concept introduced by Marxist theory in relation to information theory, because its resolution strategy seeks a balanced outcome rather than the win-or-lose scenarios applied in other concepts. This was further supported by reference to the main mechanisms and outcome scenarios applicable in information theory.
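
As a reminder of one of the game-theoretic models the survey cites, the sketch below writes down the textbook Prisoner's Dilemma payoff matrix and checks its Nash equilibrium; the payoff values are standard textbook numbers, not taken from the paper, and the "win or lose" equilibrium it finds is the kind of outcome the study contrasts with balance-seeking resolution.

```python
# Textbook Prisoner's Dilemma (payoffs are the standard example values):
# mutual defection is the unique Nash equilibrium.
payoff = {  # (row action, column action) -> (row payoff, column payoff)
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
actions = ["cooperate", "defect"]

def is_nash(a_row, a_col):
    """Neither player can gain by unilaterally switching actions."""
    row_ok = all(payoff[(a_row, a_col)][0] >= payoff[(alt, a_col)][0] for alt in actions)
    col_ok = all(payoff[(a_row, a_col)][1] >= payoff[(a_row, alt)][1] for alt in actions)
    return row_ok and col_ok

print([(r, c) for r in actions for c in actions if is_nash(r, c)])
# -> [('defect', 'defect')]
```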

A Study on Face Component Extraction for Automatic Generation of Personal Avatar (개인아바타 자동 생성을 위한 얼굴 구성요소의 추출에 관한 연구)

  • Choi Jae Young;Hwang Seung Ho;Yang Young Kyu;Whangbo Taeg Ken
    • Journal of Internet Computing and Services, v.6 no.4, pp.93-102, 2005
  • In recent times, netizens have frequently used virtual character 'avatar' schemes to present their own identity, so there is a strong need for avatars that resemble the user. This paper proposes a technique for extracting the facial region and facial features used to generate an avatar automatically. For facial feature extraction, the method uses an active contour model (ACM) and edge information. In the facial region extraction process, the proposed method also reduces the effect of lighting and poor image quality in low-resolution pictures; this is achieved by using the variation of facial area size as the external energy of the ACM. Our experiments show a success rate of 92% for extracting facial regions and an accuracy of 83.4% for extracting facial feature components. These results provide good evidence that the suggested method can extract facial regions and features accurately; moreover, the technique can be used to handle features for the pattern parts of an automatic avatar generation system in the near future.
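
The method rests on an active contour model driven by edge information. A minimal, generic snake-fitting sketch with scikit-image is shown below as an illustration of that building block; the input image, initial circle, and parameter values are arbitrary assumptions rather than the authors' settings.

```python
# Illustrative sketch only: fitting an active contour (snake) around a face
# region with scikit-image.
import numpy as np
from skimage import io, color, filters
from skimage.segmentation import active_contour

image = color.rgb2gray(io.imread("face.jpg"))        # hypothetical input image
smoothed = filters.gaussian(image, sigma=3)           # smoothed edges drive the snake

# Initialise the snake as a circle roughly centred on the frame.
s = np.linspace(0, 2 * np.pi, 200)
rows = image.shape[0] / 2 + image.shape[0] / 3 * np.sin(s)
cols = image.shape[1] / 2 + image.shape[1] / 3 * np.cos(s)
init = np.stack([rows, cols], axis=1)

snake = active_contour(smoothed, init, alpha=0.015, beta=10, gamma=0.001)
print("fitted contour points:", snake.shape)          # (200, 2) boundary points
```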

Rapid Implementation of 3D Facial Reconstruction from a Single Image on an Android Mobile Device

  • Truong, Phuc Huu;Park, Chang-Woo;Lee, Minsik;Choi, Sang-Il;Ji, Sang-Hoon;Jeong, Gu-Min
    • KSII Transactions on Internet and Information Systems (TIIS), v.8 no.5, pp.1690-1710, 2014
  • In this paper, we propose the rapid implementation of 3-dimensional (3D) facial reconstruction from a single frontal face image and introduce a design for its application on a mobile device. The proposed system can effectively reconstruct human faces in 3D using an approach robust to lighting conditions and a fast, Canonical Correlation Analysis (CCA)-based method to estimate depth. The reconstruction system is built by first creating a 3D facial mapping from a personal identity vector of a face image. This mapping is then applied to real-world images captured with a built-in camera on a mobile device to form the corresponding 3D depth information. Finally, the facial texture from the face image is extracted and added to the reconstruction results. Experiments with an Android phone show that the implementation of this system as an Android application performs well. The advantage of the proposed method is that it easily reconstructs almost any facial image captured in the real world, with fast computation. This is clearly demonstrated in the Android application, which requires only a short time to reconstruct the 3D depth map.
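
Depth estimation here is based on Canonical Correlation Analysis. A minimal sketch of learning a CCA mapping from 2D appearance features to depth vectors with scikit-learn follows; the array shapes, component count, and random training data are hypothetical placeholders, not the paper's pipeline.

```python
# Illustrative sketch only: CCA mapping from appearance features to depth.
import numpy as np
from sklearn.cross_decomposition import CCA

# Hypothetical training data: each row pairs a flattened frontal-face feature
# vector with the corresponding flattened depth map from a 3D face dataset.
X_appearance = np.random.rand(200, 1024)   # 200 faces, 1024-d appearance features
Y_depth = np.random.rand(200, 1024)        # matching 1024-d depth vectors

cca = CCA(n_components=30)                 # number of canonical pairs is a guess
cca.fit(X_appearance, Y_depth)

def estimate_depth(appearance_vec):
    """Project a new face into canonical space and predict its depth vector."""
    return cca.predict(appearance_vec.reshape(1, -1))[0]

depth = estimate_depth(np.random.rand(1024))
print(depth.shape)                          # (1024,) estimated depth values
```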

Local Appearance-based Face Recognition Using SVM and PCA (SVM과 PCA를 이용한 국부 외형 기반 얼굴 인식 방법)

  • Park, Seung-Hwan;Kwak, No-Jun
    • Journal of the Institute of Electronics Engineers of Korea SP, v.47 no.3, pp.54-60, 2010
  • The local appearance-based method is a face recognition approach that divides a face image into small areas and extracts features from each area using statistical analysis. It then decides the identity of the face image with a voting scheme that integrates the classification results of the individual areas. The conventional local appearance-based method divides face images into small pieces and uses all of them in the recognition process. In this paper, we propose a local appearance-based method that makes use of only the relatively important facial components. The proposed method detects the facial components, such as the eyes, nose, and mouth, that differ most from person to person, locating them precisely using support vector machines (SVM). Based on the detected facial components, a number of small images containing the facial parts are constructed, and features are extracted from each component image using principal component analysis (PCA). We compared the performance of the proposed method with that of the conventional methods; the results show that the proposed method outperforms the conventional local appearance-based method while preserving its advantages.
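
A minimal sketch of the overall idea (per-component PCA features, one SVM per component, and majority voting) is given below; it assumes the component crops and identity labels are prepared elsewhere, and the component list, PCA dimensionality, and kernel choice are illustrative guesses rather than the paper's settings.

```python
# Illustrative sketch only: PCA features per facial component, one SVM per
# component, and identity decision by majority vote.
import numpy as np
from collections import Counter
from sklearn.decomposition import PCA
from sklearn.svm import SVC

COMPONENTS = ["left_eye", "right_eye", "nose", "mouth"]

class LocalAppearanceRecognizer:
    def __init__(self, n_components=40):
        self.pca = {c: PCA(n_components=n_components) for c in COMPONENTS}
        self.svm = {c: SVC(kernel="rbf") for c in COMPONENTS}

    def fit(self, crops, labels):
        """crops: dict component -> (n_samples, n_pixels) array of flattened patches."""
        for c in COMPONENTS:
            feats = self.pca[c].fit_transform(crops[c])
            self.svm[c].fit(feats, labels)

    def predict(self, crops):
        """crops: dict component -> flattened patch for one probe face."""
        votes = []
        for c in COMPONENTS:
            feats = self.pca[c].transform(crops[c].reshape(1, -1))
            votes.append(self.svm[c].predict(feats)[0])
        return Counter(votes).most_common(1)[0][0]   # majority vote on identity
```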

Implementation and Utilization of Decentralized Identity-Based Mobile Student ID (분산 ID 기반 모바일 학생증 구현과 활용)

  • Cho, Seung-Hyun;Kang, Min-Jeong;Kang, Ji-Yun;Lee, Ji-Eun;Rhee, Kyung-Hyune
    • Journal of the Korea Institute of Information Security & Cryptology, v.31 no.6, pp.1115-1126, 2021
  • In this paper, we develop a mobile student ID that provides self-sovereign identity (SSI) and replaces the conventional plastic student ID containing a student's private information such as name, student number, and facial photo. The implemented mobile student ID solves the problem of identity exposure caused by the loss or theft of a plastic student ID. It follows the structure and process of the FRANCHISE model, which is built on the concept of blockchain-based decentralized identity (DID) and is specialized for convenience as an electronic student ID delivered through a smartphone application. In addition, it protects the student's privacy by letting the student control their own personal information. Using a smartphone, it not only identifies the student easily but also extends to several services such as participation in school events, online authentication, and student exchange programs among colleges.
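
As a toy stand-in for the credential flow described above (no blockchain, and not the FRANCHISE model itself), the sketch below issues and verifies a signed student-ID claim with an Ed25519 key pair from the cryptography package; the DID string and student number are hypothetical.

```python
# Illustrative sketch only: issue and verify a signed student-ID credential.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# University (issuer) key pair; the student holds only the signed credential.
issuer_key = Ed25519PrivateKey.generate()
issuer_pub = issuer_key.public_key()

def issue_credential(student_did, student_number):
    """Issuer signs a minimal credential bound to the student's DID."""
    claim = {"type": "StudentID", "holder": student_did, "studentNumber": student_number}
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "signature": issuer_key.sign(payload).hex()}

def verify_credential(credential):
    """Verifier checks the issuer's signature without contacting the issuer."""
    payload = json.dumps(credential["claim"], sort_keys=True).encode()
    try:
        issuer_pub.verify(bytes.fromhex(credential["signature"]), payload)
        return True
    except InvalidSignature:
        return False

vc = issue_credential("did:example:123abc", "20211234")  # hypothetical DID
print(verify_credential(vc))   # True
```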

A Hybrid Nonsmooth Nonnegative Matrix Factorization for face representation (다양한 얼굴 표현을 위한 하이브리드 nsNMF 방법)

  • Lee, Sung-Joo;Park, Kang-Ryoung;Kim, Jai-Hie
    • Proceedings of the IEEK Conference, 2008.06a, pp.957-958, 2008
  • Human facial appearance varies globally and locally according to identity, pose, illumination, and expression. In this paper, we propose an appearance model based on hybrid nonsmooth nonnegative matrix factorization (hybrid nsNMF) to represent facial appearances that vary both globally and locally. Instead of using a single smoothing matrix as in nsNMF, we use two different smoothing matrices and combine them to extract global and local bases at the same time.
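
A minimal sketch of nsNMF with the usual smoothing matrix S(θ) = (1 − θ)I + (θ/q)·11ᵀ is given below; the "hybrid" combination is approximated here by simply averaging two smoothing matrices with different θ values, which is an assumption rather than the authors' exact formulation.

```python
# Illustrative sketch only: nsNMF with a blended smoothing matrix.
import numpy as np

def smoothing_matrix(q, theta):
    """S(theta) = (1 - theta) * I + (theta / q) * 11^T  (nsNMF smoothing)."""
    return (1 - theta) * np.eye(q) + (theta / q) * np.ones((q, q))

def ns_nmf(V, q, theta_global=0.2, theta_local=0.7, n_iter=200, eps=1e-9):
    """Factorise V (d x n, nonnegative) as W S H using multiplicative updates."""
    d, n = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((d, q))
    H = rng.random((q, n))
    # Hybrid smoothing (assumption): blend a mildly and a strongly smoothing matrix.
    S = 0.5 * (smoothing_matrix(q, theta_global) + smoothing_matrix(q, theta_local))
    for _ in range(n_iter):
        WS = W @ S
        H *= (WS.T @ V) / (WS.T @ WS @ H + eps)
        SH = S @ H
        W *= (V @ SH.T) / (W @ SH @ SH.T + eps)
    return W, S, H

V = np.abs(np.random.rand(64 * 64, 100))   # e.g. 100 vectorised face images
W, S, H = ns_nmf(V, q=25)
print(W.shape, H.shape)                     # (4096, 25) (25, 100)
```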

Action Unit Based Facial Features for Subject-independent Facial Expression Recognition (인물에 독립적인 표정인식을 위한 Action Unit 기반 얼굴특징에 관한 연구)

  • Lee, Seung Ho;Kim, Hyung-Il;Park, Sung Yeong;Ro, Yong Man
    • Proceedings of the Korea Information Processing Society Conference, 2015.04a, pp.881-883, 2015
  • In practical facial expression recognition applications, the subjects appearing at test time are frequently not present in the training data, which degrades performance. In this paper, we propose facial features for subject-independent facial expression recognition. The proposed method uses, as expression features, geometric information based on facial muscle movements (Action Units (AUs)) that are common across subjects. As a result, the influence of a subject's unique identity is reduced and expression-related information is emphasized. In subject-independent expression recognition experiments, the method achieved a high recognition rate of 86% and a very fast classification speed of 3.5 ms per test video sequence (in Matlab).
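
A minimal sketch of AU-style geometric features computed from 68-point facial landmarks is shown below; the landmark indices follow the common iBUG 68-point convention, and the specific distance features and their AU associations are illustrative choices, not the paper's exact definition.

```python
# Illustrative sketch only: simple AU-style geometric features from landmarks,
# normalised so that expression-related distances rather than identity-specific
# appearance drive a downstream classifier.
import numpy as np

def au_geometric_features(landmarks):
    """landmarks: (68, 2) array of (x, y) points for one face."""
    pts = np.asarray(landmarks, dtype=float)
    # Normalise by inter-ocular distance to reduce identity/scale influence.
    left_eye = pts[36:42].mean(axis=0)
    right_eye = pts[42:48].mean(axis=0)
    scale = np.linalg.norm(right_eye - left_eye)

    def dist(i, j):
        return np.linalg.norm(pts[i] - pts[j]) / scale

    return np.array([
        dist(21, 22),   # inner eyebrow gap (brow lowerer, ~AU4)
        dist(37, 41),   # left eye opening (lid raiser/tightener, ~AU5/AU7)
        dist(43, 47),   # right eye opening
        dist(51, 57),   # mouth opening (jaw drop, ~AU26)
        dist(48, 54),   # mouth width (lip corner puller, ~AU12)
    ])

# These per-frame features can then be fed to any classifier trained across
# subjects, since they carry little identity information.
```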