• Title/Abstract/Keywords: Facial Information

Search results: 1,065 items

Face Recognition Using a Facial Recognition System

  • Almurayziq, Tariq S;Alazani, Abdullah
    • International Journal of Computer Science & Network Security
    • /
    • Vol. 22 No. 9
    • /
    • pp.280-286
    • /
    • 2022
  • A facial recognition system is a biometric technology. It is simpler to apply, and its working range is broader, than fingerprints, iris scans, signatures, etc. The system utilizes two technologies: face detection and face recognition. This study aims to develop a facial recognition system to recognize people's faces. A facial recognition system can map facial characteristics from photos or videos and compare the information with a given facial database to find a match, which helps identify a face. The proposed system can assist in face recognition. The developed system records several images, processes the recorded images, checks for any match in the database, and returns the result. The developed technology can recognize multiple faces in live recordings.
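
As a rough illustration of the detect, encode, and match workflow this abstract describes (not the authors' system), here is a minimal Python sketch using the open-source face_recognition package; the file names, enrolled identities, and tolerance value are hypothetical.

```python
# Minimal sketch: enroll reference faces, then detect and match faces in a probe
# image. Uses the open-source `face_recognition` package; file names are hypothetical.
import face_recognition

database = {"alice": "alice.jpg", "bob": "bob.jpg"}   # hypothetical enrolled photos

known_names, known_encodings = [], []
for name, path in database.items():
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    if encodings:                                     # keep images where a face was found
        known_names.append(name)
        known_encodings.append(encodings[0])

# A probe frame that may contain several faces (e.g., from a live recording)
probe = face_recognition.load_image_file("frame.jpg")
locations = face_recognition.face_locations(probe)
for encoding in face_recognition.face_encodings(probe, locations):
    matches = face_recognition.compare_faces(known_encodings, encoding, tolerance=0.6)
    matched = [n for n, ok in zip(known_names, matches) if ok]
    print(matched or ["unknown"])
```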

Facial Data Visualization for Improved Deep Learning Based Emotion Recognition

  • Lee, Seung Ho
    • Journal of Information Science Theory and Practice
    • /
    • Vol. 7 No. 2
    • /
    • pp.32-39
    • /
    • 2019
  • A convolutional neural network (CNN) has been widely used in facial expression recognition (FER) because it can automatically learn discriminative appearance features from an expression image. To make full use of its discriminating capability, this paper suggests a simple but effective method for CNN-based FER. Specifically, instead of an original expression image that contains facial appearance only, the expression image with facial geometry visualization is used as input to the CNN. In this way, geometric and appearance features can be learned simultaneously, making the CNN more discriminative for FER. A simple CNN extension is also presented in this paper, aiming to utilize geometric expression change derived from an expression image sequence. Experimental results on two public datasets (CK+ and MMI) show that the CNN using facial geometry visualization clearly outperforms the conventional CNN using facial appearance only.
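
The idea of feeding a geometry-visualized expression image to a CNN can be sketched as below. This is an illustrative toy example, not the paper's network: the landmark points, the drawing style, and the small PyTorch model are all assumptions.

```python
# Toy sketch: overlay facial landmark geometry on the expression image before feeding
# it to a CNN, so appearance and geometry are learned jointly. Not the paper's model.
import cv2
import numpy as np
import torch
import torch.nn as nn

def visualize_geometry(gray_face, landmarks):
    """Draw landmark points on the face crop (landmarks come from any detector)."""
    vis = gray_face.copy()
    for (x, y) in landmarks:
        cv2.circle(vis, (int(x), int(y)), 1, 255, -1)   # bright dots mark geometry
    return vis

class SmallFERNet(nn.Module):
    def __init__(self, n_classes=7):                     # 7 expression classes, as an example
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)  # for 64x64 input

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Example: a 64x64 face crop with dummy landmarks, batched for the network
face = np.zeros((64, 64), dtype=np.uint8)
landmarks = [(20, 30), (44, 30), (32, 45)]               # hypothetical landmark points
inp = visualize_geometry(face, landmarks).astype(np.float32) / 255.0
logits = SmallFERNet()(torch.from_numpy(inp)[None, None])
print(logits.shape)                                      # torch.Size([1, 7])
```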

가상강의에 적용을 위한 얼굴영상정보를 이용한 개인 인증 방법에 관한 연구 (A Study on the Individual Authentication Using Facial Information For Online Lecture)

  • 김동현;권중장
    • 대한전자공학회:학술대회논문집
    • /
    • 대한전자공학회 2000 Fall General Conference Proceedings (3)
    • /
    • pp.117-120
    • /
    • 2000
  • In this paper, we suggest an authentication system for online lectures that uses facial information and a face recognition algorithm based on facial feature relations. First, the facial area is detected against a complex background using color information. Second, features are extracted from the edge profile. Third, these features are compared with those of the original facial images stored in the database. Experiments show that the proposed system is a useful authentication method for online lectures.

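The two detection steps described above (skin-color segmentation, then edge-profile features) can be sketched roughly as follows. This is not the authors' code: the color thresholds, crop size, template file, and matching threshold are common heuristics chosen for illustration.

```python
# Sketch of the detection steps only (not the authors' code): skin-color segmentation
# in YCrCb to locate the face, then row/column edge-projection profiles as features.
import cv2
import numpy as np

img = cv2.imread("student.jpg")                               # hypothetical input frame
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))      # common CrCb skin range

contours, _ = cv2.findContours(skin, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
face = cv2.resize(cv2.cvtColor(img[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY), (64, 64))

# Edge-profile features: sums of edge magnitude along rows and columns of the face box
edges = cv2.Canny(face, 100, 200)
profile = np.concatenate([edges.sum(axis=0), edges.sum(axis=1)]).astype(np.float32)

# Authentication could then compare the profile against an enrolled template
enrolled = np.load("enrolled_profile.npy")                    # hypothetical template
print("match" if np.linalg.norm(profile - enrolled) < 1e4 else "no match")
```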

인간-로봇 상호작용을 위한 자세가 변하는 사용자 얼굴검출 및 얼굴요소 위치추정 (Face and Facial Feature Detection under Pose Variation of User Face for Human-Robot Interaction)

  • 박성기;박민용;이태근
    • 제어로봇시스템학회논문지
    • /
    • Vol. 11 No. 1
    • /
    • pp.50-57
    • /
    • 2005
  • We present a simple and effective method for detecting the face and facial features under pose variation of the user's face in a complex background for human-robot interaction. Our approach is a flexible method that can be applied to both color and gray facial images and is also feasible for detecting facial features in quasi real-time. Based on the intensity characteristics of the neighborhood of facial features, a new directional template for facial features is defined. By applying this template to the input facial image, a novel edge-like blob map (EBM) with multiple intensity strengths is constructed. Regardless of the color information of the input image, using this map and conditions on facial characteristics, we show that the locations of the face and its features (i.e., two eyes and a mouth) can be successfully estimated. Without information about the facial area boundary, the final candidate face region is determined by both the obtained locations of the facial features and weighted correlation values with standard facial templates. Experimental results on many color images and well-known gray-level face database images confirm the usefulness of the proposed algorithm.
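
The paper's directional template and edge-like blob map (EBM) are specific constructions; as a loose stand-in that conveys the general idea of highlighting dark blob-like facial features against brighter skin, here is a small center-surround filtering sketch in Python. The kernel size and thresholds are illustrative guesses, not the paper's definitions.

```python
# Loose stand-in for the paper's directional template / edge-like blob map (EBM):
# a zero-sum center-surround kernel responds strongly where a dark blob (eye, mouth)
# is surrounded by brighter skin. Kernel size and thresholds are illustrative.
import cv2
import numpy as np

gray = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)          # hypothetical input

kernel = np.ones((9, 9), dtype=np.float32)
kernel[3:6, 3:6] = -8.0                                      # negative center, sum = 0
kernel /= np.abs(kernel).sum()

blob_map = cv2.filter2D(gray.astype(np.float32), -1, kernel)
blob_map = cv2.normalize(blob_map, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# Candidate eye/mouth positions: centroids of the strongest responses
_, strong = cv2.threshold(blob_map, 200, 255, cv2.THRESH_BINARY)
n, _, _, centroids = cv2.connectedComponentsWithStats(strong)
print(centroids[1:n])                                        # label 0 is background
```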

Hybrid Facial Representations for Emotion Recognition

  • Yun, Woo-Han;Kim, DoHyung;Park, Chankyu;Kim, Jaehong
    • ETRI Journal
    • /
    • Vol. 35 No. 6
    • /
    • pp.1021-1028
    • /
    • 2013
  • Automatic facial expression recognition is a widely studied problem in computer vision and human-robot interaction. There has been a range of studies for representing facial descriptors for facial expression recognition. Some prominent descriptors were presented in the first facial expression recognition and analysis challenge (FERA2011). In that competition, the Local Gabor Binary Pattern Histogram Sequence descriptor showed the most powerful description capability. In this paper, we introduce hybrid facial representations for facial expression recognition, which have more powerful description capability with lower dimensionality. Our descriptors consist of a block-based descriptor and a pixel-based descriptor. The block-based descriptor represents the micro-orientation and micro-geometric structure information. The pixel-based descriptor represents texture information. We validate our descriptors on two public databases, and the results show that our descriptors perform well with a relatively low dimensionality.
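
A hybrid block-based plus pixel-based descriptor in the spirit of this abstract can be sketched by concatenating a HOG feature vector with a uniform-LBP histogram. These are stand-in descriptors, not the authors' exact features, and the parameter values are guesses.

```python
# Sketch of a "hybrid" descriptor: a block-based orientation descriptor (HOG)
# concatenated with a pixel-based texture descriptor (uniform LBP histogram).
import numpy as np
from skimage import io
from skimage.feature import hog, local_binary_pattern

face = io.imread("expression.png", as_gray=True)             # hypothetical face crop

# Block-based part: micro-orientation / micro-geometric structure
hog_feat = hog(face, orientations=8, pixels_per_cell=(16, 16),
               cells_per_block=(2, 2), feature_vector=True)

# Pixel-based part: texture via uniform LBP, summarized as a histogram
lbp = local_binary_pattern(face, P=8, R=1, method="uniform")
lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

descriptor = np.concatenate([hog_feat, lbp_hist])
print(descriptor.shape)
```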

웹 응용을 위한 MPEG-4 얼굴 애니메이션 파라미터 추출 및 구현 (Extraction and Implementation of MPEG-4 Facial Animation Parameter for Web Application)

  • 박경숙;허영남;김응곤
    • 한국정보통신학회논문지
    • /
    • Vol. 6 No. 8
    • /
    • pp.1310-1318
    • /
    • 2002
  • In this study, we developed a 3D face modeler and animator that generates a 3D model from frontal and profile images, without the expensive 3D scanners or cameras used by existing methods. The system is independent of any particular platform or software: a 3D face model can be animated by connecting to an animation server on the web, and it was implemented using the Java 3D API. The face modeler extracts MPEG-4 FDP (Facial Definition Parameter) feature points from the input images and deforms a generic face model according to those feature points to generate a 3D face model. The animator animates and renders the face model according to FAPs (Facial Animation Parameters). The system can be used to create avatars on the web.
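
The modeler's deformation step (moving a generic face mesh according to extracted FDP feature points) can be illustrated with a very simplified distance-weighted scheme. The original system was implemented with the Java 3D API, whereas this sketch uses Python/NumPy, and the coordinates and Gaussian weighting are assumptions.

```python
# Very simplified sketch of the deformation step: move each vertex of a generic face
# mesh by a distance-weighted average of the displacements of nearby FDP feature
# points, whose target positions would come from the frontal/profile images.
import numpy as np

def deform(vertices, fdp_generic, fdp_target, sigma=0.05):
    """vertices: (N,3) generic mesh; fdp_generic/fdp_target: (K,3) feature points."""
    disp = fdp_target - fdp_generic                               # (K,3) displacements
    # Gaussian weights: features closer to a vertex influence it more strongly
    d2 = ((vertices[:, None, :] - fdp_generic[None, :, :]) ** 2).sum(-1)   # (N,K)
    w = np.exp(-d2 / (2 * sigma ** 2))
    w /= w.sum(axis=1, keepdims=True) + 1e-12
    return vertices + w @ disp

# Tiny example with made-up coordinates
generic = np.random.rand(100, 3)            # hypothetical generic face mesh vertices
fdp_g = generic[:5]                         # pretend the first 5 vertices are FDP points
fdp_t = fdp_g + np.array([0.0, 0.02, 0.0])  # measured feature positions (hypothetical)
print(deform(generic, fdp_g, fdp_t).shape)
```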

근육 모델 기반 3D 얼굴 표정 생성 시스템 설계 및 구현 (A Design and Implementation of 3D Facial Expressions Production System based on Muscle Model)

  • 이혜정;정석태
    • 한국정보통신학회논문지
    • /
    • Vol. 16 No. 5
    • /
    • pp.932-938
    • /
    • 2012
  • Facial expressions carry important meaning in mutual communication; they can convey countless inner human emotions beyond what the many languages people use can express. This paper proposes a muscle-model-based 3D facial expression production system for generating facial expressions easily and naturally. To produce expressions on a 3D face model, muscles required for natural expression generation are added to Waters' muscle model. Centered on the key feature elements of expression (eyebrows, eyes, nose, mouth, and cheeks), facial muscles and muscle vectors are used to group anatomically connected facial muscle movements, thereby simplifying and reconstructing the AUs (Action Units) that form the basic units of facial expression change, so that easy and natural facial expressions can be generated.
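
A Waters-style linear muscle can be sketched as a pull toward the muscle attachment point that fades with angle and distance. The constants, falloff functions, and mesh below are illustrative assumptions, not the paper's implementation.

```python
# Loose sketch of a Waters-style linear facial muscle: vertices inside the muscle's
# cone of influence are pulled toward the attachment point, with the pull fading with
# angle and distance. Constants and falloff shapes are illustrative only.
import numpy as np

def apply_muscle(vertices, attach, insert, contraction=0.3,
                 max_angle=np.deg2rad(40), max_dist=1.0):
    v = vertices - attach                          # vectors from attachment to vertices
    axis = (insert - attach) / np.linalg.norm(insert - attach)
    dist = np.linalg.norm(v, axis=1) + 1e-12
    angle = np.arccos(np.clip((v @ axis) / dist, -1.0, 1.0))

    inside = (angle < max_angle) & (dist < max_dist)
    angular = np.cos(angle * (np.pi / 2) / max_angle)     # 1 on axis, 0 at cone edge
    radial = np.cos(dist * (np.pi / 2) / max_dist)        # 1 at attachment, 0 at range
    pull = contraction * angular * radial * inside

    # Move each affected vertex a fraction `pull` of the way toward the attachment point
    return vertices - v * pull[:, None]

mesh = np.random.rand(200, 3)                      # hypothetical face mesh vertices
moved = apply_muscle(mesh, attach=np.array([0.5, 0.9, 0.1]),
                     insert=np.array([0.5, 0.5, 0.1]))
print(np.abs(moved - mesh).max())
```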

A Probabilistic Network for Facial Feature Verification

  • Choi, Kyoung-Ho;Yoo, Jae-Joon;Hwang, Tae-Hyun;Park, Jong-Hyun;Lee, Jong-Hoon
    • ETRI Journal
    • /
    • Vol. 25 No. 2
    • /
    • pp.140-143
    • /
    • 2003
  • In this paper, we present a probabilistic approach to determining whether extracted facial features from a video sequence are appropriate for creating a 3D face model. In our approach, the distance between two feature points selected from the MPEG-4 facial object is defined as a random variable for each node of a probability network. To avoid generating an unnatural or non-realistic 3D face model, automatically extracted 2D facial features from a video sequence are fed into the proposed probabilistic network before a corresponding 3D face model is built. Simulation results show that the proposed probabilistic network can be used as a quality control agent to verify the correctness of extracted facial features.

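The verification idea (modeling each selected inter-feature distance as a random variable and rejecting implausible feature sets) can be sketched with independent Gaussians as below. The training data, distances, and threshold are hypothetical, and the paper's actual probabilistic network is more elaborate.

```python
# Minimal sketch of the verification idea: treat each inter-feature distance as a
# Gaussian random variable learned from valid examples, and flag an extracted
# feature set whose distances are too unlikely. Not the authors' network.
import numpy as np
from scipy.stats import norm

# Hypothetical training data: rows = valid feature sets, columns = selected pairwise
# distances (e.g., eye-to-eye, eye-to-mouth) normalized by face width.
train = np.random.normal(loc=[0.40, 0.55, 0.55], scale=0.03, size=(500, 3))
mu, sigma = train.mean(axis=0), train.std(axis=0)

def verify(distances, threshold=-12.0):
    """Accept the feature set if its total log-likelihood is above a threshold."""
    loglik = norm.logpdf(distances, loc=mu, scale=sigma).sum()
    return loglik > threshold, loglik

print(verify(np.array([0.41, 0.56, 0.54])))   # plausible geometry -> accept
print(verify(np.array([0.10, 0.90, 0.20])))   # implausible geometry -> reject
```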

Fear and Surprise Facial Recognition Algorithm for Dangerous Situation Recognition

  • Kwak, NaeJoung;Ryu, SungPil;Hwang, IlYoung
    • International Journal of Internet, Broadcasting and Communication
    • /
    • Vol. 7 No. 2
    • /
    • pp.51-55
    • /
    • 2015
  • This paper proposes an algorithm for dangerous-situation recognition using facial expressions. Among the various human emotional expressions, the proposed method recognizes surprise and fear in order to identify dangerous situations. The method first extracts the facial region from the input using a Haar-like feature technique, then detects the eye and lip regions from the extracted face. Uniform LBP is then applied to each region to classify the facial expression and recognize the dangerous situation. The proposed method is evaluated on MUCT database images and webcam input; it produces good facial expression results, discriminates dangerous situations well, and achieves an average recognition rate of 91.05%.
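
A rough sketch of the described pipeline (Haar-like face detection, eye and lip regions, uniform LBP per region) is shown below. It is not the authors' implementation: the region coordinates are crude guesses, and the final surprise/fear classifier is only indicated in a comment.

```python
# Rough sketch: Haar-cascade face detection, crude eye/lip regions from the face box,
# uniform LBP histograms per region, and a (hypothetical) classifier for surprise/fear.
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def lbp_hist(region):
    lbp = local_binary_pattern(region, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return hist

def region_features(gray):
    faces = face_cascade.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    face = cv2.resize(gray[y:y + h, x:x + w], (96, 96))
    eyes = face[20:45, :]            # crude eye band within the face box
    lips = face[65:90, 25:70]        # crude mouth region within the face box
    return np.concatenate([lbp_hist(eyes), lbp_hist(lips)])

# A classifier trained on labeled expression images (hypothetical here, e.g. an sklearn
# SVM) would take this vector; a surprise/fear prediction marks a dangerous situation.
frame = cv2.imread("webcam_frame.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input
feat = region_features(frame)
if feat is not None:
    print("feature vector length:", feat.shape[0])            # 20 = 2 regions x 10 bins
```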

얼굴 표정인식을 이용한 위험상황 인지 (Facial Expression Algorithm For Risk Situation Recognition)

  • 곽내정;송특섭
    • 한국정보통신학회:학술대회논문집
    • /
    • 한국정보통신학회 2014 Fall Conference
    • /
    • pp.197-200
    • /
    • 2014
  • This paper proposes a dangerous-situation recognition algorithm based on facial expression recognition. Among the various human emotional expressions, the proposed method recognizes surprise and fear, the expressions used to identify a dangerous situation. The method first extracts the facial region and then extracts the eye and lip regions from the detected face. Uniform LBP is applied to each region to classify the expression and recognize the dangerous situation. The method was evaluated on Cohn-Kanade database images; the results show good expression recognition performance, and dangerous situations were discriminated well.
