• Title/Summary/Keyword: 얼굴주요위치 (key facial locations)

Search results: 31

Face Recognition Robust to Brightness, Contrast, Scale, Rotation and Translation (밝기, 명암도, 크기, 회전, 위치 변화에 강인한 얼굴 인식)

  • 이형지;정재호
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.40 no.6
    • /
    • pp.149-156
    • /
    • 2003
  • This paper proposes a face recognition method based on a modified Otsu binarization, Hu moments, and linear discriminant analysis (LDA). The proposed method is robust to brightness, contrast, scale, rotation, and translation changes. The modified Otsu binarization produces binary images that are invariant to brightness and contrast changes. From the edge image and the multi-level binary images obtained by this thresholding method, we compute a 17-dimensional Hu moment vector and then extract a feature vector using the LDA algorithm. In particular, the use of Hu moments makes our face recognition system robust to scale, rotation, and translation changes. Experimental results on the Olivetti Research Laboratory (ORL) and AR databases showed that our method outperformed the conventional, well-known principal component analysis (PCA) and the combined PCA+LDA method in almost all cases under brightness, contrast, scale, rotation, and translation changes.
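As a rough illustration of two of the building blocks above, the sketch below implements the standard (unmodified) Otsu threshold and the first of the seven Hu moments in plain NumPy; the function names are ours, and the paper's modified Otsu and full 17-dimensional feature are not reproduced here.

```python
import numpy as np

def otsu_threshold(img):
    """Return the Otsu threshold of a grayscale uint8 image:
    the intensity level that maximizes between-class variance."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    total = hist.sum()
    cum_w = np.cumsum(hist)                   # class-0 pixel counts
    cum_m = np.cumsum(hist * np.arange(256))  # class-0 intensity mass
    best_t, best_var = 0, -1.0
    for t in range(255):
        w0 = cum_w[t] / total
        w1 = 1.0 - w0
        if w0 == 0 or w1 == 0:
            continue
        m0 = cum_m[t] / cum_w[t]
        m1 = (cum_m[-1] - cum_m[t]) / (total - cum_w[t])
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def hu_moment_1(binary):
    """First Hu moment (eta20 + eta02) of a binary shape: invariant
    to translation and rotation, and (up to discretization) scale."""
    ys, xs = np.nonzero(binary)
    m00 = float(len(xs))
    cx, cy = xs.mean(), ys.mean()
    mu20 = ((xs - cx) ** 2).sum()
    mu02 = ((ys - cy) ** 2).sum()
    # normalized central moments: eta_pq = mu_pq / m00^((p+q)/2 + 1)
    return (mu20 + mu02) / m00 ** 2
```

For real use, OpenCV's `cv2.threshold(..., cv2.THRESH_OTSU)` and `cv2.HuMoments` cover the same ground; the point here is only to make the invariance claim concrete.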

Robust Face Feature Extraction for various Pose and Expression (자세와 표정변화에 강인한 얼굴 특징 검출)

  • Jung Jae-Yoon;Jung Jin-Kwon;Cho Sung-Won;Kim Jae-Min
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2005.11a
    • /
    • pp.294-298
    • /
    • 2005
  • Among biometric technologies, face recognition has the advantage that, unlike fingerprint, palm-print, or iris recognition, a person can be identified through a remotely installed camera without any contact with the body. However, face recognition is extremely sensitive to environmental changes such as variations in illumination and expression, so accurate extraction of the facial feature regions must come first. The major facial features (eyes, nose, mouth, and eyebrows) can take on various positions, sizes, and shapes depending on pose, expression, and individual appearance. In this work, to extract these varying feature regions and feature points accurately, we classify faces into nine directions, further subclassify the feature regions within each direction according to their statistical shape, and generate a standard template for each shape, which is then used for detection.

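The template-based pose classification described above can be sketched with normalized cross-correlation against a bank of standard templates; `classify_pose` and the pose labels are illustrative names, not the paper's implementation.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def classify_pose(face, templates):
    """Return the label of the pose template most similar to `face`.
    `templates` maps pose labels (e.g. one of nine face directions)
    to standard template images of the same size as `face`."""
    return max(templates, key=lambda k: ncc(face, templates[k]))
```

A second, shape-based subclassification per direction would then select among templates within the winning direction in the same way.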

A Study on The Extraction of the Region and The Recognition of The State of Eyes (눈영역 추출과 개폐상태 인식에 관한 연구)

  • 김도형;이학만;박재현;차의영
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2001.04b
    • /
    • pp.532-534
    • /
    • 2001
  • This paper presents a method for extracting the eye locations from face images with varied backgrounds and for recognizing whether the eyes are open or closed. Among the facial components, the eyes are the primary feature in face recognition, and recognition of the eyes' open/closed state is also useful for sensing physical and physiological signals and for expression recognition. We first apply a preprocessing step that emphasizes candidate regions, and extract eye candidate regions by template matching. The initial candidate regions are merged using a merging rule, and the final eye candidate region is selected based on geometric prior knowledge and the matching value. The detected candidate region then undergoes region preprocessing and feature-point computation, and the open/closed state is finally determined by a discriminant function. The proposed method achieved high accuracy in both eye localization and open/closed recognition, and can be applied to driver drowsiness detection, patient monitoring, and other applications.

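The final open/closed decision from feature values and a discriminant rule might look like the toy sketch below; the openness measure and both thresholds are our own simplification, not the paper's discriminant function.

```python
import numpy as np

def eye_openness(eye_patch, dark_thresh=80):
    """Crude openness measure: height/width ratio of the dark
    (pupil/iris) region inside a grayscale eye patch. An open eye
    yields a taller dark region than a closed one."""
    ys, xs = np.nonzero(eye_patch < dark_thresh)
    if len(ys) == 0:
        return 0.0
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    return h / w

def is_open(eye_patch, ratio_thresh=0.35):
    """Binary open/closed decision by thresholding the openness
    ratio (threshold chosen here purely for illustration)."""
    return eye_openness(eye_patch) >= ratio_thresh
```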

Face Tracking for Multi-view Display System (다시점 영상 시스템을 위한 얼굴 추적)

  • Han, Chung-Shin;Jang, Se-Hoon;Bae, Jin-Woo;Yoo, Ji-Sang
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.30 no.2C
    • /
    • pp.16-24
    • /
    • 2005
  • In this paper, we propose a face tracking algorithm for a viewpoint-adaptive multi-view synthesis system. The original scene captured by a depth camera contains a texture image and an 8-bit gray-scale depth map. From this original image, multi-view images corresponding to the viewer's position can be synthesized using geometric transformations such as rotation and translation. The proposed face tracking technique provides a motion parallax cue across different viewpoints and view angles. In the proposed algorithm, the viewer's dominant face, initially established from the camera using statistical characteristics of face colors and deformable templates, is tracked. As a result, we can provide a motion parallax cue by detecting and tracking the viewer's dominant face area even against a heterogeneous background, and can successfully display the synthesized sequences.
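Synthesizing a horizontally shifted viewpoint from a texture-plus-depth input, of the kind this system consumes, can be sketched as a simple forward warp; the disparity model, baseline, focal length, and depth range below are illustrative assumptions, not the paper's renderer.

```python
import numpy as np

def synthesize_view(texture, depth, baseline=0.05, focal=500.0):
    """Forward-warp a single-channel texture image to a horizontally
    shifted viewpoint with a toy disparity model:
    shift = baseline * focal / metric_depth.
    `depth` is an 8-bit map remapped here to metric depth in [1, 5] m.
    Returns the warped image and a mask of filled pixels (holes from
    disocclusions stay unfilled)."""
    h, w = depth.shape
    z = 1.0 + 4.0 * depth.astype(float) / 255.0   # 8-bit -> [1, 5] m
    disparity = np.round(baseline * focal / z).astype(int)
    out = np.zeros_like(texture)
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            nx = x + disparity[y, x]
            if 0 <= nx < w:
                out[y, nx] = texture[y, x]
                filled[y, nx] = True
    return out, filled
```

A real system would warp in depth order and inpaint the unfilled disocclusion holes; this sketch only shows where the parallax shift comes from.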

Synthesizing Faces of Animation Characters Using a 3D Model (3차원 모델을 사용한 애니메이션 캐릭터 얼굴의 합성)

  • Jang, Seok-Woo;Kim, Gye-Young
    • Journal of the Korea Society of Computer and Information
    • /
    • v.17 no.8
    • /
    • pp.31-40
    • /
    • 2012
  • In this paper, we propose a method of synthesizing the faces of a user and an animation character using a 3D face model. The suggested method first receives two orthogonal 2D face images and extracts the major facial features using a template snake. It then generates a user-customized 3D face model by adjusting a generalized face model with the extracted facial features and by mapping texture maps obtained from the two input images onto the 3D face model. Finally, it generates a user-customized animation character by blending the generated 3D model into an animation character, reflecting the character's position, size, facial expression, and rotation. Experimental results verify the performance of the suggested algorithm. We expect our method to be useful in various applications such as games and animated movies.

3D Facial Animation with Head Motion Estimation and Facial Expression Cloning (얼굴 모션 추정과 표정 복제에 의한 3차원 얼굴 애니메이션)

  • Kwon, Oh-Ryun;Chun, Jun-Chul
    • The KIPS Transactions:PartB
    • /
    • v.14B no.4
    • /
    • pp.311-320
    • /
    • 2007
  • This paper presents a vision-based 3D facial expression animation technique and system that provide robust 3D head pose estimation and real-time facial expression control. Much research on 3D face animation has addressed facial expression control itself rather than 3D head motion tracking; however, head motion tracking is one of the critical issues to be solved in developing realistic facial animation. In this research, we developed an integrated animation system that performs 3D head motion tracking and facial expression control at the same time. The proposed system consists of three major phases: face detection, 3D head motion tracking, and facial expression control. For face detection, using a non-parametric HT skin color model and template matching, we can efficiently detect the facial region in a video frame. For 3D head motion tracking, we exploit a cylindrical head model that is projected onto the initial head motion template. Given an initial reference template of the face image and the corresponding head motion, the cylindrical head model is created and the full head motion is traced based on the optical flow method. For facial expression cloning we utilize a feature-based method: the major facial feature points are detected from the geometric information of the face with template matching and traced by optical flow. Since the locations of the varying feature points combine head motion and facial expression information, the animation parameters that describe the variation of the facial features are acquired from the geometrically transformed frontal head pose image. Finally, facial expression cloning is done by two fitting processes: the control points of the 3D model are varied by applying the animation parameters to the face model, and the non-feature points around the control points are changed using a Radial Basis Function (RBF). The experiments show that the developed vision-based animation system can create realistic facial animation, with robust head pose estimation and facial variation, from the input video.
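The RBF step, in which non-feature points follow the displaced control points, can be sketched as standard Gaussian RBF interpolation of displacements; the kernel, its width, and the regularization below are our assumptions, not taken from the paper.

```python
import numpy as np

def rbf_deform(points, controls, displacements, eps=1e-9):
    """Propagate control-point displacements to arbitrary points with
    Gaussian radial basis functions, as in feature-driven mesh
    deformation: solve Phi(controls) w = displacements for the weights
    w, then evaluate Phi(points) w at the query points."""
    def phi(r):
        return np.exp(-r ** 2)   # Gaussian kernel, unit width

    # pairwise control-control distances -> interpolation matrix
    d = np.linalg.norm(controls[:, None, :] - controls[None, :, :], axis=2)
    A = phi(d) + eps * np.eye(len(controls))   # tiny ridge for stability
    w = np.linalg.solve(A, displacements)

    # evaluate the interpolant at the query points
    d2 = np.linalg.norm(points[:, None, :] - controls[None, :, :], axis=2)
    return phi(d2) @ w
```

By construction the interpolant reproduces the given displacements at the control points themselves, which is what lets the surrounding non-feature vertices deform smoothly and consistently.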

Drone position control using face recognition (얼굴인식을 이용한 드론의 위치제어 구현)

  • Kwon, Gi-Hwan;Zzao, Chao-Ran;Gwon, Ji-Seung;Kim, Su-Yeon;Jung, Soon-Ho
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2020.05a
    • /
    • pp.84-86
    • /
    • 2020
  • Industries that use drones are attracting much attention. Research on swarm flight can raise the probability of success for key tasks in industrial and military fields. In this paper, we implement position control based on face recognition to control drone swarm flight in radio shadow areas. This capability is expected to enable effective drone swarm flight and to be applicable in fields that demand precise control. We plan to improve it with additional control methods in future work.

Facial Features Detection for Facial Caricaturing System (캐리커처 생성 시스템을 위한 얼굴 특징 추출 연구)

  • Lee, Ok-Kyoung;Park, Yeun-Chool;Oh, Hae-Seok
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2000.10b
    • /
    • pp.1329-1332
    • /
    • 2000
  • A caricature generation system segments an input portrait photograph to extract its features (eyes, nose, mouth, eyebrows) and then retrieves and maps caricature images whose feature information is similar to the extracted features. The facial feature extraction for the caricature generation system uses color and shape information. The aim of this paper is to segment portrait photographs for caricature generation and to extract the feature information of each partial region. Vertical and horizontal histograms play a key role in obtaining the feature information for the facial components. In addition, position information within the photograph is used to verify and extract the facial components inside the face, so that accurate information can be used.

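Vertical and horizontal projection histograms of the kind used above for locating facial components can be sketched in a few lines; the peak-row heuristic is an illustrative simplification of how such profiles localize, say, the eye line.

```python
import numpy as np

def projection_profiles(binary):
    """Row sums (horizontal profile) and column sums (vertical profile)
    of a binary feature map; peaks and valleys in these profiles
    localize dark facial components such as the eyes and mouth."""
    return binary.sum(axis=1), binary.sum(axis=0)

def peak_row(binary):
    """Row with the largest horizontal projection, e.g. the eye line in
    an upright face where the two eyes form a dark horizontal band."""
    rows, _ = projection_profiles(binary)
    return int(np.argmax(rows))
```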

An Integrated Face Detection and Recognition System (통합된 시스템에서의 얼굴검출과 인식기법)

  • 박동희;이규봉;이유홍;나상동;배철수
    • Proceedings of the Korean Institute of Information and Commucation Sciences Conference
    • /
    • 2003.05a
    • /
    • pp.165-170
    • /
    • 2003
  • This paper presents an integrated approach to unconstrained face recognition in arbitrary scenes. The front end of the system comprises a scale- and pose-tolerant face detector. Scale normalization is achieved through a novel combination of skin color segmentation and a log-polar mapping procedure. Principal component analysis is used with the multi-view approach proposed in [10] to handle pose variations. For a given color input image, the detector encloses a face in a complex scene within a circular boundary and indicates the position of the nose. Next, for recognition, a radial grid mapping centered on the nose yields a feature vector within the circular boundary. As the width of the color-segmented region provides an estimated size for the face, the extracted feature vector is scale-normalized by this estimate. The feature vector is input to a trained neural network classifier for face identification. The system was evaluated on a database of the faces of 20 persons with varying scale and pose against different complex backgrounds. The performance of the face recognizer was quite good except for sensitivity to small-scale face images. The integrated system achieved average recognition rates of 87% to 92%.

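A log-polar sampling of the kind used above for scale normalization can be sketched as follows; the grid sizes and the nose-centred sampling are illustrative assumptions. The useful property is that, with geometrically spaced radii, a change of face scale becomes a shift along the radial axis, which is what makes the normalization workable.

```python
import numpy as np

def log_polar_sample(img, center, n_rings=8, n_angles=16, r_max=None):
    """Sample a grayscale image on a log-polar grid centred on `center`
    (e.g. the detected nose tip). Radii grow geometrically from 1 up to
    r_max; samples falling outside the image are left as zero."""
    cy, cx = center
    h, w = img.shape
    if r_max is None:
        r_max = min(h, w) / 2.0
    radii = r_max ** (np.arange(1, n_rings + 1) / n_rings)  # geometric spacing
    angles = 2 * np.pi * np.arange(n_angles) / n_angles
    grid = np.zeros((n_rings, n_angles))
    for i, r in enumerate(radii):
        for j, a in enumerate(angles):
            y = int(round(cy + r * np.sin(a)))
            x = int(round(cx + r * np.cos(a)))
            if 0 <= y < h and 0 <= x < w:
                grid[i, j] = img[y, x]
    return grid
```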

An Integrated Face Detection and Recognition System (통합된 시스템에서의 얼굴검출과 인식기법)

  • 박동희;배철수
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.7 no.6
    • /
    • pp.1312-1317
    • /
    • 2003
  • This paper presents an integrated approach to unconstrained face recognition in arbitrary scenes. The front end of the system comprises a scale- and pose-tolerant face detector. Scale normalization is achieved through a novel combination of skin color segmentation and a log-polar mapping procedure. Principal component analysis is used with the multi-view approach proposed in [10] to handle pose variations. For a given color input image, the detector encloses a face in a complex scene within a circular boundary and indicates the position of the nose. Next, for recognition, a radial grid mapping centered on the nose yields a feature vector within the circular boundary. As the width of the color-segmented region provides an estimated size for the face, the extracted feature vector is scale-normalized by this estimate. The feature vector is input to a trained neural network classifier for face identification. The system was evaluated on a database of the faces of 20 persons with varying scale and pose against different complex backgrounds. The performance of the face recognizer was quite good except for sensitivity to small-scale face images. The integrated system achieved average recognition rates of 87% to 92%.