• Title/Summary/Keyword: Face Region


Detection of Face Direction by Using Inter-Frame Difference

  • Jang, Bongseog;Bae, Sang-Hyun
    • Journal of Integrative Natural Science
    • /
    • v.9 no.2
    • /
    • pp.155-160
    • /
    • 2016
  • Applying image processing to education, systems have been developed that photograph a learner's face, detect expression and movement from video, and estimate the learner's degree of concentration. For a single learner, such a system estimates concentration from the learner's gaze direction and eye state. With multiple learners, the concentration level of every learner in the classroom must be measured, but assigning one camera per learner is inefficient. In this paper, the position of each face region is estimated from video of learners in class by using inter-frame differences along the direction of motion, and a system is proposed that detects face direction through face-part detection by template matching. From the inter-frame difference result on the first image of the video, frontal faces are detected by the Viola-Jones method. The direction of motion arising in each face region is then estimated from the displacement, and the face region is tracked. Face parts are detected during tracking, and finally the direction of the face is estimated from the results of face tracking and face-part detection.
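
The inter-frame difference step described above can be sketched as follows. This is a minimal illustration on toy grayscale frames (nested lists), not the paper's implementation; the function names and the threshold value are assumptions.

```python
def frame_difference(prev, curr, thresh=30):
    """Binary motion mask: 1 where |curr - prev| exceeds thresh."""
    return [[1 if abs(c - p) > thresh else 0
             for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]

def motion_bbox(mask):
    """Bounding box (top, left, bottom, right) of motion pixels, or None."""
    coords = [(r, c) for r, row in enumerate(mask)
              for c, v in enumerate(row) if v]
    if not coords:
        return None
    rows = [r for r, _ in coords]
    cols = [c for _, c in coords]
    return (min(rows), min(cols), max(rows), max(cols))

prev = [[10] * 6 for _ in range(6)]
curr = [row[:] for row in prev]
for r in range(2, 5):          # simulate a face moving into rows 2-4, cols 1-4
    for c in range(1, 5):
        curr[r][c] = 200

print(motion_bbox(frame_difference(prev, curr)))   # -> (2, 1, 4, 4)
```

In the paper's pipeline, such a motion box would seed Viola-Jones frontal-face detection and subsequent tracking.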

Simply Separation of Head and Face Region and Extraction of Facial Features for Image Security (영상보안을 위한 머리와 얼굴의 간단한 영역 분리 및 얼굴 특징 추출)

  • Jeon, Young-Cheol;Lee, Keon-Ik;Kim, Kang
    • Journal of the Korea Society of Computer and Information
    • /
    • v.13 no.5
    • /
    • pp.125-133
    • /
    • 2008
  • As society develops, the importance of safety for individuals and facilities in public places continues to grow. Not only areas that traditionally require security or crime prevention, such as parking lots, banks, and factories, but also private homes and general institutions are increasing their investment in guarding and security. This study proposes facial-feature extraction and a simple color-transform-based method for separating the face region and head region, which are important for face recognition. First, the head region of the input image is segmented using the K channel of the CMYK color space, and the face region is then segmented using a color transform of the Y channel of the YIQ representation of the head image. Next, facial features are extracted by labeling after a Log operation on the head image. Cleanly separated head and face regions make it easy to classify head and face shapes and to locate features. With the proposed algorithm, security facilities can effectively monitor or recognize people.
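
The two color transforms this abstract relies on can be sketched as below: the YIQ luma (Y) used to separate the face region, and the CMYK black (K) component whose high values flag dark pixels such as hair. The pixel values are illustrative, not from the paper.

```python
def yiq_y(r, g, b):
    """Luma (Y) of the YIQ model, used here to separate the face region."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def cmyk_k(r, g, b):
    """K (black) component of CMYK; large K flags dark pixels such as hair."""
    return 1.0 - max(r, g, b) / 255.0

skin = (224, 172, 140)   # illustrative skin-like pixel
hair = (30, 25, 22)      # illustrative dark-hair pixel
print(round(yiq_y(*skin)), round(cmyk_k(*hair), 2))   # -> 184 0.88
```

Thresholding K first isolates the (dark) head region; thresholding Y within it then separates the brighter face area, as in the paper's two-stage segmentation.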


Automatic Camera Pose Determination from a Single Face Image

  • Wei, Li;Lee, Eung-Joo;Ok, Soo-Yol;Bae, Sung-Ho;Lee, Suk-Hwan;Choo, Young-Yeol;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society
    • /
    • v.10 no.12
    • /
    • pp.1566-1576
    • /
    • 2007
  • Camera pose information recovered from a 2D face image is important for synchronizing a virtual 3D face model with the real face, as well as for human-computer interfaces, 3D object estimation, automatic camera control, and other applications. In this paper, we present a camera pose determination algorithm that uses a single 2D face image, exploiting the relationship between the mouth position and the face-region boundary. The algorithm first corrects color bias with a lighting compensation step, then nonlinearly transforms the image into the $YC_bC_r$ color space and uses the distinctive chrominance of faces in this space to detect the face region. For each face candidate, the nearly inverted relationship between the $C_b$ and $C_r$ clusters of the facial features is used to detect the mouth position. The geometric relationship between the mouth position and the face-region boundary then determines the camera rotation angles about both the x-axis and the y-axis, and the relationship between face-region size and camera-face distance determines that distance. Experimental results demonstrate the validity of the algorithm, and the correct determination rate is high enough for practical application.
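
The skin-chrominance face-detection stage can be sketched as a per-pixel $YC_bC_r$ test. The conversion below uses the standard BT.601 full-range coefficients, and the $C_b$/$C_r$ skin ranges are commonly cited illustrative values, not the thresholds from the paper.

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range BT.601 RGB -> YCbCr conversion."""
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin(r, g, b, cb_range=(77, 127), cr_range=(133, 173)):
    """Skin test on chrominance only; ranges are illustrative defaults."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return cb_range[0] <= cb <= cb_range[1] and cr_range[0] <= cr <= cr_range[1]

print(is_skin(224, 172, 140), is_skin(40, 90, 200))   # -> True False
```

Pixels passing the test are grouped into face candidates; the paper then inspects the $C_b$/$C_r$ contrast within each candidate to locate the mouth.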


Face Region Tracking Improvement and Hardware Implementation for AF(Auto Focusing) Using Face to ROI (얼굴을 관심 영역으로 사용하는 자동 초점을 위한 얼굴 영역 추적 향상 방법 및 하드웨어 구현)

  • Jeong, Hyo-Won;Ha, Joo-Young;Han, Hag-Yong;Yang, Hoon-Gee;Kang, Bong-Soon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.14 no.1
    • /
    • pp.89-96
    • /
    • 2010
  • In this paper, we propose a method for improving the face-tracking efficiency of face detection in an AF (auto focusing) system that uses faces as the ROI. A conventional skin-color-based face detection system tracks faces using the ratio of skin pixels in the present frame to the face regions detected in the past frame. This tracking method yields stable regions but poor tracking efficiency. We instead propose a face-tracking method that uses the area of overlap between the face regions detected in the past frame and those in the present frame. The efficiency of the proposed tracking was demonstrated by recording real-time face detection with tracking and examining the movement traces of the detected faces.
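
The overlap-area association described above can be sketched as follows: each face box in the present frame is matched to the past box with which it shares the largest intersection area. Box format and function names are illustrative, not the paper's.

```python
def overlap_area(a, b):
    """Intersection area of boxes given as (left, top, right, bottom)."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(0, w) * max(0, h)

def match_faces(past, present):
    """For each present box, the index of the past box with the largest
    overlap, or None when nothing overlaps (a newly appeared face)."""
    matches = []
    for p in present:
        areas = [overlap_area(p, q) for q in past]
        best = max(range(len(past)), key=lambda i: areas[i]) if past else None
        matches.append(best if best is not None and areas[best] > 0 else None)
    return matches

past = [(0, 0, 10, 10), (20, 20, 30, 30)]
present = [(2, 2, 12, 12), (40, 40, 50, 50)]
print(match_faces(past, present))   # -> [0, None]
```

Unlike the skin-pixel-ratio criterion, this association stays correct even when the face moves several pixels between frames, as long as the boxes still overlap.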

Design and Implementation of Eye-Gaze Estimation Algorithm based on Extraction of Eye Contour and Pupil Region (눈 윤곽선과 눈동자 영역 추출 기반 시선 추정 알고리즘의 설계 및 구현)

  • Yum, Hyosub;Hong, Min;Choi, Yoo-Joo
    • The Journal of Korean Association of Computer Education
    • /
    • v.17 no.2
    • /
    • pp.107-113
    • /
    • 2014
  • In this study, we design and implement an eye-gaze estimation system based on extraction of the eye contour and pupil region. To extract these effectively, face candidate regions are extracted first. For face detection, the YCbCr value range of typical Asian face color was defined through a pre-study of Asian face images. The largest skin-color region is taken as the face candidate, and the eye regions are extracted by applying contour and color-feature analysis to the upper 50% of that region. Each detected eye region is divided into three segments, and the pupil pixels in each segment are counted. The eye gaze is classified into one of three directions (left, center, or right) according to the pupil-pixel counts of the three segments. In experiments using 5,616 images of 20 subjects, gaze was estimated with about 91% accuracy.
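
The three-segment pupil-counting rule can be sketched directly. The binary eye mask below is a toy example; the paper's segmentation and tie-breaking details are not specified, so this is only one plausible reading.

```python
def gaze_direction(eye_mask):
    """eye_mask: binary rows (1 = pupil pixel). Split the columns into
    thirds and return the direction of the third with the most pupil pixels."""
    width = len(eye_mask[0])
    thirds = [0, 0, 0]
    for row in eye_mask:
        for c, v in enumerate(row):
            if v:
                thirds[min(2, c * 3 // width)] += 1
    return ("left", "center", "right")[thirds.index(max(thirds))]

eye = [
    [0, 0, 0, 1, 1, 0, 0, 0, 0],
    [0, 0, 1, 1, 1, 1, 0, 0, 0],
    [0, 0, 0, 1, 1, 0, 0, 0, 0],
]
print(gaze_direction(eye))   # -> center
```

A pupil shifted toward the nasal or temporal third of the eye region moves the maximum count and therefore the reported direction.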


Real-time Face Detection Method using SVM Classifier (SVM 분류기를 이용한 실시간 얼굴 검출 방법)

  • 지형근;이경희;반성범
    • Proceedings of the IEEK Conference
    • /
    • 2003.11a
    • /
    • pp.529-532
    • /
    • 2003
  • In this paper, we describe a new method for detecting faces in real time. We use color, edge, and binary information to detect candidate eye regions in the input image, and then extract the face region using the detected eye pair. Both the eye candidate regions and the face region are verified with Support Vector Machines (SVMs). These verification steps guard against false detections, making fast and reliable face detection possible. Experimental results confirm that the proposed algorithm shows excellent face detection performance.
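
The SVM verification stage amounts to evaluating a trained decision function on features of each candidate region. A minimal linear-SVM sketch is shown below; the weights, bias, and the three-feature vector are illustrative stand-ins, not the paper's trained model.

```python
def svm_verify(features, w, b):
    """Linear SVM decision: accept the candidate when w.x + b > 0.
    w and b stand in for a trained face/non-face model (illustrative)."""
    return sum(wi * xi for wi, xi in zip(w, features)) + b > 0

# toy 3-D feature vector: (skin ratio, edge density, symmetry score)
w, b = (2.0, 1.0, 1.5), -2.5
print(svm_verify((0.9, 0.6, 0.8), w, b),    # face-like candidate
      svm_verify((0.1, 0.9, 0.1), w, b))    # clutter candidate
```

Rejecting candidates whose decision value falls below zero is what protects the pipeline against the false detections mentioned in the abstract.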


Human-Computer Interaction System for the disabled using Recognition of Face Direction (얼굴 주시방향 인식을 이용한 장애자용 의사 전달 시스템)

  • 정상현;문인혁
    • Proceedings of the IEEK Conference
    • /
    • 2001.06d
    • /
    • pp.175-178
    • /
    • 2001
  • This paper proposes a novel human-computer interaction system for the disabled based on recognition of face direction. Face direction is recognized by comparing the centers of gravity of the face region and of facial features such as the eyes and eyebrows. The face region is first selected using color information, and the facial features are then extracted by applying a separation filter to the face region. Recognition of face direction runs at 6.57 frames/sec with a success rate of 92.9%, without any special image-processing hardware. We implement the human-computer interaction system with an on-screen menu, and the experimental results show the validity of the proposed method.
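
The center-of-gravity comparison can be sketched as follows: when the features' centroid sits left or right of the face centroid by more than a margin, the face is judged to be turned. The margin and coordinate conventions are assumptions, not values from the paper.

```python
def centroid(points):
    """Center of gravity of a set of (x, y) pixel coordinates."""
    return (sum(x for x, _ in points) / len(points),
            sum(y for _, y in points) / len(points))

def face_direction(face_pixels, feature_pixels, margin=2.0):
    """Compare the horizontal centers of gravity of the face region and of
    the facial features (eyes/eyebrows); margin is an illustrative threshold."""
    fx, _ = centroid(face_pixels)
    ex, _ = centroid(feature_pixels)
    if ex < fx - margin:
        return "left"
    if ex > fx + margin:
        return "right"
    return "front"

face = [(x, y) for x in range(20) for y in range(20)]   # centroid x = 9.5
eyes = [(4, 8), (8, 8)]                                 # centroid x = 6.0
print(face_direction(face, eyes))   # -> left
```

Mapping the returned direction onto an on-screen menu selection is what turns this geometric test into a communication aid.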


Realtime Face Recognition by Analysis of Feature Information (특징정보 분석을 통한 실시간 얼굴인식)

  • Chung, Jae-Mo;Bae, Hyun;Kim, Sung-Shin
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2001.12a
    • /
    • pp.299-302
    • /
    • 2001
  • Statistical feature extraction and neural networks are proposed for recognizing a human face. In the preprocessing step, a skin-color map normalized with Gaussian functions is used to extract the face candidate region, and the feature information within that region is used to detect the face region. In the recognition step, 120 images of 10 persons were trained with the backpropagation algorithm; the images of each person were captured with various directions, poses, and facial expressions. The inputs to the neural network are geometric feature information and feature information from the eigenface space. Simulation results on the 10 persons show that the proposed method yields high recognition rates.
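
The eigenface features fed to the network are projections of a mean-centered face vector onto eigenface axes. A minimal sketch is below; the tiny 4-pixel "face", mean, and eigenface vectors are illustrative, not from the paper.

```python
def project(face, mean, eigenfaces):
    """Project a flattened face image onto eigenface axes; the resulting
    coefficients feed the classifier network (vectors here are illustrative)."""
    centered = [f - m for f, m in zip(face, mean)]
    return [sum(c * e for c, e in zip(centered, ef)) for ef in eigenfaces]

mean = [1.0, 1.0, 1.0, 1.0]
eigenfaces = [[0.5, 0.5, -0.5, -0.5],
              [0.5, -0.5, 0.5, -0.5]]
face = [5.0, 1.0, 1.0, 1.0]
print(project(face, mean, eigenfaces))   # -> [2.0, 2.0]
```

These low-dimensional coefficients, concatenated with the geometric features, form the input vector trained by backpropagation.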


Realtime Face Recognition by Analysis of Feature Information (특징정보 분석을 통한 실시간 얼굴인식)

  • Chung, Jae-Mo;Bae, Hyun;Kim, Sung-Shin
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.11 no.9
    • /
    • pp.822-826
    • /
    • 2001
  • Statistical feature extraction and neural networks are proposed for recognizing a human face. In the preprocessing step, a skin-color map normalized with Gaussian functions is used to extract the face candidate region, and the feature information within that region is used to detect the face region. In the recognition step, 120 images of 10 persons were trained with the backpropagation algorithm; the images of each person were captured with various directions, poses, and facial expressions. The inputs to the neural network are geometric feature information and feature information from the eigenface space. Simulation results on the 10 persons show that the proposed method yields high recognition rates.


A study of face detection using color component (색상요소를 고려한 얼굴검출에 대한 연구)

  • 이정하;강진석;최연성;김장형
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2002.11a
    • /
    • pp.240-243
    • /
    • 2002
  • In this paper, we propose face-region detection based on the skin-color distribution, together with a facial-feature extraction algorithm, for color still images. To extract the face region, colors are transformed using a general skin-color distribution. Facial features are then extracted by an edge transform. The detection process reduces computation time through a scale-down scan over the segmented region. The method can detect face regions across various facial expressions, skin-color differences, and tilted faces.
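
A scale-down scan can be sketched as a sliding window evaluated at progressively smaller sizes; restricting it to the skin-segmented region (here simplified to a whole image) is what saves computation. Window size, step, and scale factor are illustrative assumptions.

```python
def scan_windows(width, height, win=24, step=8, scale=0.75, min_win=12):
    """Generate (x, y, size) sliding windows at decreasing scales over a
    width x height region; parameters are illustrative, not the paper's."""
    windows = []
    size = win
    while size >= min_win:
        s = int(size)
        for y in range(0, height - s + 1, step):
            for x in range(0, width - s + 1, step):
                windows.append((x, y, s))
        size *= scale   # shrink the window for the next pass
    return windows

print(len(scan_windows(32, 32)))   # -> 17
```

Each window would be classified with the edge-based facial features; scanning only the segmented skin region, rather than the full image, is the source of the reported speed-up.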
