• Title/Summary/Keyword: Face Detecting

Search Results: 194

Face Region Detection Algorithm using Euclidean Distance of Color-Image (칼라 영상에서 유클리디안 거리를 이용한 얼굴영역 검출 알고리즘)

  • Jung, Haing-sup;Lee, Joo-shin
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.2 no.3 / pp.79-86 / 2009
  • This study proposed a method of detecting the facial region by calculating Euclidean distances among skin-color elements and extracting facial features. The proposed algorithm consists of a light-calibration step and a face-detection step. The light-calibration step compensates for changes in illumination. The face-detection step extracts the skin-color region by computing Euclidean distances over the input image, using the color and chroma of 20 skin-color sample images as feature vectors. Within the extracted face-region candidate, the eyes are detected in the C space of the CMY color model and the mouth in the Q space of the YIQ color model, and the face region is then determined using knowledge of typical facial layout. In an experiment with 40 color face images as input, the method showed a face detection rate of 100%.

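As a rough illustration of the Euclidean-distance skin-color step described in the abstract above, the sketch below masks pixels whose chroma lies close to the mean of a set of skin samples. The YCrCb chroma plane, the threshold, and the sample format are assumptions made for the example, not the authors' exact settings.

```python
# Minimal sketch: Euclidean-distance skin-color masking (illustrative, not the paper's exact method).
import cv2
import numpy as np

def skin_mask(bgr_image, skin_samples_crcb, threshold=18.0):
    """Label a pixel as skin if its (Cr, Cb) chroma lies within `threshold`
    (Euclidean distance) of the mean chroma of the skin samples."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    chroma = ycrcb[..., 1:3]                      # drop luminance, keep (Cr, Cb)
    mean_skin = skin_samples_crcb.mean(axis=0)    # reference chroma vector
    dist = np.linalg.norm(chroma - mean_skin, axis=-1)
    return (dist < threshold).astype(np.uint8) * 255

# Usage: skin_samples_crcb would come from sample patches such as the 20 mentioned
# above, e.g. np.array([[150, 105], [148, 110], ...], dtype=np.float32).
```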

Rotated Face Detection Using Polar Coordinate Transform and AdaBoost (극좌표계 변환과 AdaBoost를 이용한 회전 얼굴 검출)

  • Jang, Kyung-Shik
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.7 / pp.896-902 / 2021
  • Rotated face detection is required in many applications but still remains a challenging task due to the large variation in face appearance. In this paper, a polar coordinate transform that is unaffected by rotation is proposed, along with a method for effectively detecting rotated faces in the transformed image. The proposed polar coordinate transform preserves the spatial relationships between facial components such as the eyes and mouth, since their positions are maintained regardless of the rotation angle, thereby eliminating rotation effects. The polar-transformed images are used to train an AdaBoost detector of the kind used for frontal face detection, and rotated faces are then detected. Detected faces are validated with an LBP classifier trained on non-face images. Experiments on 3,600 face images obtained by rotating images from the BioID database show a rotated-face detection rate of 96.17%. Rotated faces were also detected accurately in images with backgrounds containing multiple rotated faces.
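The following sketch illustrates the rotation-normalizing idea only: a candidate patch is remapped into polar coordinates with OpenCV, so that in-plane rotation becomes a cyclic vertical shift, and the result is passed to an AdaBoost (Haar) cascade. The stock frontal-face cascade used here is a stand-in for the cascade the paper trains on polar-transformed faces; patch handling and parameters are illustrative.

```python
# Sketch: polar-coordinate remapping of a candidate patch, then an AdaBoost (Haar) cascade.
# The stock frontal-face cascade is only a stand-in for the paper's polar-trained cascade.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def to_polar(patch_gray):
    h, w = patch_gray.shape
    center = (w / 2.0, h / 2.0)
    max_radius = min(center)
    # Rotation of the input becomes a cyclic vertical shift in the polar image.
    return cv2.warpPolar(patch_gray, (w, h), center, max_radius,
                         cv2.WARP_POLAR_LINEAR)

def detect_in_polar(patch_gray):
    polar = to_polar(patch_gray)
    return cascade.detectMultiScale(polar, scaleFactor=1.1, minNeighbors=3)
```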

Automatic Face and Eyes Detection: A Scale and Rotation Invariant Approach based on Log-Polar Mapping (Log-Polar 사상의 크기와 회전 불변 특성을 이용한 얼굴과 눈 검출)

  • Choi, Il;Chien, Sung-Il
    • Journal of the Korean Institute of Telematics and Electronics S / v.36S no.8 / pp.88-100 / 1999
  • Detecting the human face and facial landmarks automatically in an image is an essential step toward a fully automatic face recognition system. In this paper, we present a new approach for automatically detecting the face and its eyes in an input image under scale and rotation variations, using intensity-based template matching with a single log-polar face template. In template-based matching it is necessary to normalize the scale and rotation of the input image to those of the template. The log-polar mapping, which simulates the space-variant human visual system, converts scale changes and rotations of the input image into constant horizontal shifts and cyclic vertical shifts in the output plane. Exploiting this property, candidate log-polar faces mapped at various fixation points of the input image can be shifted over the log-polar plane and matched against the template. The proposed method thus eliminates the need for multi-template or multi-resolution schemes, which otherwise require intensive computation to cope with scale and rotation variations of faces, and it detects the face and its eyes simultaneously. Experimental results on a database of 795 images show a detection rate of over 98%.

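A minimal sketch of the log-polar matching idea follows: candidate fixation points are remapped with OpenCV's log-polar warp, where scale becomes a horizontal shift and rotation a cyclic vertical shift, and each patch is scored against a single log-polar face template. The patch size, radius, and scoring (normalized cross-correlation) are assumptions, not the paper's exact pipeline.

```python
# Sketch: log-polar mapping at candidate fixation points, matched against one template.
import cv2
import numpy as np

def log_polar(gray, center, dsize=(64, 64), max_radius=60):
    # Scale changes become horizontal shifts, rotations become cyclic vertical shifts.
    return cv2.warpPolar(gray.astype(np.float32), dsize, center, max_radius,
                         cv2.WARP_POLAR_LOG)

def best_fixation(gray, template_logpolar, candidate_centers):
    """Return the candidate center whose log-polar patch best matches the template.
    `template_logpolar` is assumed to be a float32 log-polar face patch."""
    best, best_score = None, -1.0
    for c in candidate_centers:
        patch = log_polar(gray, c, template_logpolar.shape[::-1])
        score = cv2.matchTemplate(patch, template_logpolar,
                                  cv2.TM_CCOEFF_NORMED).max()
        if score > best_score:
            best, best_score = c, score
    return best, best_score
```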

Detecting Faces on Still Images using Sub-block Processing (서브블록 프로세싱을 이용한 정지영상에서의 얼굴 검출 기법)

  • Yoo Chae-Gon
    • The KIPS Transactions:PartB / v.13B no.4 s.107 / pp.417-420 / 2006
  • Detection of faces in still color images with arbitrary backgrounds is attempted in this paper. The proposed method is invariant to background, number of faces, scale, orientation, skin color, and illumination through the steps of color clustering, cluster scanning, sub-block processing, face-area detection, and face verification. The sub-block processing makes the method invariant to the size and number of faces in the image. The proposed method needs no pre-training step or preliminary face database, and it may be applied to areas such as security control, video and photo indexing, and other automatic computer-vision fields.
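The sub-block idea can be sketched as scanning a binary color-cluster mask in fixed-size blocks and keeping blocks that are mostly covered by the cluster, which makes the candidate search independent of face size and count. The block size and fill-ratio threshold below are illustrative, not values from the paper.

```python
# Sketch: sub-block scanning of a binary cluster mask (block size and threshold illustrative).
import numpy as np

def scan_subblocks(cluster_mask, block=16, min_fill=0.6):
    """Return (row, col, block) tuples for sub-blocks mostly covered by the cluster,
    as coarse face-area candidates. `cluster_mask` is a uint8 0/255 mask."""
    h, w = cluster_mask.shape
    hits = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            fill = cluster_mask[y:y + block, x:x + block].mean() / 255.0
            if fill >= min_fill:
                hits.append((y, x, block))
    return hits
```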

Facial-feature Detection in Color Images using Chrominance Components and Mean-Gray Morphology Operation (색도정보와 Mean-Gray 모폴로지 연산을 이용한 컬러영상에서의 얼굴특징점 검출)

  • 강영도;양창우;김장형
    • Journal of the Korea Institute of Information and Communication Engineering / v.8 no.3 / pp.714-720 / 2004
  • In detecting human faces in color images, additional geometric computation is often necessary to validate face-candidate regions of various shapes. In this paper, we propose a method that detects facial features using the chrominance components of color, which are not affected by face occlusion or orientation. The proposed algorithm uses the property that the Cb and Cr components show consistent differences around the facial features, especially the eye area. We designed a Mean-Gray morphology operator to emphasize the feature areas in the eye-map image, which is generated from basic chrominance differences. Experimental results show that this method can effectively detect facial features across various face-candidate regions.
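A minimal sketch of the chrominance eye-map follows, using the Cb/Cr difference and a standard grayscale closing in place of the paper's Mean-Gray morphology operator, which is not reproduced here; the kernel size is an assumption.

```python
# Sketch: eye-map from the Cb/Cr chrominance difference, emphasized with a grayscale closing.
import cv2
import numpy as np

def eye_map(bgr_image, kernel_size=5):
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    cr, cb = ycrcb[..., 1], ycrcb[..., 2]
    diff = cv2.normalize(cb - cr, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    # Closing fills small dark gaps so the eye regions stand out as bright blobs.
    return cv2.morphologyEx(diff, cv2.MORPH_CLOSE, kernel)
```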

Robot vision system for face tracking using color information from video images (로봇의 시각시스템을 위한 동영상에서 칼라정보를 이용한 얼굴 추적)

  • Jung, Haing-Sup;Lee, Joo-Shin
    • Journal of Advanced Navigation Technology / v.14 no.4 / pp.553-561 / 2010
  • This paper proposed a face-tracking method that can be effectively applied to a robot's vision system. The proposed algorithm tracks facial regions after detecting areas of motion in the video. Motion is detected by computing the difference image between two consecutive frames and then removing noise with a median filter and erosion/dilation operations. Skin color is extracted from the moving area using the color information of sample images: membership functions are generated from MIN-MAX values as fuzzy data, and the skin-color region is separated from the background by evaluating similarity. Within the face-candidate region, the eyes are detected from the C channel of the CMY color space and the mouth from the Q channel of the YIQ color space, and the face region is tracked from the eye and mouth features using knowledge-based rules. The experiment covers 1,500 video frames from 10 subjects (150 frames per subject). The results show a motion detection rate of 95.7% (motion areas detected in 1,435 frames) and a successful face-tracking rate of 97.6% (1,401 faces tracked).
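The motion-detection front end described in the abstract can be sketched as a frame difference followed by median filtering and erosion/dilation; the threshold, kernel, and iteration counts below are illustrative rather than the authors' settings.

```python
# Sketch: motion mask from two consecutive grayscale frames (uint8), with noise removal.
import cv2
import numpy as np

def motion_mask(prev_gray, curr_gray, diff_thresh=25):
    diff = cv2.absdiff(curr_gray, prev_gray)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.medianBlur(mask, 5)                 # suppress salt-and-pepper noise
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.erode(mask, kernel, iterations=1)   # remove small speckles
    mask = cv2.dilate(mask, kernel, iterations=2)  # restore the moving region
    return mask
```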

Robust pupil detection and gaze tracking under occlusion of eyes

  • Lee, Gyung-Ju;Kim, Jin-Suh;Kim, Gye-Young
    • Journal of the Korea Society of Computer and Information / v.21 no.10 / pp.11-19 / 2016
  • Because displays are now large and take various forms, previous gaze-tracking methods cannot be applied directly; mounting the gaze-tracking camera above the display resolves the problem of display size and height, but the infrared corneal-reflection information used by previous methods is then unavailable. This paper proposes a pupil detection method that is robust to eye occlusion and a simple way of calculating the gaze position using the inner eye corner, the pupil center, and face-pose information. In the proposed method, frames for gaze tracking are captured by switching the camera between wide-angle and narrow-angle modes according to the person's position: when a face is detected in the field of view (FOV) in wide-angle mode, the camera switches to narrow-angle mode using the computed face position, and the narrow-angle frame contains the gaze-direction information of a person at long distance. Gaze calculation consists of a face-pose estimation step and a gaze-direction calculation step. Face pose is estimated by mapping the feature points of the detected face to a 3D model. To calculate the gaze direction, an ellipse is first fitted from the edge information of the iris and pupil using a splitting approach; if the pupil is occluded, its position is estimated with a deformable template. The gaze position on the display is then calculated from the pupil center, the inner eye corner, and the face-pose information. Experiments show that the proposed gaze-tracking algorithm overcomes the constraints imposed by display form and effectively calculates the gaze direction of a person at long distance using a single camera, as demonstrated over varying distances.
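As a rough sketch of the pupil-fitting and gaze-offset steps, the code below fits an ellipse to the largest dark blob in an eye patch and maps the pupil-to-inner-corner vector to a display offset. The dark threshold, the blob heuristic, and the gain factor are assumptions; the paper's edge splitting, deformable template, and 3D face-pose mapping are not reproduced.

```python
# Sketch: pupil center from an ellipse fit, and a coarse gaze offset (parameters illustrative).
import cv2
import numpy as np

def pupil_center(eye_gray, dark_thresh=50):
    """Fit an ellipse to the largest dark blob in an eye patch and return its center."""
    _, dark = cv2.threshold(eye_gray, dark_thresh, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(dark, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contours = [c for c in contours if len(c) >= 5]        # fitEllipse needs 5+ points
    if not contours:
        return None
    (cx, cy), _, _ = cv2.fitEllipse(max(contours, key=cv2.contourArea))
    return np.array([cx, cy], dtype=np.float32)

def gaze_offset(pupil_xy, inner_corner_xy, gain=40.0):
    """Map the pupil-to-inner-corner vector to a displacement on the display plane."""
    return gain * (pupil_xy - np.asarray(inner_corner_xy, dtype=np.float32))
```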

Algorithm of Face Region Detection in the TV Color Background Image (TV컬러 배경영상에서 얼굴영역 검출 알고리즘)

  • Lee, Joo-Shin
    • Journal of Advanced Navigation Technology / v.15 no.4 / pp.672-679 / 2011
  • In this paper, a face-region detection algorithm based on skin color in TV images is proposed. First, a reference image is set from sampled skin color, and face-region candidates are extracted using the Euclidean distance between it and the pixels of the TV image. The eye image is detected using the mean and standard deviation of the color-difference component between Y and C after converting the RGB color model to the CMY color model. The lip image is detected using the Q component after converting the RGB color model to the YIQ color space. The face region is then extracted knowledge-based by logically combining the eye image and the lip image. To verify the proposed method, experiments were performed on frontal color images captured from TV. The experimental results showed that the face region can be detected irrespective of the location and size of the face.
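The color-model conversions used above can be sketched directly: the C channel of CMY for the eye cue and the Q channel of YIQ for the lip cue, combined logically into a coarse mask. The thresholds and the OR combination are assumptions for illustration, not the paper's knowledge-based rules.

```python
# Sketch: RGB -> CMY (C channel, eyes) and RGB -> YIQ (Q channel, lips), combined logically.
import numpy as np

def rgb_to_cmy(rgb):                       # rgb in [0, 1], shape (H, W, 3)
    return 1.0 - rgb                       # C = 1 - R, M = 1 - G, Y = 1 - B

def rgb_to_yiq(rgb):
    m = np.array([[0.299,  0.587,  0.114],
                  [0.596, -0.274, -0.322],
                  [0.211, -0.523,  0.312]])
    return rgb @ m.T                       # channels: Y, I, Q

def feature_cues(rgb, eye_thresh=0.6, lip_thresh=0.08):
    c_channel = rgb_to_cmy(rgb)[..., 0]
    q_channel = rgb_to_yiq(rgb)[..., 2]
    eye_map = c_channel > eye_thresh       # dark regions score high in C
    lip_map = q_channel > lip_thresh       # lips tend to have a high Q value
    return eye_map | lip_map               # coarse logical combination of the two cues
```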

Face Detection Method based Fusion RetinaNet using RGB-D Image (RGB-D 영상을 이용한 Fusion RetinaNet 기반 얼굴 검출 방법)

  • Nam, Eun-Jeong;Nam, Chung-Hyeon;Jang, Kyung-Sik
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.4 / pp.519-525 / 2022
  • Face detection, the task of locating a person's face in an image, is used as a preprocessing or core step in various image-processing applications. Neural-network models, which have recently performed well thanks to advances in deep learning, depend on 2D images, so when the image is degraded, for example by poor camera quality or poor focus on the face, the face may not be detected properly. In this paper, we propose a face detection method that additionally uses depth information to reduce this dependence on 2D images. The proposed model was trained after generating and preprocessing depth information for a face detection dataset in advance; as a result, the FRN model achieved 89.16%, about 1.2 percentage points better than the RetinaNet model's 87.95%.
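A minimal sketch of the kind of depth preparation the abstract refers to is shown below: missing depth values are inpainted, the depth map is normalized, and it is stacked as a fourth channel alongside RGB. The hole-filling and normalization choices are assumptions, not the FRN preprocessing pipeline.

```python
# Sketch: build a 4-channel RGB-D input from a color image and a raw depth map.
import cv2
import numpy as np

def make_rgbd(bgr, depth_mm):
    """Return an (H, W, 4) float32 array: RGB in [0, 1] plus a filled, normalized depth channel."""
    holes = (depth_mm == 0).astype(np.uint8)                    # missing depth measurements
    depth8 = cv2.normalize(depth_mm.astype(np.float32), None, 0, 255,
                           cv2.NORM_MINMAX).astype(np.uint8)
    depth8 = cv2.inpaint(depth8, holes, 3, cv2.INPAINT_TELEA)   # fill the holes
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    return np.dstack([rgb, depth8.astype(np.float32) / 255.0])
```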

Caricaturing using Local Warping and Edge Detection (로컬 와핑 및 윤곽선 추출을 이용한 캐리커처 제작)

  • Choi, Sung-Jin;Bae, Hyeon;Kim, Sung-Shin;Woo, Kwang-Bang
    • Journal of the Korean Institute of Intelligent Systems / v.13 no.4 / pp.403-408 / 2003
  • In general, a caricature is a representation, especially pictorial or literary, in which the subject's distinctive features or peculiarities are deliberately exaggerated to produce a comic or grotesque effect. In other words, a caricature is a rough sketch (dessin) made by detecting features of a human face and exaggerating or warping them. Many computer-based methods for producing a caricature from a human face have been developed. In this paper, we propose a new caricaturing system. The system takes a real-time or supplied image as input, processes it in four steps, and finally creates a caricatured image. The four processing steps are as follows. The first step detects a face in the input image. The second step extracts specific coordinate values as facial geometric information. The third step deforms the face image using a local warping method and the coordinates acquired in the second step. In the fourth step, the system transforms the deformed image into an improved edge image using a fuzzy Sobel method and then creates the final caricatured image. The resulting caricaturing system is simpler than existing systems in the way it creates a caricatured image, and it does not require complex algorithms combining many image-processing methods such as image recognition, transformation, and edge detection.
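The edge-extraction and local-warping steps can be sketched as a plain Sobel magnitude image and a simple radial warp that exaggerates the area around one landmark; the paper's fuzzy Sobel method and its landmark-driven deformation are not reproduced, and the radius and strength values are illustrative.

```python
# Sketch: Sobel edge image plus a simple radial local warp around one landmark.
import cv2
import numpy as np

def edge_image(gray):
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    mag = cv2.magnitude(gx, gy)
    return cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

def local_warp(image, center, radius=60, strength=0.35):
    """Exaggerate the area around `center` by magnifying it radially."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    dx, dy = xs - center[0], ys - center[1]
    r = np.sqrt(dx * dx + dy * dy)
    factor = np.where(r < radius, 1.0 - strength * (1.0 - r / radius), 1.0)
    map_x = center[0] + dx * factor          # sample closer to the center -> blow up
    map_y = center[1] + dy * factor
    return cv2.remap(image, map_x, map_y, cv2.INTER_LINEAR)
```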