• Title/Summary/Keyword: Face Area

Search Result 1,199, Processing Time 0.031 seconds

Definition of Optimal Face Region for Face Recognition with Phase-Only Correlation (위상 한정 상관법으로 얼굴을 인식하기 위한 최적 얼굴 영역의 정의)

  • Lee, Choong-Ho
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.13 no.3
    • /
    • pp.150-155
    • /
    • 2012
  • POC (Phase-Only Correlation) is a useful method that can perform face recognition without feature extraction or eigenfaces, but it applies the Fourier transform to square areas. In this paper, we propose an effective face area to increase the performance of face recognition using POC. Specifically, three areas are tested with POC. The first is the square area that includes the head and surrounding space. The second is the square area spanning from ear to ear horizontally and from the end of the chin to the forehead vertically. The third is the square area from the line under the lips to the forehead vertically and from cheek to cheek horizontally. Experimental results show that the second face area is the best of the three for defining the threshold for POC.
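The core of POC can be sketched in a few lines: the cross-spectrum of two images is normalized to unit magnitude so that only phase information remains, and the inverse transform yields a sharp correlation peak whose height measures similarity and whose location gives the translation. A minimal NumPy sketch (image size and the epsilon guard are illustrative choices, not from the paper):

```python
import numpy as np

def phase_only_correlation(f, g):
    """POC surface between two equally sized grayscale images."""
    F = np.fft.fft2(f)
    G = np.fft.fft2(g)
    cross = F * np.conj(G)
    # keep phase only: normalize every spectral component to unit magnitude
    cross /= np.abs(cross) + 1e-12
    return np.real(np.fft.ifft2(cross))

img = np.random.default_rng(0).random((64, 64))
poc = phase_only_correlation(img, img)          # identical images: peak near 1.0 at origin

shifted = np.roll(img, (5, 3), axis=(0, 1))
poc2 = phase_only_correlation(shifted, img)
peak = np.unravel_index(np.argmax(poc2), poc2.shape)  # peak recovers the (5, 3) shift
```

Because the magnitudes are discarded, the peak height stays comparable across images, which is what makes a single detection threshold (the paper's concern) meaningful.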

Detection Method of Human Face, Facial Components and Rotation Angle Using Color Value and Partial Template (컬러정보와 부분 템플릿을 이용한 얼굴영역, 요소 및 회전각 검출)

  • Lee, Mi-Ae;Park, Ki-Soo
    • The KIPS Transactions:PartB
    • /
    • v.10B no.4
    • /
    • pp.465-472
    • /
    • 2003
  • For effective pre-processing of a face input image, it is necessary to detect each of the facial components, calculate the face area, and estimate the rotation angle of the face. The proposed method produces robust results under conditions such as varying levels of illumination, variable face sizes, face rotation angles, and background colors similar to the skin color of the face. The first step detects the estimated face area using both adapted skin color information in the band-wide HSV color space converted from RGB, and skin color information from a histogram. Using these results, we detect a lip area within the estimated face area. After estimating the rotation angle of the lip area along the X axis, the method determines the face shape based on face information. After detecting the eyes in the face area by matching a partial template made from both eyes, we estimate the Y-axis rotation angle by calculating the eyes' locations in three-dimensional space with respect to the face area. Experiments on various face images verified the effectiveness of the proposed algorithm.
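The skin-color step described above, converting RGB to HSV and keeping pixels that fall inside a skin band, can be sketched with the standard library; the band thresholds here are illustrative assumptions, not the paper's calibrated values:

```python
import colorsys

def is_skin_hsv(r, g, b):
    """Rough per-pixel skin test in HSV space (thresholds are illustrative)."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    # skin hues sit near red/orange; moderate saturation; not too dark
    return (h < 0.14 or h > 0.95) and 0.15 < s < 0.75 and v > 0.35

print(is_skin_hsv(220, 170, 140), is_skin_hsv(20, 40, 200))  # → True False
```

Working in HSV rather than RGB is what gives the method its tolerance to illumination level: brightness changes mostly move V while leaving the hue band intact.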

A New Face Tracking Method Using Block Difference Image and Kalman Filter in Moving Picture (동영상에서 칼만 예측기와 블록 차영상을 이용한 얼굴영역 검출기법)

  • Jang, Hee-Jun;Ko, Hye-Sun;Choi, Young-Woo;Han, Young-Joon;Hahn, Hern-Soo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.15 no.2
    • /
    • pp.163-172
    • /
    • 2005
  • When tracking a human face in moving pictures with a complex background under irregular lighting conditions, the detected face can be too large, including background, or too small, including only part of the face; the background itself can even be detected as a face area. To solve these problems, this paper proposes a new face tracking method using a block difference image and a Kalman estimator. The block difference image detects even a small motion of a person, and the face area is selected using skin color inside the detected motion area. If pixels with skin color exist inside the detected motion area, the boundary of the area is represented as a code sequence using the 8-neighbor window, and the head area is detected by analyzing this code. The pixels in the head area are segmented by color, and the region most similar to skin color is taken as the face area. The detected face area is represented by a rectangle enclosing it, and the rectangle's four vertices are used as the states of the Kalman estimator to track the motion of the face area. Experiments show that the proposed method increases the accuracy of face detection and significantly reduces the face detection time.
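The tracking idea, feeding the rectangle's vertex coordinates to a Kalman estimator, can be sketched with one constant-velocity filter per coordinate (eight filters for the four vertices). A minimal NumPy sketch under assumed noise settings; the paper does not state its exact model:

```python
import numpy as np

class Kalman1D:
    """Constant-velocity Kalman filter for one bounding-box coordinate."""
    def __init__(self, x0, q=1e-2, r=1.0):
        self.x = np.array([x0, 0.0])        # state: [position, velocity]
        self.P = np.eye(2)                  # state covariance
        self.F = np.array([[1.0, 1.0],      # position += velocity each frame
                           [0.0, 1.0]])
        self.H = np.array([[1.0, 0.0]])     # we observe position only
        self.Q = q * np.eye(2)              # process noise (assumed)
        self.R = np.array([[r]])            # measurement noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[0]                    # predicted vertex position

    def update(self, z):
        y = z - self.H @ self.x             # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x[0]                    # filtered vertex position

# track one vertex x-coordinate moving 2 px per frame
kf = Kalman1D(0.0)
est = 0.0
for t in range(1, 30):
    kf.predict()
    est = kf.update(float(2 * t))
```

The prediction step is what lets the tracker search only a small window around the expected rectangle in the next frame, which is the source of the reported speed-up.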

A New Face Detection Method using Combined Features of Color and Edge under the illumination Variance (컬러와 에지정보를 결합한 조명변화에 강인한 얼굴영역 검출방법)

  • Ji, Eun-Mi;Yoon, Ho-Sub;Lee, Sang-Ho
    • Journal of KIISE:Software and Applications
    • /
    • v.29 no.11
    • /
    • pp.809-817
    • /
    • 2002
  • This paper describes a new face detection method used as a pre-processing algorithm for on-line face recognition. To overcome the weakness of previous face detection methods that use only edge or color features, we propose two face detection methods: one combining edge and color features, and a center-area color sampling method. To prevent the face area from being connected to background regions of the same color, we first propose a new adaptive edge detection algorithm. This algorithm is robust to illumination variance, extracting many edges and reliably breaking the edges at the border between the background and face areas. Because of this strong edge detection, the face area appears as one or more regions. We merge these isolated regions using color information and obtain the final face area as an MBR (Minimum Bounding Rectangle). If the size of the final face area is below or above a threshold, a color sampling method applied to the center area of the input image is used to detect a new face area. To evaluate the proposed method, we experimented with 2,100 face images and obtained a high face detection rate of 96.3%.
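The final merging step, reporting the merged face regions as an MBR, reduces to taking the bounding box of the face-pixel mask. A small sketch:

```python
import numpy as np

def mbr(mask):
    """Minimum Bounding Rectangle (top, left, bottom, right) of the
    True pixels in a boolean face mask."""
    rows = np.flatnonzero(mask.any(axis=1))
    cols = np.flatnonzero(mask.any(axis=0))
    return int(rows[0]), int(cols[0]), int(rows[-1]), int(cols[-1])

mask = np.zeros((10, 10), dtype=bool)
mask[2:5, 3:8] = True          # two merged skin regions
mask[6:8, 4:6] = True
print(mbr(mask))               # → (2, 3, 7, 7)
```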

Facial Shape Recognition Using Self Organized Feature Map(SOFM)

  • Kim, Seung-Jae;Lee, Jung-Jae
    • International journal of advanced smart convergence
    • /
    • v.8 no.4
    • /
    • pp.104-112
    • /
    • 2019
  • This study proposes a robust detection algorithm that detects faces more stably under changes in lighting and rotation for identifying a face shape. The proposed algorithm takes the face shape as input in a single-camera environment and segments only the face area through a preprocessing step. However, it is not easy to accurately recognize a face area that is sensitive to lighting changes and has a large degree of freedom, and the error range is large. In this paper, we separate the background and face area using the brightness difference between two images, one taken under bright light and one taken under dark light, to increase the recognition rate. After separating only the face region, the face shape is recognized using the Self-Organizing Feature Map (SOFM) algorithm. SOFM first selects an initial top neuron through the learning process. Second, the top neuron is renewed by competing again with its neighboring neurons in the competition process. Third, the final top neuron is selected by repeating the learning and competition processes. The competition goes through a three-step learning process to ensure that the top neurons are updated well among neurons. Using this SOFM neural network algorithm, we aim to implement a stable and robust real-time face shape recognition system.
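The competition/learning loop described above corresponds to the standard SOFM update: find the winning ("top") neuron for a sample, then pull it and its grid neighbours toward the input with a decaying learning rate and neighbourhood radius. A minimal 1-D sketch (unit count, rates, and radii are illustrative assumptions):

```python
import numpy as np

def train_sofm(data, n_units=4, epochs=50, lr0=0.5, radius0=2.0, seed=0):
    """Minimal 1-D Self-Organizing Feature Map."""
    rng = np.random.default_rng(seed)
    w = rng.random((n_units, data.shape[1]))            # weight vectors
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)               # decaying learning rate
        radius = max(radius0 * (1.0 - epoch / epochs), 0.5)
        for x in data[rng.permutation(len(data))]:
            bmu = np.argmin(np.linalg.norm(w - x, axis=1))   # competition: winner
            d = np.abs(np.arange(n_units) - bmu)             # distance on the grid
            h = np.exp(-(d ** 2) / (2 * radius ** 2))        # neighbourhood function
            w += lr * h[:, None] * (x - w)                   # cooperative update
    return w

# two well-separated clusters: the trained map should place units near both
data = np.vstack([np.zeros((20, 2)), np.ones((20, 2))])
w = train_sofm(data)
```

The shrinking radius is what implements the paper's staged competition: early epochs order the whole map coarsely, late epochs fine-tune individual winners.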

Implicit Distinction of the Race underlying the Perception of Faces by Event-Related fMRI

  • Kim, Jeong-Seok;Kim, Bum-Soo;Jeun, Sin-Soo;Lee, Kang-Hee;Jung, So-Lyung;Choe, Bo-Young
    • Proceedings of the Korean Society of Medical Physics Conference
    • /
    • 2004.11a
    • /
    • pp.49-52
    • /
    • 2004
  • A few studies have shown that the fusiform face area is selectively involved in the perception of faces, including race differences. We investigated the neural substrates of the face-selective region called the fusiform face area (FFA) in the ventral occipital-temporal cortex, and same-race memory superiority in the FFA, using event-related fMRI. In our fMRI study, twelve healthy subjects (Oriental-Korean) performed implicit race distinction while consciously making familiarity judgments, regardless of whether they considered a face Oriental-Korean or European-American. In the race distinction as an implicit task, the FFA and the right parahippocampal gyrus responded more strongly to Oriental-Korean than to European-American faces, but in the conscious race distinction between Oriental-Korean and European-American faces, no significant difference was observed in the FFA. These results suggest that the differential activation in the fusiform regions and right parahippocampal gyrus resulting from same-race memory superiority could take place implicitly through the physiological processes of face recognition.


A Fast Method for Face Detection Based on PCA and SVM (PCA와 SVM에 기반하는 빠른 얼굴탐지 방법)

  • Xia, Chun-Lei;Shin, Hyeon-Gab;Park, Myeong-Chul;Ha, Seok-Wun
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.11 no.6
    • /
    • pp.1129-1135
    • /
    • 2007
  • Human face detection plays an important role in computer vision. It has many applications, such as face recognition, video surveillance, human-computer interfaces, face image database management, and image database querying. In this paper, a fast face detection approach using Principal Component Analysis (PCA) and Support Vector Machines (SVM) is proposed, based on previous studies of face detection techniques. The proposed detection system first filters potential face areas using a statistical feature generated by analyzing the local histogram distribution; the detection process is sped up by eliminating most of the non-face area in this step. Next, PCA feature vectors are generated, and an SVM classifier decides whether faces are present in the test image. Finally, the detection results are stored and rendered on the test image. The test images in this paper are from the CMU face database, and the face and non-face samples are selected from the MIT data set. The experimental results indicate that the proposed method performs well for face detection.
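The PCA-then-SVM pipeline can be sketched on synthetic data. Note the classifier below is a tiny hinge-loss linear SVM trained by sub-gradient descent rather than a library solver, and the "face"/"non-face" vectors are random stand-ins for image patches, so everything beyond the two-stage structure is an illustrative assumption:

```python
import numpy as np

def pca_fit(X, k):
    """Fit PCA: return the sample mean and the top-k principal axes."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def pca_transform(X, mu, axes):
    """Project samples onto the principal axes."""
    return (X - mu) @ axes.T

def train_linear_svm(X, y, epochs=500, lr=0.01, lam=0.01):
    """Tiny hinge-loss linear SVM via sub-gradient descent; y in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) < 1:        # margin violated: push the boundary
                w += lr * (yi * xi - lam * w)
                b += lr * yi
            else:                            # only regularization shrinkage
                w -= lr * lam * w
    return w, b

# synthetic stand-ins for "face" (+1) and "non-face" (-1) feature vectors
rng = np.random.default_rng(1)
faces = rng.normal(5.0, 1.0, (30, 20))
nonfaces = rng.normal(0.0, 1.0, (30, 20))
X = np.vstack([faces, nonfaces])
y = np.array([1] * 30 + [-1] * 30)

mu, axes = pca_fit(X, k=5)
Z = pca_transform(X, mu, axes)               # low-dimensional PCA features
w, b = train_linear_svm(Z, y)
accuracy = (np.sign(Z @ w + b) == y).mean()
```

Classifying in the 5-dimensional PCA space instead of the raw 20-dimensional space is the same trade the paper makes: a cheaper classifier at the cost of a fixed projection step.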

Development of Pose-Invariant Face Recognition System for Mobile Robot Applications

  • Lee, Tai-Gun;Park, Sung-Kee;Kim, Mun-Sang;Park, Mig-Non
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2003.10a
    • /
    • pp.783-788
    • /
    • 2003
  • In this paper, we present a new approach to detecting and recognizing human faces in images from a vision camera mounted on a mobile robot platform. Due to the mobility of the camera platform, the obtained facial images are small and vary in pose, so the algorithm must cope with these constraints and detect and recognize faces in nearly real time. In the detection step, a coarse-to-fine strategy is used. First, the region boundary containing the face is roughly located by dual ellipse templates of facial color, and within this region the locations of the three main facial features, the two eyes and the mouth, are estimated. For this, simplified facial feature maps using characteristic chrominance are constructed, and candidate pixels are segmented into eye or mouth pixel groups. These candidate facial features are verified by checking whether the length and orientation of the feature pairs are suitable for face geometry. In the recognition step, a pseudo-convex hull area of the gray face image is defined, which includes the feature triangle connecting the two eyes and mouth. A random lattice line set is composed and laid on this convex hull area, and the 2D appearance of this area is represented. From these procedures, facial information for the detected face is obtained, and the face DB images are processed similarly for each person class. Based on the facial information of these areas, a distance measure of lattice-line matches is calculated, and the face image is recognized using this measure as a classifier. The proposed detection and recognition algorithms overcome the constraints of a previous approach [15], make real-time face detection and recognition possible, and guarantee correct recognition regardless of some pose variation of the face. Their usefulness in mobile robot applications is demonstrated.


Development of Virtual Makeup Tool based on Mobile Augmented Reality

  • Song, Mi-Young;Kim, Young-Sun
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.1
    • /
    • pp.127-133
    • /
    • 2021
  • In this study, an augmented reality-based makeup tool was built to analyze the user's face shape against face-type reference model data and to provide virtual makeup suited to that face type. To analyze the face shape, the face is first recognized in the image captured by the camera, and the features of the face contour area are extracted and used as analysis properties. Next, the extracted contour feature points are normalized so they can be compared with the contour characteristics of each face-type reference model. The face shape is predicted and analyzed using the distance differences between the normalized contour feature points and the feature points of each face-type reference model. In augmented reality-based virtual makeup, the face is recognized in real time in the camera image, and the features of each facial area are extracted. Through the face-type analysis process, users can check the results of virtual makeup that matches their analyzed face shape. Through the proposed system, we expect cosmetics consumers to be able to check the makeup designs that suit them, which should conveniently influence their decisions to purchase cosmetics. It will also help users create an attractive self-image by applying facial makeup to their virtual selves.
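The analysis step, normalizing contour feature points and comparing them with each face-type reference model by point-to-point distance, can be sketched as a nearest-reference classifier. The reference contours below are hypothetical stand-ins, not the paper's model data:

```python
import numpy as np

def normalize_contour(points):
    """Remove translation and scale so contours of different face sizes
    and positions become comparable."""
    p = points - points.mean(axis=0)
    return p / np.linalg.norm(p)

def classify_face_shape(contour, reference_models):
    """Return the face type whose reference contour has the smallest
    summed point-to-point distance to the normalized input contour."""
    q = normalize_contour(contour)
    best, best_d = None, np.inf
    for name, ref in reference_models.items():
        d = np.linalg.norm(q - normalize_contour(ref), axis=1).sum()
        if d < best_d:
            best, best_d = name, d
    return best

# hypothetical reference contours: 8 landmarks on a circle vs. an ellipse
theta = np.linspace(0, 2 * np.pi, 8, endpoint=False)
round_face = np.c_[np.cos(theta), np.sin(theta)]
long_face = np.c_[np.cos(theta), 1.5 * np.sin(theta)]
models = {"round": round_face, "long": long_face}

# a scaled and translated round contour should still match "round"
probe = 3.0 * round_face + np.array([10.0, 20.0])
result = classify_face_shape(probe, models)
```

The normalization is the essential part: without it, a large round face would be closer to a large "long" reference than to a small "round" one.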

Face Region Detection Algorithm using Euclidean Distance of Color-Image (칼라 영상에서 유클리디안 거리를 이용한 얼굴영역 검출 알고리즘)

  • Jung, Haing-sup;Lee, Joo-shin
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.2 no.3
    • /
    • pp.79-86
    • /
    • 2009
  • This study proposes a method of detecting the facial area by calculating Euclidean distances among skin color elements and extracting the characteristics of the face. The proposed algorithm consists of light calibration and face detection. The light calibration process compensates for changes in lighting. The face detection process extracts the skin color area by calculating Euclidean distances from the input images, using the color and chroma of 20 skin color sample images as characteristic vectors. From the extracted facial area candidates, the eyes were detected in the C space of the CMY color model, and the mouth was detected in the Q space of the YIQ color model. The facial area was then detected based on knowledge of a typical face. In an experiment with 40 color face images as input, the method showed a face detection rate of 100%.
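The skin extraction step, labelling a pixel as skin when its Euclidean distance in color space to the nearest sample is small, can be sketched directly; the threshold and sample colours below are illustrative assumptions, not the paper's calibrated values:

```python
import numpy as np

def skin_mask(image, skin_samples, threshold=30.0):
    """Mark a pixel as skin when its Euclidean distance in RGB space to
    the nearest skin-colour sample falls below a threshold."""
    h, w, _ = image.shape
    pixels = image.reshape(-1, 3).astype(float)
    # distance from every pixel to every sample colour
    d = np.linalg.norm(pixels[:, None, :] - skin_samples[None, :, :], axis=2)
    return (d.min(axis=1) < threshold).reshape(h, w)

# toy image: left half skin-toned, right half blue background
img = np.zeros((4, 8, 3), dtype=np.uint8)
img[:, :4] = (220, 170, 140)     # skin-like colour
img[:, 4:] = (20, 40, 200)       # background colour
samples = np.array([[225.0, 175.0, 145.0], [200.0, 150.0, 120.0]])
mask = skin_mask(img, samples)   # True on the left half only
```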
