• Title/Summary/Keyword: Facial Image

Search Results: 820

A Real-Time Facial Region Detection Algorithm for Video Using Analysis of Major Color Components (동영상에서 얼굴의 주색상 밝기 분포를 이용한 실시간 얼굴영역 검출기법)

  • Choi, Mi-Young;Kim, Gye-Young;Choi, Hyung-Il
    • Journal of Digital Contents Society
    • /
    • v.8 no.3
    • /
    • pp.329-339
    • /
    • 2007
  • In this paper we present a facial region detection algorithm for real-time video with complex backgrounds and varying illumination, using both spatial and temporal information. To detect the human region, the algorithm accumulates edge-difference images between consecutive frames. The detected facial candidate region is then divided vertically into two parts, and non-facial regions, which lack the major color components of the face, are rejected by analysis of those major color components. The background is further reduced using boundary information, and the facial region is finally localized through horizontal and vertical projections of the image (a minimal sketch of these steps follows this entry). The experiments show that the proposed algorithm detects the facial region robustly in images with complex backgrounds and varying illumination.

  • PDF
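
The frame-differencing and projection steps described in the abstract can be illustrated with a short script. This is a minimal sketch, not the authors' implementation: the video path, frame window, Canny thresholds, and projection cutoffs are all illustrative assumptions.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("input.avi")   # hypothetical input video
acc, prev_edges = None, None

for _ in range(30):                   # accumulate over a short frame window
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    if prev_edges is not None:
        diff = cv2.absdiff(edges, prev_edges)      # edge-difference image
        acc = diff.astype(np.float32) if acc is None else acc + diff
    prev_edges = edges
cap.release()

if acc is not None:
    mask = (acc > acc.mean()).astype(np.uint8)     # moving (human) region
    rows, cols = mask.sum(axis=1), mask.sum(axis=0)
    ys = np.where(rows > 0.2 * rows.max())[0]      # horizontal projection
    xs = np.where(cols > 0.2 * cols.max())[0]      # vertical projection
    if len(ys) and len(xs):
        print("candidate face box:", (xs[0], ys[0], xs[-1], ys[-1]))
```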

Automatic 3D Facial Movement Detection from Mirror-reflected Multi-Image for Facial Expression Modeling (거울 투영 이미지를 이용한 3D 얼굴 표정 변화 자동 검출 및 모델링)

  • Kyung, Kyu-Min;Park, Mignon;Hyun, Chang-Ho
    • Proceedings of the KIEE Conference
    • /
    • 2005.05a
    • /
    • pp.113-115
    • /
    • 2005
  • This thesis presents a method for 3D modeling of facial expressions from frontal and mirror-reflected multi-images. Since the proposed system uses only one camera, two mirrors, and a simple mirror property, it is robust, accurate, and inexpensive, and it avoids the problem of synchronizing data among different cameras. Mirrors located near the cheeks reflect the side views of markers on the face. To optimize the system, we must select feature points of the face closely associated with human emotions; we therefore refer to the FDP (Facial Definition Parameters) and FAP (Facial Animation Parameters) defined by MPEG-4 SNHC (Synthetic/Natural Hybrid Coding). We place colorful dot markers on the selected feature points to detect facial deformation as the subject makes a variety of expressions (a marker-detection sketch follows this entry). Before computing the 3D coordinates of the extracted facial feature points, we group these points according to the facial part they belong to, which makes the matching process automatic. We experimented on about twenty Koreans in their late twenties and early thirties. Finally, we verify the performance of the proposed method by simulating an animation of 3D facial expressions.

  • PDF
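
As a small illustration of the colored-marker detection mentioned above, the sketch below thresholds an assumed marker color in HSV and takes blob centroids as 2D feature-point observations; the image path and color range are assumptions, and the paper's actual marker detection may differ.

```python
import cv2
import numpy as np

img = cv2.imread("frontal_view.png")                 # hypothetical frame
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Example range for bright red markers (assumed; tune per marker color).
lower, upper = np.array([0, 120, 120]), np.array([10, 255, 255])
mask = cv2.inRange(hsv, lower, upper)

# Each connected blob is treated as one marker; its centroid is the 2D
# observation later used for 3D reconstruction from the mirror views.
n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
markers = [tuple(centroids[i]) for i in range(1, n)
           if stats[i, cv2.CC_STAT_AREA] > 5]
print("detected marker centroids:", markers)
```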

Gaze Tracking on a Computer Monitor Based on Head Pose (얼굴 방향에 기반을 둔 컴퓨터 화면 응시점 추적)

  • 오승환;이희영
    • Proceedings of the IEEK Conference
    • /
    • 2002.06c
    • /
    • pp.227-230
    • /
    • 2002
  • In this paper we concentrate on the overall direction of the gaze, based on head pose, for human-computer interaction. To determine the gaze direction of a user in an image, it is important to extract the facial features accurately. For this, we binarize the input image and search for the two eyes and the mouth using the similarity of each block (aspect ratio, size, and average gray value) and the geometric information of the face in the binarized image. To determine the head orientation, we create an imaginary plane on the lines formed by the features of the real face and the pinhole of the camera; we call it the virtual facial plane. The position of the virtual facial plane is estimated from the facial features projected onto the image plane, and the gaze direction is found from the surface normal vector of the virtual facial plane (a small sketch follows this entry). This study, which uses a popular PC camera, will contribute to the practical use of gaze tracking technology.

  • PDF
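
The gaze direction derived from the surface normal of the virtual facial plane can be illustrated as below; the three feature-point coordinates are placeholder assumptions, not values from the paper.

```python
import numpy as np

# Assumed 3D positions of the two eyes and the mouth on the virtual facial plane.
left_eye  = np.array([-3.0,  1.0, 0.2])
right_eye = np.array([ 3.0,  1.0, 0.2])
mouth     = np.array([ 0.0, -4.0, 0.0])

# The plane normal is the cross product of two in-plane vectors; its unit
# vector is taken as the overall gaze direction.
normal = np.cross(right_eye - left_eye, mouth - left_eye)
gaze_dir = normal / np.linalg.norm(normal)
print("gaze direction (unit normal):", gaze_dir)
```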

Reconstruction of Partially Occluded Facial Image Utilizing KPCA-based Denoising Method (KPCA 기반 노이즈 제거 기법을 이용한 부분 손상된 얼굴 영상의 복원)

  • Kang Daesung;Kim Jongho;Park Jooyoung
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2005.04a
    • /
    • pp.247-250
    • /
    • 2005
  • On numerous occasions there is a need to reconstruct a partially occluded facial image. Typical examples include the identification of criminals whose facial images are captured by surveillance cameras: in such cases a significant part of the face is occluded, making identification extremely difficult both for automatic face recognition systems and for human observers. To overcome these difficulties, this paper considers the application of a kernel PCA-based denoising method to partially occluded facial images (a minimal sketch follows this entry).

  • PDF
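
A minimal sketch of KPCA-based reconstruction using scikit-learn's KernelPCA is shown below; the training data, image size, occlusion pattern, and kernel parameters are assumptions rather than the paper's settings.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

# X_train: rows are vectorized clean face images (placeholder data here).
rng = np.random.default_rng(0)
X_train = rng.random((200, 32 * 32))
x_occluded = X_train[0].copy()
x_occluded[:200] = 0.0                        # simulate a partial occlusion

kpca = KernelPCA(n_components=50, kernel="rbf", gamma=1e-3,
                 fit_inverse_transform=True)
kpca.fit(X_train)

# Project the occluded face onto the kernel eigenspace and map it back:
# the pre-image approximation serves as the reconstructed (denoised) face.
x_restored = kpca.inverse_transform(kpca.transform(x_occluded.reshape(1, -1)))
print(x_restored.shape)                       # (1, 1024)
```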

Emotion Recognition Using Eigenspace

  • Lee, Sang-Yun;Oh, Jae-Heung;Chung, Geun-Ho;Joo, Young-Hoon;Sim, Kwee-Bo
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 2002.10a
    • /
    • pp.111.1-111
    • /
    • 2002
  • The system consists of three parts. The first is the image acquisition part. The second creates the vector image and processes the acquired facial image: the facial area is located from skin color by first finding the skin-color area with the highest weight from the eigenface, which consists of eigenvectors, and the vector image of the eigenface is then created from the obtained facial area (a minimal eigenface sketch follows this entry). The third is the recognition module.

  • PDF
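
A minimal eigenface sketch for the vector-image step is given below; the random training matrix and the number of retained eigenfaces are placeholder assumptions.

```python
import numpy as np

# Placeholder training set: rows are vectorized face crops.
faces = np.random.default_rng(0).random((100, 64 * 64))
mean_face = faces.mean(axis=0)
centered = faces - mean_face

# Right singular vectors of the centered data are the eigenfaces.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:20]                                      # keep top 20

# A new facial area is projected into the eigenspace to get its vector image.
new_face = faces[0]
weights = eigenfaces @ (new_face - mean_face)
print("eigenspace representation:", weights.shape)        # (20,)
```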

Center Position Tracking Enhancement of Eyes and Iris on the Facial Image

  • Chai Duck-hyun;Ryu Kwang-ryol
    • Journal of information and communication convergence engineering
    • /
    • v.3 no.2
    • /
    • pp.110-113
    • /
    • 2005
  • An enhancement of the tracking capability for the center positions of the eye and iris in a facial image is presented. A facial image is acquired with a CCD camera and converted into a binary image. The eye region, characterized by a specific brightness and shape, is located with the FRM method using five neighboring mask areas, and the iris within the eye is tracked with the FPDP method. The experimental results show that the proposed methods track the center position more accurately than the pixel-average-coordinate method; a sketch of that baseline follows.
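
The pixel-average-coordinate baseline mentioned in the abstract can be sketched as follows; the eye-region crop and threshold value are assumptions, and the FRM and FPDP methods themselves are not reproduced here.

```python
import cv2
import numpy as np

eye = cv2.imread("eye_region.png", cv2.IMREAD_GRAYSCALE)  # hypothetical crop
_, binary = cv2.threshold(eye, 60, 255, cv2.THRESH_BINARY_INV)

ys, xs = np.nonzero(binary)                 # coordinates of dark (iris) pixels
if len(xs):
    center = (xs.mean(), ys.mean())         # pixel-average coordinate estimate
    print("estimated iris center:", center)
```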

A photo-based realistic facial animation (한 장의 포토기반 실사 수준 얼굴 애니메이션)

  • Kim, Jaehwan;Jeong, Il-Kwon
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2011.05a
    • /
    • pp.51-52
    • /
    • 2011
  • In this paper we introduce a novel, complete framework for constructing realistic facial animations from just one facial photo as input. Our approach operates in 2D photo space, not 3D space. Moreover, we utilize a computer vision technique (digital matting) as well as conventional image processing methods (image warping and texture synthesis) to express more realistic facial animations (a sketch of the warping step follows this entry). Simulated results show that our scheme produces high-quality facial animations very efficiently.

  • PDF
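
As a small illustration of the 2D image-warping step, the sketch below applies an affine warp between assumed source and target feature triangles; the paper's actual warping model, and its matting and texture-synthesis steps, may differ.

```python
import cv2
import numpy as np

photo = cv2.imread("face_photo.png")                       # hypothetical input
src = np.float32([[120, 150], [200, 150], [160, 230]])     # e.g. eyes and mouth
dst = np.float32([[120, 150], [200, 150], [160, 245]])     # mouth moved down

# Warp the photo so the source triangle maps onto the target triangle,
# producing one animation frame in 2D photo space.
M = cv2.getAffineTransform(src, dst)
warped = cv2.warpAffine(photo, M, (photo.shape[1], photo.shape[0]))
cv2.imwrite("warped_frame.png", warped)
```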

Personalized Facial Expression Recognition System using Fuzzy Neural Networks and robust Image Processing (퍼지 신경망과 강인한 영상 처리를 이용한 개인화 얼굴 표정 인식 시스템)

  • 김대진;김종성;변증남
    • Proceedings of the IEEK Conference
    • /
    • 2002.06c
    • /
    • pp.25-28
    • /
    • 2002
  • This paper introduces a personalized facial expression recognition system. Many previous works on facial expression recognition focus on the six formal universal facial expressions. However, it is very difficult for an ordinary person to make such expressions without considerable effort and training, and personalized services are now a major focus of researchers in various fields. Thus, we propose a novel facial expression recognition system based on fuzzy neural networks and robust image processing (a small sketch of the fuzzy-logic component follows this entry).

  • PDF
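
As a rough illustration of the fuzzy-logic side of such a system (not the authors' fuzzy neural network), the sketch below maps an assumed feature, a normalized mouth-corner displacement, to expression memberships with triangular membership functions; the feature and ranges are hypothetical.

```python
def tri(x, a, b, c):
    """Triangular membership: 0 at a and c, 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def expression_memberships(mouth_raise):
    # Personalized ranges would be learned per user; these are placeholders.
    return {
        "neutral": tri(mouth_raise, -0.2, 0.0, 0.2),
        "smile":   tri(mouth_raise,  0.1, 0.5, 0.9),
    }

print(expression_memberships(0.4))
```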

Comparison of Mandible Changes on Three-Dimensional Computed Tomography Images After Mandibular Surgery in Facial Asymmetry Patients (안면 비대칭 환자의 하악골 수술 후 하악골 변화에 대한 3차원 CT 영상 비교)

  • Kim, Mi-Ryoung;Chin, Byung-Rho
    • Journal of Yeungnam Medical Science
    • /
    • v.25 no.2
    • /
    • pp.108-116
    • /
    • 2008
  • Background: When surgeons plan mandibular orthognathic surgery for patients with skeletal class III facial asymmetry, they must consider the exact surgical method for correcting the asymmetry. Three-dimensional (3D) CT imaging is efficient in depicting specific structures in the craniofacial area; it reproduces actual measurements by minimizing errors from patient movement and allows for image magnification. Due to the rapid development of digital image technology and the expansion of the treatment range, rapid progress has been made in the study of three-dimensional facial skeleton analysis. The purpose of this study was to compare, on 3D CT images, mandible changes after mandibular surgery in facial asymmetry patients. Materials and methods: This study included 7 patients who underwent 3D CT before and after correction of facial asymmetry in the oral and maxillofacial surgery department of Yeungnam University Hospital between August 2002 and November 2005. Patients included 2 males and 5 females, with ages ranging from 16 to 30 years (average 21.4 years). Frontal CT images were obtained before and after surgery, and changes in mandible angle and length were measured. Results: When the measurements obtained before and after mandibular surgery were compared, correction of the facial asymmetry was identified on the postoperative images. The mean difference between the right and left mandibular angles was $7^{\circ}$ before surgery and $1.5^{\circ}$ after surgery. The right-to-left mandibular length ratio subtracted from 1 was 0.114 before surgery and 0.036 after surgery (a worked example of these two measures follows this entry). The differences were analyzed using a nonparametric test, the Wilcoxon signed ranks test (p<0.05). Conclusion: The system that has been developed produces an accurate three-dimensional representation of the skull, upon which individualized surgery of the skull and jaws is easily planned. The system also permits accurate measurement and monitoring of postsurgical changes to the face and jaws through reproducible and noninvasive means.

  • PDF
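
The two asymmetry measures reported in the results can be computed as below; the input values are illustrative, not patient data.

```python
def angle_difference(right_angle_deg, left_angle_deg):
    # Difference between the right and left mandibular angles.
    return abs(right_angle_deg - left_angle_deg)

def length_asymmetry(right_len, left_len):
    # Right-to-left mandibular length ratio subtracted from 1.
    return abs(1.0 - right_len / left_len)

print(angle_difference(127.0, 120.0))   # e.g. 7.0 degrees
print(length_asymmetry(52.0, 58.0))     # e.g. about 0.10
```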

Detection of Facial Feature Regions by Manipulation of DCT Coefficients (DCT 계수를 이용한 얼굴 특징 영역의 검출)

  • Lee, Boo-Hyung;Ryu, Jang-Ryeol
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.8 no.2
    • /
    • pp.267-272
    • /
    • 2007
  • This paper proposes a new approach for detecting facial feature regions using the characteristic of the DCT (discrete cosine transform) that it concentrates the energy of an image into the lower-frequency coefficients. Since facial features correspond to relatively high frequencies in a face image, applying the inverse DCT after removing the coefficients corresponding to the lower frequencies generates an image in which the facial feature regions are emphasized. The facial regions can then be easily segmented from the inverse-transformed image using any differential operator, and within the segmented region the facial features can be found using a face template (a minimal sketch of this idea follows this entry). The proposed algorithm has been tested with images from MIT's CBCL database and the Yale Face Database B. The experimental results show superior performance under variations of image size and lighting conditions.

  • PDF
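
A minimal sketch of the described idea, zeroing the low-frequency DCT coefficients and inverting the transform so that high-frequency feature regions stand out, is shown below; the image path, cutoff, and choice of differential operator are assumptions.

```python
import cv2
import numpy as np

face = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
h, w = face.shape
face = face[:h - h % 2, :w - w % 2]       # cv2.dct needs even dimensions

coeffs = cv2.dct(face)
cutoff = 8                                # assumed low-frequency block size
coeffs[:cutoff, :cutoff] = 0.0            # remove lowest-frequency coefficients
emphasized = cv2.idct(coeffs)

# Any differential operator can now segment the emphasized feature regions.
edges = np.abs(cv2.Sobel(emphasized, cv2.CV_32F, 1, 1))
out = cv2.normalize(edges, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite("feature_emphasized.png", out)
```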