• Title/Summary/Keyword: Facial images

Search results: 632

A COMPARATIVE STUDY OF THREE DIMENSIONAL RECONSTRUCTIVE IMAGES USING COMPUTED TOMOGRAMS OF FACIAL BONE INJURIES (안면골 외상환자의 전산화단층상을 이용한 삼차원재구성상의 비교연구)

  • Choi Eun-Suk;Koh Kwang-Joon
    • Journal of Korean Academy of Oral and Maxillofacial Radiology
    • /
    • v.24 no.2
    • /
    • pp.413-423
    • /
    • 1994
  • The purpose of this study was to clarify spatial relationships during presurgical examination and to aid surgical planning and postoperative evaluation in patients with facial bone injuries. Three-dimensional images of facial bone fractures were reconstructed with a computed image analysis system and with the three-dimensional reconstruction program integrated into the computed tomography unit. The results were as follows: 1. Serial conventional computed tomograms were valuable in accurately depicting the facial bone injuries, and the three-dimensional reconstructive images provided an overall view. 2. The degree of deterioration of spatial resolution was proportional to slice thickness. 3. Facial bone fractures were demonstrated most distinctly on inferoanterior views of the three-dimensional reconstructive images. 4. Although fracture lines could be diagnosed on the three-dimensional reconstructive images, maxillary fractures were difficult to identify. 5. Zygomatic fractures could be diagnosed equally well with the computed image analysis system and with the integrated three-dimensional reconstruction program. 6. Mandibular fractures could likewise be diagnosed equally well with both systems.

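The entry above reconstructs three-dimensional facial-bone surfaces from serial CT slices. A minimal sketch of that general pipeline, assuming the slices are already loaded as NumPy arrays in Hounsfield units and using scikit-image's marching cubes for surface extraction; the bone threshold, slice spacing, and toy data are illustrative assumptions, not values from the paper:

```python
import numpy as np
from skimage import measure

def reconstruct_bone_surface(slices, spacing=(3.0, 1.0, 1.0), bone_hu=300):
    """Stack CT slices into a volume and extract a bone iso-surface mesh.

    slices  : list of 2-D arrays in Hounsfield units (one per CT slice)
    spacing : (slice thickness, row, col) voxel size in mm -- illustrative
    bone_hu : HU level treated as the bone surface -- illustrative
    """
    volume = np.stack(slices, axis=0).astype(np.float32)
    # Marching cubes returns the vertices and faces of the iso-surface at bone_hu.
    verts, faces, normals, _ = measure.marching_cubes(volume, level=bone_hu,
                                                      spacing=spacing)
    return verts, faces, normals

# Toy volume: a bright "bone" block inside soft tissue, 10 slices of 64x64.
toy = [np.full((64, 64), 40.0) for _ in range(10)]
for s in toy[3:7]:
    s[20:40, 20:40] = 1000.0
verts, faces, _ = reconstruct_bone_surface(toy)
print(verts.shape, faces.shape)
```

Note that the spacing tuple makes the study's second finding concrete: the thicker the slice (first spacing component), the coarser the reconstructed surface along that axis.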

A Study on Facial Feature' Morphological Information Extraction and Classification for Avatar Generation (아바타 생성을 위한 이목구비 모양 특징정보 추출 및 분류에 관한 연구)

  • 박연출
    • Journal of the Korea Computer Industry Society
    • /
    • v.4 no.10
    • /
    • pp.631-642
    • /
    • 2003
  • We propose an approach that extracts facial features from a person's photograph and classifies them into predefined classes, according to prepared classification standards, in order to generate that person's avatar. Feature extraction and classification are performed separately for the eyes, nose, lips, and jaw, and classification standards are presented for each feature. The extracted features are compared with the features of facial component images drawn by professional designers, and the most similar component images are then mapped onto the avatar's vector face.

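The entry above matches each extracted facial component against a library of designer-drawn component images and keeps the closest one. A minimal sketch of that matching idea, assuming each component has already been reduced to a small shape-descriptor vector; the descriptor values and the library contents are hypothetical, not taken from the paper:

```python
import numpy as np

# Hypothetical shape descriptors per component, e.g. (width/height ratio, curvature, tilt).
designer_library = {
    "eyes": {"round": np.array([1.8, 0.9, 0.0]),
             "narrow": np.array([3.2, 0.3, 0.1])},
    "lips": {"full": np.array([2.0, 0.8, 0.0]),
             "thin": np.array([3.5, 0.3, 0.0])},
}

def classify_component(component, descriptor):
    """Return the designer class whose descriptor is nearest to the extracted one."""
    classes = designer_library[component]
    return min(classes, key=lambda name: np.linalg.norm(classes[name] - descriptor))

def build_avatar(extracted):
    """Map each extracted component descriptor to its closest designer class."""
    return {part: classify_component(part, desc) for part, desc in extracted.items()}

# Descriptors extracted from a photo (illustrative values).
photo_features = {"eyes": np.array([3.0, 0.4, 0.05]),
                  "lips": np.array([2.1, 0.7, 0.0])}
print(build_avatar(photo_features))   # {'eyes': 'narrow', 'lips': 'full'}
```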

Detection of Facial Region and features from Color Images based on Skin Color and Deformable Model (스킨 컬러와 변형 모델에 기반한 컬러영상으로부터의 얼굴 및 얼굴 특성영역 추출)

  • 민경필;전준철;박구락
    • Journal of Internet Computing and Services
    • /
    • v.3 no.6
    • /
    • pp.13-24
    • /
    • 2002
  • This paper presents an automatic approach to detecting the face and facial features in face images based on color information and a deformable model. Skin color information has been widely used for face and facial feature detection because it is effective for object recognition and carries a low computational burden. In this paper, we propose a method that compensates for varying lighting conditions and uses the transformed YCbCr color model to detect candidate face and facial feature regions in color images. The detected face and facial feature areas are then used as the initial condition of an active contour model to extract optimal boundaries of the face and facial features, thereby resolving the initialization problem that arises when active contours are used. Experimental results show the efficiency of the proposed method. The extracted face and facial feature information will be used for face recognition and as a facial feature descriptor.

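The entry above first finds skin-colored candidate regions in YCbCr space and then refines the face boundary with an active contour. A minimal sketch of those two steps, assuming an RGB image array, the commonly cited Cb/Cr skin ranges, and a circular initial snake; the thresholds, snake parameters, and initialization are illustrative assumptions, not the paper's exact values:

```python
import numpy as np
from skimage.segmentation import active_contour
from skimage.filters import gaussian

def skin_mask(rgb):
    """Threshold the Cb/Cr channels (ITU-R BT.601 conversion) to get a skin mask."""
    r, g, b = [rgb[..., i].astype(np.float32) for i in range(3)]
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (cb > 77) & (cb < 127) & (cr > 133) & (cr < 173)

def refine_face_boundary(gray, center, radius, n_points=200):
    """Refine an initial circular boundary around the skin region with a snake."""
    t = np.linspace(0, 2 * np.pi, n_points)
    init = np.column_stack([center[0] + radius * np.sin(t),
                            center[1] + radius * np.cos(t)])  # (row, col) pairs
    return active_contour(gaussian(gray, sigma=3),
                          init, alpha=0.015, beta=10, gamma=0.001)

# Usage sketch: derive the snake's initial circle from the skin mask's extent.
# rgb = ...  # H x W x 3 uint8 image
# mask = skin_mask(rgb)
# rows, cols = np.nonzero(mask)
# center = (rows.mean(), cols.mean())
# radius = 0.6 * max(np.ptp(rows), np.ptp(cols))
# snake = refine_face_boundary(rgb.mean(axis=2) / 255.0, center, radius)
```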

Face recognition using a sparse population coding model for receptive field formation of the simple cells in the primary visual cortex (주 시각피질에서의 단순세포 수용영역 형성에 대한 성긴 집단부호 모델을 이용한 얼굴이식)

  • 김종규;장주석;김영일
    • Journal of the Korean Institute of Telematics and Electronics C
    • /
    • v.34C no.10
    • /
    • pp.43-50
    • /
    • 1997
  • In this paper, we present a method for recognizing face images using a sparse population code, a learning model of the receptive fields of simple cells in the primary visual cortex. Twenty front-view facial images from twenty persons were used for training, and 200 varied facial images, 20 per person, were used for testing. The correct recognition rate was 100% for the front-view test images alone, which included images with spectacles and with various expressions, and 90% on average over all input images, which also included rotated faces. We analyzed the effect of the nonlinear function that determines sparseness, and compared the recognition rate of the sparse population code with that of eigenvectors (eigenfaces), a compact code that contrasts with the sparse population code.

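The entry above compares a sparse population code against eigenfaces (PCA) for face recognition. A minimal sketch of the eigenface baseline used in such a comparison, with nearest-neighbor matching in the projected space; the image size, number of components, and random data are placeholders, not the paper's 20-person dataset:

```python
import numpy as np

def fit_eigenfaces(train, n_components=20):
    """PCA on flattened face images via SVD; returns the mean face and eigenfaces."""
    mean = train.mean(axis=0)
    centered = train - mean
    # Rows of vt are the principal directions ("eigenfaces").
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def project(faces, mean, eigenfaces):
    return (faces - mean) @ eigenfaces.T

def recognize(test, train_proj, train_labels, mean, eigenfaces):
    """Nearest neighbor in eigenface space."""
    codes = project(test, mean, eigenfaces)
    dists = np.linalg.norm(codes[:, None, :] - train_proj[None, :, :], axis=2)
    return train_labels[np.argmin(dists, axis=1)]

# Placeholder data: 20 people, one 32x32 training image each, flattened.
rng = np.random.default_rng(0)
train = rng.normal(size=(20, 32 * 32))
labels = np.arange(20)
mean, eig = fit_eigenfaces(train, n_components=15)
train_proj = project(train, mean, eig)
print(recognize(train + 0.05 * rng.normal(size=train.shape),
                train_proj, labels, mean, eig))
```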

Image-based Realistic Facial Expression Animation

  • Yang, Hyun-S.;Han, Tae-Woo;Lee, Ju-Ho
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 1999.06a
    • /
    • pp.133-140
    • /
    • 1999
  • In this paper, we propose an image-based three-dimensional modeling method for realistic facial expression. In the proposed method, real facial images are used to deform a generic three-dimensional mesh model, and the deformed model is animated to generate facial expression animation. First, we take several pictures of the same person from several view angles. We then project a three-dimensional face model onto the plane of each facial image and match the projected model with each image; the results are combined to generate a deformed three-dimensional model. Feature-based image metamorphosis is used to match the projected models with the images. We then create a synthetic image from the two-dimensional images of the person's face, and this synthetic image is texture-mapped onto the cylindrical projection of the three-dimensional model. We also propose a muscle-based animation technique to generate realistic facial expression animations; this approach facilitates control of the animation. Lastly, we show animation results for the six representative facial expressions.
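
The entry above texture-maps a synthesized face image onto the cylindrical projection of the deformed 3-D model. A minimal sketch of how a cylindrical projection turns mesh vertices into texture coordinates, assuming the head is roughly centered on the vertical (y) axis; the vertex data and normalization are illustrative, not the paper's model:

```python
import numpy as np

def cylindrical_uv(vertices):
    """Map 3-D vertices (x, y, z) to (u, v) texture coordinates.

    u is the angle around the vertical (y) axis mapped to [0, 1];
    v is the normalized height -- the standard cylindrical unwrap.
    """
    x, y, z = vertices[:, 0], vertices[:, 1], vertices[:, 2]
    theta = np.arctan2(z, x)                        # angle around the y axis
    u = (theta + np.pi) / (2 * np.pi)               # [0, 1] around the cylinder
    v = (y - y.min()) / (y.max() - y.min() + 1e-9)  # [0, 1] bottom to top
    return np.column_stack([u, v])

# Illustrative vertices of a crude head-like point set.
rng = np.random.default_rng(1)
verts = rng.normal(scale=[40.0, 60.0, 40.0], size=(500, 3))
uv = cylindrical_uv(verts)
print(uv.min(axis=0), uv.max(axis=0))   # both coordinates lie in [0, 1]
```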

Effect of frontal facial type and sex on preferred chin projection

  • Choi, Jin-Young;Kim, Taeyun;Kim, Hyung-Mo;Lee, Sang-Hoon;Cho, Il-sik;Baek, Seung-Hak
    • The korean journal of orthodontics
    • /
    • v.47 no.2
    • /
    • pp.108-117
    • /
    • 2017
  • Objective: To investigate the effects of frontal facial type (FFT) and sex on preferred chin projection (CP) in three-dimensional (3D) facial images. Methods: Six 3D facial images were acquired using a 3D facial scanner (euryprosopic [Eury-FFT], mesoprosopic [Meso-FFT], and leptoprosopic [Lepto-FFT] for each sex). After normal CP in each 3D facial image was set to 10° of the facial profile angle (glabella-subnasale-pogonion), CPs were morphed by gradations of 2° from normal (moderately protrusive [6°], slightly protrusive [8°], slightly retrusive [12°], and moderately retrusive [14°]). Seventy-five dental students (48 men and 27 women) were asked to rate the CPs (6°, 8°, 10°, 12°, and 14°) from the most to least preferred in each 3D image. Statistical analyses included the Kolmogorov-Smirnov test, Kruskal-Wallis test, and Bonferroni correction. Results: No significant difference was observed in the distribution of preferred CP in the same FFT between male and female evaluators. In Meso-FFT, the normal CP was the most preferred without any sex difference. However, in Eury-FFT, the slightly protrusive CP was favored in male 3D images, but the normal CP was preferred in female 3D images. In Lepto-FFT, the normal CP was favored in male 3D images, whereas the slightly retrusive CP was favored in female 3D images. The mean preferred CP angle differed significantly according to FFT (Eury-FFT: male, 8.7°, female, 9.9°; Meso-FFT: male, 9.8°, female, 10.7°; Lepto-FFT: male, 10.8°, female, 11.4°; p < 0.001). Conclusions: Our findings might serve as guidelines for setting the preferred CP according to FFT and sex.
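
The entry above compares preferred chin-projection angles across frontal facial types with the Kruskal-Wallis test and Bonferroni-corrected pairwise comparisons. A minimal sketch of that kind of analysis in SciPy, using synthetic ratings only loosely centered on the male means reported above rather than the study's actual data:

```python
import numpy as np
from itertools import combinations
from scipy import stats

# Synthetic preferred chin-projection angles (degrees) per frontal facial type.
rng = np.random.default_rng(42)
groups = {
    "Eury-FFT":  rng.normal(8.7, 1.0, 48),
    "Meso-FFT":  rng.normal(9.8, 1.0, 48),
    "Lepto-FFT": rng.normal(10.8, 1.0, 48),
}

# Omnibus test across the three facial types.
h_stat, p_value = stats.kruskal(*groups.values())
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_value:.4g}")

# Pairwise Mann-Whitney U tests with Bonferroni correction.
pairs = list(combinations(groups, 2))
alpha_corrected = 0.05 / len(pairs)
for a, b in pairs:
    _, p = stats.mannwhitneyu(groups[a], groups[b], alternative="two-sided")
    print(f"{a} vs {b}: p = {p:.4g}, "
          f"significant at corrected alpha {alpha_corrected:.4f}: {p < alpha_corrected}")
```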

Detection of Facial Direction using Facial Features (얼굴 특징 정보를 이용한 얼굴 방향성 검출)

  • Park Ji-Sook;Dong Ji-Youn
    • Journal of Internet Computing and Services
    • /
    • v.4 no.6
    • /
    • pp.57-67
    • /
    • 2003
  • The recent rapid development of multimedia and optical technologies has brought great attention to application systems that process facial image features. Previous research in facial image processing has focused mainly on recognizing human faces and analyzing facial expressions in frontal face images; little research has been carried out on image-based detection of face direction. Moreover, existing approaches to detecting face direction, which normally use sequential images captured by a single camera, have the limitation that a frontal image must be given before any other images. In this paper, we propose a method that detects face direction using facial features, specifically the facial trapezoid defined by the two eyes and the lower lip. The proposed method forms a facial direction formula, defined from statistical data on the ratio of the right and left areas of the facial trapezoid, to identify whether the face is directed toward the right or the left. The proposed method can be effectively used in automatic photo arrangement systems, which often need to set different left or right margins for a photo according to the face direction of the person in the photo.

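The entry above decides face direction from the ratio of the left and right areas of a region spanned by the two eyes and the lower lip. The paper's exact trapezoid construction and statistical formula are not reproduced here; this minimal sketch uses the triangle spanned by the three landmarks, split by the vertical line through the lip, and an illustrative ratio threshold, with the mapping from "larger side" to turn direction stated as an assumption:

```python
import numpy as np

def polygon_area(points):
    """Shoelace formula for the area of a simple polygon given as (x, y) rows."""
    x, y = points[:, 0], points[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def face_direction(left_eye, right_eye, lower_lip, threshold=1.15):
    """Compare the areas left and right of the vertical line through the lip.

    Landmarks are (x, y) pixel coordinates.  The side-to-direction mapping and
    the threshold are illustrative assumptions, not the paper's calibration.
    """
    # Point on the eye-to-eye segment directly above the lower lip.
    t = (lower_lip[0] - left_eye[0]) / (right_eye[0] - left_eye[0])
    split = np.array([lower_lip[0],
                      left_eye[1] + t * (right_eye[1] - left_eye[1])])
    left_area = polygon_area(np.array([left_eye, split, lower_lip]))
    right_area = polygon_area(np.array([split, right_eye, lower_lip]))
    ratio = left_area / (right_area + 1e-9)
    if ratio > threshold:
        return "turned toward the viewer's right"
    if ratio < 1 / threshold:
        return "turned toward the viewer's left"
    return "frontal"

print(face_direction(np.array([100.0, 120.0]),
                     np.array([200.0, 120.0]),
                     np.array([165.0, 200.0])))
```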

Face and Facial Feature Detection under Pose Variation of User Face for Human-Robot Interaction (인간-로봇 상호작용을 위한 자세가 변하는 사용자 얼굴검출 및 얼굴요소 위치추정)

  • Park Sung-Kee;Park Mignon;Lee Taigun
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.11 no.1
    • /
    • pp.50-57
    • /
    • 2005
  • We present a simple and effective method for detecting the face and facial features under pose variation of the user's face in complex backgrounds, for human-robot interaction. Our approach is flexible in that it can be applied to both color and gray-scale facial images, and it is feasible for detecting facial features in quasi real time. Based on the intensity characteristics of the neighborhoods of facial features, a new directional template for facial features is defined. By applying this template to the input facial image, a novel edge-like blob map (EBM) with multiple intensity strengths is constructed. Regardless of the color information of the input image, we show that the locations of the face and its features, i.e., the two eyes and the mouth, can be successfully estimated using this map and conditions derived from facial characteristics. Without information about the facial area boundary, the final candidate face region is determined from both the obtained locations of the facial features and weighted correlation values with standard facial templates. Experimental results on many color images and on well-known gray-level face database images confirm the usefulness of the proposed algorithm.
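
The entry above scores candidate face regions by their correlation with standard facial templates. A minimal sketch of the underlying normalized cross-correlation score between a candidate patch and a template, in NumPy; the template and candidate arrays are placeholders, and the paper's directional template and edge-like blob map are not reproduced here:

```python
import numpy as np

def ncc(region, template):
    """Zero-mean normalized cross-correlation between two equally sized patches.

    Returns a value in [-1, 1]; higher means the candidate looks more like the
    template regardless of brightness and contrast offsets.
    """
    a = region.astype(np.float64) - region.mean()
    b = template.astype(np.float64) - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def best_candidate(candidates, template):
    """Pick the candidate region that correlates best with the face template."""
    scores = [ncc(c, template) for c in candidates]
    return int(np.argmax(scores)), scores

# Placeholder 16x16 "template" and two candidates: one similar, one random.
rng = np.random.default_rng(3)
template = rng.normal(size=(16, 16))
similar = template + 0.2 * rng.normal(size=(16, 16))
unrelated = rng.normal(size=(16, 16))
idx, scores = best_candidate([unrelated, similar], template)
print(idx, [round(s, 2) for s in scores])   # index 1 should score highest
```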

The Influence of the Eyebrow Make-up on Facial Image (눈썹화장이 얼굴이미지에 미치는 영향)

  • Gang, Eun-Ju
    • Journal of the Korean Society of Fashion and Beauty
    • /
    • v.3 no.2 s.2
    • /
    • pp.31-38
    • /
    • 2005
  • Make-up changes facial images; in particular, eyebrow make-up is the part that changes expression most easily and effectively. While color make-up helps women produce their desired image with their favorite colors, eyebrow make-up is a hidden factor in giving a clear impression to others. This study therefore related facial type, an important factor in determining facial image, to the eyebrows, examined the image produced by eyebrow make-up and how it changes with facial type, and aimed to help produce a more effective facial image through eyebrow make-up suited to one's facial type. It was found that eyebrow make-up was a major factor in creating a better facial impression and image and in complementing the weaknesses of a facial type. A strong impression given by a facial type can be softened, or in the worst case made to look foolish, depending on the type of eyebrow make-up. Eyebrow make-up conveys a charming image when the angle of the eyebrow is steep, a heavy image when the eyebrow is horizontal, a cold image when the eyebrow tail rises, and a simple, dull image when it lowers. The image created by eyebrow make-up is thus governed by several factors, including the angle and direction of the eyebrow. Consequently, the most effective eyebrow make-up takes into account an individual's facial type, the images of their eyes, nose, and mouth, and the factors determining the angle, direction, and color of the eyebrows.

Emotion Recognition Method of Facial Image using PCA (PCA을 이용한 얼굴 표정의 감정 인식 방법)

  • Kim, Ho-Duck;Yang, Hyun-Chang;Park, Chang-Hyun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.16 no.6
    • /
    • pp.772-776
    • /
    • 2006
  • Most research on facial image recognition has studied full-face images. The representative parts affecting facial image recognition are the eyes and the mouth, so researchers have concentrated on the eyes, eyebrows, and mouth in facial images. However, rapid changes of the pupils are difficult to capture for most people in front of a camera in everyday life, and many people wear glasses. In this paper, we therefore apply Principal Component Analysis (PCA) to facial image recognition in the case where the eye region is covered (the blindfold case).
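
The entry above applies PCA to facial images for emotion recognition, including cases where the eye region is unusable. A minimal sketch of that kind of pipeline with scikit-learn, using random placeholder images and labels rather than any real expression dataset; the component count and the k-NN classifier are illustrative choices, not the paper's exact setup:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Placeholder data: 120 flattened 32x32 face images with 4 emotion labels.
rng = np.random.default_rng(7)
X = rng.normal(size=(120, 32 * 32))
y = rng.integers(0, 4, size=120)          # 0..3 stand in for emotion classes

# PCA compresses each image to a few principal components ("eigen-features"),
# and a simple k-NN classifier assigns an emotion label in that reduced space.
model = make_pipeline(PCA(n_components=20), KNeighborsClassifier(n_neighbors=3))
model.fit(X[:100], y[:100])
print(model.predict(X[100:]))             # predicted labels for held-out images
```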