• Title/Summary/Keyword: Facial images

Search results: 637

Web-based 3D Face Modeling System (웹기반 3차원 얼굴 모델링 시스템)

  • 김응곤;송승헌
    • Journal of the Korea Institute of Information and Communication Engineering / v.5 no.3 / pp.427-433 / 2001
  • This paper proposes a web-based 3-dimensional face modeling system that builds a realistic facial model efficiently without the 3D scanner or camera used in traditional methods. Without expensive image-input equipment, a 3D model can be created from only front and side images. The system produces 3D facial models through a facial modeling server on the WWW, independent of specific platforms and software. It is implemented using the Java 3D API, which provides the functions and conveniences of mature graphics libraries, and follows a client/server architecture consisting of a user connection module and a 3D facial model creation module. Using only a web browser, clients connect to the facial modeling server, input two facial photographs, detect the feature points, and then create a 3D facial model by modifying a generic facial model with those points according to the given procedure.

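The feature-point-driven modification of a generic face model described in this abstract can be illustrated with a minimal sketch. The mesh, landmark indices, and detected feature positions below are hypothetical placeholders, and the nearest-landmark deformation is a simplification, not the paper's actual method or API.

```python
import numpy as np

def deform_generic_model(vertices, landmark_idx, target_points):
    """Move a generic face mesh toward detected feature points.

    vertices:      (N, 3) generic model vertex positions
    landmark_idx:  (K,) indices of vertices corresponding to feature points
    target_points: (K, 3) feature-point positions estimated from the
                   front/side photographs (assumed already in model space)
    """
    displacements = target_points - vertices[landmark_idx]           # (K, 3)

    # Simplistic deformation: each vertex follows the displacement of its
    # nearest landmark. Real systems use smoother interpolation (e.g. RBF).
    dists = np.linalg.norm(
        vertices[:, None, :] - vertices[landmark_idx][None, :, :], axis=2
    )                                                                # (N, K)
    nearest = dists.argmin(axis=1)                                   # (N,)
    return vertices + displacements[nearest]

# toy usage with random data
verts = np.random.rand(1000, 3)
lm_idx = np.array([10, 200, 500, 750])
targets = verts[lm_idx] + 0.05 * np.random.randn(4, 3)
new_verts = deform_generic_model(verts, lm_idx, targets)
```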

Reconstruction of High-Resolution Facial Image Based on Recursive Error Back-Projection of Top-Down Machine Learning (하향식 기계학습의 반복적 오차 역투영에 기반한 고해상도 얼굴 영상의 복원)

  • Park, Jeong-Seon;Lee, Seong-Whan
    • Journal of KIISE: Software and Applications / v.34 no.3 / pp.266-274 / 2007
  • This paper proposes a new method for reconstructing a high-resolution facial image from a low-resolution facial image based on top-down machine learning and recursive error back-projection. A face is represented by a linear combination of prototypes of shape and of texture. Given the shape and texture information of each pixel in a low-resolution facial image, the optimal coefficients for the linear combinations of shape prototypes and of texture prototypes are estimated by solving least-squares minimizations. A high-resolution facial image is then obtained by applying those coefficients to the high-resolution prototypes. In addition, a recursive error back-projection procedure is applied to improve the reconstruction accuracy of the high-resolution facial image. The encouraging results show that the method can improve face recognition performance when used to reconstruct high-resolution facial images from low-resolution images captured at a distance.
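
The prototype-combination step described above amounts to an ordinary least-squares fit followed by reuse of the coefficients on a high-resolution basis. The sketch below assumes the low-resolution face has already been converted into a shape or texture vector; the variable names and toy data are illustrative, not from the paper.

```python
import numpy as np

def reconstruct_high_res(lr_vec, lr_prototypes, hr_prototypes):
    """Estimate a high-resolution vector (shape or texture) from a
    low-resolution one via a linear combination of prototypes.

    lr_vec:         (d_lr,)   low-resolution shape or texture vector
    lr_prototypes:  (d_lr, m) low-resolution prototype basis
    hr_prototypes:  (d_hr, m) corresponding high-resolution prototypes
    """
    # Least-squares coefficients: argmin_c || lr_prototypes @ c - lr_vec ||^2
    coeffs, *_ = np.linalg.lstsq(lr_prototypes, lr_vec, rcond=None)
    # Reuse the same coefficients on the high-resolution basis
    return hr_prototypes @ coeffs

# toy usage
d_lr, d_hr, m = 64, 1024, 20
P_lr = np.random.randn(d_lr, m)
P_hr = np.random.randn(d_hr, m)
x_lr = P_lr @ np.random.randn(m)
x_hr = reconstruct_high_res(x_lr, P_lr, P_hr)
```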

Quantitative Analysis of the Facial Nerve Using Contrast-Enhanced Three Dimensional FLAIR-VISTA Imaging in Pediatric Bell's Palsy

  • Seo, Jin Hee;You, Sun Kyoung;Lee, In Ho;Lee, Jeong Eun;Lee, So Mi;Cho, Hyun-Hae
    • Investigative Magnetic Resonance Imaging / v.19 no.3 / pp.162-167 / 2015
  • Purpose: To evaluate the usefulness of quantitative analysis of the facial nerve using contrast-enhanced three-dimensional (CE 3D) fluid-attenuated inversion recovery volume isotropic turbo spin echo acquisition (FLAIR-VISTA) for the diagnosis of Bell's palsy in pediatric patients. Materials and Methods: Twelve patients (24 nerves) with unilateral acute facial nerve palsy underwent MRI from March 2014 through March 2015. The unaffected sides were included as a control group. First, for quantitative analysis, the signal intensity (SI) and relative SI (RSI) of the canalicular, labyrinthine, geniculate ganglion, tympanic, and mastoid segments of the facial nerve on CE 3D FLAIR images were measured using regions of interest (ROI). Second, CE 3D FLAIR and CE T1-SE images were analyzed to compare their diagnostic performance by visual assessment (VA). The sensitivity, specificity, and accuracy of RSI measurement and VA were compared. Results: The absolute SI of the canalicular and mastoid segments and the sum of the five mean SIs (total SI) were higher in the palsy group than in the control group, but without significant differences. The RSI of the canalicular segment and of the total SI were significantly correlated with the symptomatic side (P = 0.028 and 0.015). In 11/12 (91.6%) patients, the RSI of the total SI resulted in accurate detection of the affected side. The sensitivity, specificity, and accuracy for detecting Bell's palsy were higher with RSI measurement than with VA of CE 3D FLAIR images, while those with VA of CE T1-SE images were higher than those with VA of CE 3D FLAIR images. Conclusion: Quantitative analysis of the facial nerve using CE 3D FLAIR imaging can be useful for increasing diagnostic performance in children with Bell's palsy when diagnosis by VA alone is difficult. With regard to VA, the diagnostic performance of CE T1-SE imaging is superior to that of CE 3D FLAIR imaging in children. Further studies with larger populations are necessary.
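
As a rough illustration of the quantitative step, the sketch below computes the mean signal intensity inside an ROI and a relative value against the unaffected side. The abstract does not give the exact RSI formula, so the affected/unaffected ratio here is only an assumed definition, and the ROI masks are toy placeholders.

```python
import numpy as np

def mean_si(image, roi_mask):
    """Mean signal intensity of a CE 3D FLAIR slice inside a boolean ROI mask."""
    return float(image[roi_mask].mean())

def relative_si(si_affected, si_unaffected):
    """Assumed definition of relative SI: ratio of affected to unaffected side.
    (The paper may normalize differently; this is only an illustration.)"""
    return si_affected / si_unaffected

# toy usage: one facial-nerve segment measured on each side
img = np.random.rand(64, 64)
roi_left = np.zeros_like(img, dtype=bool);  roi_left[20:25, 30:35] = True
roi_right = np.zeros_like(img, dtype=bool); roi_right[20:25, 10:15] = True
rsi = relative_si(mean_si(img, roi_left), mean_si(img, roi_right))
```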

Facial Expression Recognition using 1D Transform Features and Hidden Markov Model

  • Jalal, Ahmad;Kamal, Shaharyar;Kim, Daijin
    • Journal of Electrical Engineering and Technology / v.12 no.4 / pp.1657-1662 / 2017
  • Facial expression recognition systems using video devices have emerged as an important component of natural human-machine interfaces, contributing to practical applications such as security systems, behavioral science, and clinical practice. This work presents a new method to analyze, represent, and recognize human facial expressions from a sequence of facial images. The overall procedure of the proposed framework includes accurate face detection to remove background and noise effects from the raw image sequences, alignment of each image using vertex mask generation, and extraction of 1D transform features, which are then reduced by principal component analysis. Finally, the reduced features are trained and tested using a Hidden Markov Model (HMM). Experimental evaluation on two public facial expression video datasets, Cohn-Kanade and AT&T, achieved recognition rates of 96.75% and 96.92%, respectively; these results show the superiority of the proposed approach over state-of-the-art methods.
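
The PCA-plus-HMM stage of such a pipeline can be sketched as below: one HMM per expression class, with a new sequence classified by log-likelihood. The per-frame feature vectors stand in for the paper's 1D transform features, and the use of scikit-learn and hmmlearn is an assumption for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from hmmlearn.hmm import GaussianHMM

def train_expression_hmms(sequences_by_class, n_pca=20, n_states=4):
    """sequences_by_class: {label: [ (T_i, d) per-frame feature arrays ]}"""
    all_frames = np.vstack([s for seqs in sequences_by_class.values() for s in seqs])
    pca = PCA(n_components=n_pca).fit(all_frames)

    models = {}
    for label, seqs in sequences_by_class.items():
        reduced = [pca.transform(s) for s in seqs]
        X = np.vstack(reduced)                # stacked frames of all sequences
        lengths = [len(r) for r in reduced]   # per-sequence frame counts
        models[label] = GaussianHMM(n_components=n_states).fit(X, lengths)
    return pca, models

def classify(sequence, pca, models):
    """Return the expression label whose HMM gives the highest log-likelihood."""
    feats = pca.transform(sequence)
    return max(models, key=lambda label: models[label].score(feats))
```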

Reconstruction of High-Resolution Facial Image Based on A Recursive Error Back-Projection

  • Park, Jeong-Seon;Lee, Seong-Whan
    • Proceedings of the Korean Information Science Society Conference / 2004.04b / pp.715-717 / 2004
  • This paper proposes a new method for reconstructing a high-resolution facial image from a low-resolution facial image based on recursive error back-projection with top-down machine learning. A face is represented by a linear combination of prototypes of shape and texture. Given the shape and texture information of the pixels in a low-resolution facial image, the optimal coefficients for the linear combinations of shape prototypes and of texture prototypes are estimated by solving a least-squares minimization. A high-resolution facial image is then obtained by applying those coefficients to the high-resolution prototypes. In addition, recursive error back-projection is applied to improve the accuracy of the synthesized high-resolution facial image. The encouraging results show that the method can improve face recognition performance when used to reconstruct high-resolution facial images from low-resolution ones captured at a distance.

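The recursive error back-projection mentioned in this and the earlier abstract can be sketched as a simple refinement loop: downsample the current high-resolution estimate, compare it with the observed low-resolution input, and project the residual back. The downsampling/upsampling operators and step size below are illustrative assumptions, not the paper's operators.

```python
import numpy as np

def error_back_projection(lr_obs, hr_init, downsample, upsample,
                          n_iter=10, step=1.0):
    """Iteratively refine a high-resolution estimate so that its
    downsampled version matches the observed low-resolution image.

    lr_obs:     observed low-resolution image (2D array)
    hr_init:    initial high-resolution reconstruction (2D array)
    downsample: callable mapping an HR array to an LR-sized array
    upsample:   callable mapping an LR-sized array to an HR-sized array
    """
    hr = hr_init.copy()
    for _ in range(n_iter):
        residual = lr_obs - downsample(hr)    # reconstruction error in LR space
        hr = hr + step * upsample(residual)   # back-project the error
    return hr

# toy usage with 2x block-averaging / nearest-neighbour operators
down = lambda x: x.reshape(x.shape[0] // 2, 2, x.shape[1] // 2, 2).mean(axis=(1, 3))
up = lambda x: np.kron(x, np.ones((2, 2)))
hr = error_back_projection(np.random.rand(16, 16), np.random.rand(32, 32), down, up)
```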

A 3D Face Reconstruction Based on the Symmetrical Characteristics of Side View 2D Face Images (측면 2차원 얼굴 영상들의 대칭성을 이용한 3차원 얼굴 복원)

  • Lee, Sung-Joo;Park, Kang-Ryoung;Kim, Jai-Hie
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.1 / pp.103-110 / 2011
  • A widely used 3D face reconstruction method, structure from motion (SfM), performs robustly when frontal, left, and right face images are all available. However, it cannot correctly reconstruct self-occluded facial parts when only side-view face images from one side are used, because only partial facial feature points are available in that case. To solve this problem, the proposed method exploits the bilateral symmetry of human faces as a constraint to generate the mirrored facial feature points, and uses both the input feature points and the generated ones to reconstruct a 3D face. For quantitative evaluation, ground-truth 3D faces were obtained from a 3D face scanner and compared with the reconstructed 3D faces. The experimental results show that the proposed 3D face reconstruction method, which uses both sets of facial feature points, outperforms the previous method based on only partial facial feature points.
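
The symmetry constraint amounts to mirroring the visible feature points across the face's symmetry plane to recover the occluded ones before reconstruction. The sketch below assumes the symmetry plane is x = 0 in a face-centred coordinate frame, which is a simplification of whatever alignment the paper uses.

```python
import numpy as np

def mirror_feature_points(points):
    """Generate the occluded half's feature points by reflecting the
    visible ones across the assumed symmetry plane x = 0.

    points: (K, 3) visible 3D feature points in a face-centred frame
    returns (2K, 3): original points plus their mirrored counterparts
    """
    mirrored = points.copy()
    mirrored[:, 0] *= -1.0          # reflect the x coordinate
    return np.vstack([points, mirrored])

# toy usage: five visible landmarks from one side of the face
visible = np.array([[0.8, 0.2, 0.1],
                    [0.6, 0.5, 0.3],
                    [0.4, -0.1, 0.2],
                    [0.9, 0.0, 0.0],
                    [0.5, 0.3, 0.4]])
all_points = mirror_feature_points(visible)   # (10, 3)
```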

A Study on the Face Image to Shape Differences and Make up (얼굴의 형태적 특성과 메이크업에 의한 얼굴 이미지 연구)

  • Song, Mi-Young;Park, Oak-Reon;Lee, Young-Ju
    • Korean Journal of Human Ecology / v.14 no.1 / pp.143-153 / 2005
  • The purpose of this research is to study face images according to differences in facial shape and make-up. A variety of face images can be produced by computer graphic simulation, combining different facial shapes and make-up styles. To examine the diverse images created by make-up styles, five forms of eyebrows, two types of eye shadow, and three lip shapes were applied to a model with a round face. The questionnaire used as the experimental stimulus contained 28 items, each a pair of bipolar adjectives rated on a 7-point scale. Data were analyzed using the Varimax orthogonal rotation method, Duncan's multiple range test, and three-way ANOVA. Comparing the results of applying make-up to various face types, we found that facial shape, eyebrows, eye shadow, and lip shape interact to influence the overall facial image. Factor analysis of make-up image perception yielded four factors: mildness, modernness, elegance, and sociableness. In terms of these factors, a round make-up style showed the highest level of mildness, while upward and straight styles showed the highest modernness. Elegance was highest when the eye shadow was round and the lip style was straight. Finally, an in-curved lip make-up style showed the highest sociableness.


A Study on Facial Wrinkle Detection using Active Appearance Models (AAM을 이용한 얼굴 주름 검출에 관한 연구)

  • Lee, Sang-Bum;Kim, Tae-Mook
    • Journal of Digital Convergence / v.12 no.7 / pp.239-245 / 2014
  • In this paper, a weighted wrinkle detection method is suggested, based on analysis of overall facial features such as the face contour, face size, eyes, and ears. First, the main facial elements are detected in the input image with the AAM method, which combines shape-based and appearance models; these models are learned from training faces and then matched to faces in new images. Second, the face is separated from the background, the four regions most likely to contain wrinkles are selected from the face, and high wrinkle weights are assigned to them. Finally, wrinkles are detected by applying the Canny edge algorithm to these weighted regions of interest. The suggested algorithm was tested on a variety of images, and the experiments show good face and wrinkle detection results in most of them.
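
The final detection stage, Canny edges restricted to the regions the AAM step flags as wrinkle-prone, can be sketched with OpenCV. The landmark-derived rectangles and thresholds below are placeholders, not values from the paper.

```python
import cv2
import numpy as np

def detect_wrinkles(gray_face, rois, low=50, high=150):
    """Run Canny edge detection only inside wrinkle-prone regions.

    gray_face: 8-bit grayscale face image
    rois:      list of (x, y, w, h) rectangles around forehead, eye corners,
               nasolabial folds, etc. (e.g. derived from fitted AAM landmarks)
    """
    wrinkle_map = np.zeros_like(gray_face)
    for x, y, w, h in rois:
        patch = gray_face[y:y + h, x:x + w]
        wrinkle_map[y:y + h, x:x + w] = cv2.Canny(patch, low, high)
    return wrinkle_map

# toy usage
face = (np.random.rand(200, 200) * 255).astype(np.uint8)
edges = detect_wrinkles(face, rois=[(40, 20, 120, 40), (50, 120, 100, 30)])
```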

Facial Recognition Algorithm Based on Edge Detection and Discrete Wavelet Transform

  • Chang, Min-Hyuk;Oh, Mi-Suk;Lim, Chun-Hwan;Ahmad, Muhammad-Bilal;Park, Jong-An
    • Transactions on Control, Automation and Systems Engineering / v.3 no.4 / pp.283-288 / 2001
  • In this paper, we propose a method for extracting the facial characteristics of a person in an image. Given a pair of gray-level sample images taken with and without the person present, the face is segmented from the image. Noise in the input images is removed with Gaussian filters, and edge maps of the two input images are computed. A binary edge differential image is obtained from the difference of the two edge maps, and a mask for face detection is produced by erosion followed by dilation on this differential image. The mask is used to extract the person from the two input image sequences, and facial features are extracted from the segmented image. An effective recognition system using the discrete wavelet transform (DWT) is used for recognition. To extract facial features such as the eyebrows, eyes, nose, and mouth, an edge detector is applied to the segmented face image; the eye area and the center of the face are found from the horizontal and vertical components of the edge map, and the other facial features are obtained from the edge information. Characteristic vectors are extracted from the DWT of the segmented face image, normalized between +1 and -1, and used as input vectors for a neural network. Simulation results show a recognition rate of 100% on the training set and about 92% on the test images.

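The segmentation mask described above (edge difference of the two scenes, then erosion and dilation) and the DWT feature step can be sketched with OpenCV and PyWavelets. Kernel sizes, Canny thresholds, and the wavelet choice are illustrative assumptions; the paper's neural-network classifier is not reproduced here.

```python
import cv2
import numpy as np
import pywt

def face_mask_from_edges(img_with, img_without, kernel_size=5):
    """Binary mask of the subject from edge maps of two 8-bit grayscale scenes
    (with and without the subject), cleaned by erosion then dilation."""
    edges_with = cv2.Canny(cv2.GaussianBlur(img_with, (5, 5), 0), 50, 150)
    edges_without = cv2.Canny(cv2.GaussianBlur(img_without, (5, 5), 0), 50, 150)
    diff = cv2.absdiff(edges_with, edges_without)     # binary edge differential image

    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    return cv2.dilate(cv2.erode(diff, kernel), kernel)

def dwt_feature_vector(face_img):
    """Characteristic vector from a single-level 2D DWT, normalized to [-1, 1]."""
    cA, _ = pywt.dwt2(face_img.astype(float), "haar")  # approximation coefficients
    v = cA.ravel()
    return 2 * (v - v.min()) / (v.max() - v.min() + 1e-9) - 1
```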

Face Region Detection Algorithm using Euclidean Distance of Color-Image (칼라 영상에서 유클리디안 거리를 이용한 얼굴영역 검출 알고리즘)

  • Jung, Haing-sup;Lee, Joo-shin
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.2 no.3 / pp.79-86 / 2009
  • This study proposes a method of detecting the facial area by calculating Euclidean distances among skin color elements and extracting the characteristics of the face. The proposed algorithm consists of light calibration and face detection. The light calibration step compensates for changes in illumination. The face detection step extracts skin-colored regions by calculating the Euclidean distance between the input image and characteristic vectors formed from the color and chroma of 20 skin color sample images. Within the extracted facial area candidate, the eyes are detected in the C channel of the CMY color model and the mouth in the Q channel of the YIQ color model, and the final facial area is determined using knowledge of typical face geometry. In an experiment with 40 color face images as input, the method showed a face detection rate of 100%.

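The skin-color stage can be sketched as a per-pixel Euclidean distance to a mean skin color learned from sample patches. The RGB color space and fixed threshold below are simplifications; the paper additionally uses chroma and the CMY/YIQ channels for the eyes and mouth.

```python
import numpy as np

def skin_mask(image_rgb, skin_samples, threshold=40.0):
    """Mark pixels whose Euclidean distance to the mean sample skin color
    is below a threshold.

    image_rgb:    (H, W, 3) uint8 or float image
    skin_samples: (N, 3) skin-color pixels collected from sample images
    """
    mean_skin = skin_samples.reshape(-1, 3).mean(axis=0)
    dist = np.linalg.norm(image_rgb.astype(float) - mean_skin, axis=2)
    return dist < threshold

# toy usage
img = (np.random.rand(120, 160, 3) * 255).astype(np.uint8)
samples = np.array([[200, 160, 140], [190, 150, 130], [210, 170, 150]], float)
mask = skin_mask(img, samples)
```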