• Title/Summary/Keyword: Facial Image Processing

Development of Facial Nerve Palsy Grading System with Image Processing (영상처리를 이용한 안면신경마비 평가시스템 개발)

  • Jang, Min;Shin, Sang-Hoon
    • The Journal of the Society of Korean Medicine Diagnostics, v.17 no.3, pp.233-240, 2013
  • Objectives: An objective and universal grading system for facial nerve palsy is needed for the objectification of treatment in Oriental medicine. In this study, a facial nerve palsy grading system was developed by combining image processing techniques with the Nottingham scale. Methods: The developed system is composed of a measurement part, an image processing part, a facial nerve palsy evaluation part, and a display part. From the video recorded by a webcam in the measurement part, the positions of facial markers are measured in the image processing part. In the evaluation part, Nottingham scores are calculated for four different facial expressions from the measured marker positions. The video of the facial movement, the time history of the marker positions, and the Nottingham score are shown in the display part. Results & Conclusion: The developed system was applied to a normal subject and an abnormal subject with facial nerve palsy. The left-right difference in Nottingham scores was larger in the abnormal subject than in the normal subject. In the normal case, the change in the distance between the supraorbital and infraorbital points was larger than the change in the distance between the lateral canthus and the angle of the mouth; the abnormal case showed the opposite result. The developed system demonstrates the potential of an objective and universal grading system for facial nerve palsy.
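As a rough illustration of the marker-based scoring step described in this abstract, the sketch below computes a Nottingham-style left/right symmetry score from tracked marker positions at rest and at maximal expression. The marker names, the dictionary interface, and the exact scoring formula are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a Nottingham-style score from tracked marker
# positions; marker names and scoring details are assumptions.
import numpy as np

def distance(p, q):
    """Euclidean distance between two 2-D marker positions."""
    return float(np.linalg.norm(np.asarray(p, dtype=float) - np.asarray(q, dtype=float)))

def side_movement(markers_rest, markers_expr, side):
    """Sum of the changes of the two Nottingham distances on one side:
    supraorbital-infraorbital and lateral canthus-angle of mouth."""
    d_eye_rest = distance(markers_rest[f"supraorbital_{side}"],
                          markers_rest[f"infraorbital_{side}"])
    d_eye_expr = distance(markers_expr[f"supraorbital_{side}"],
                          markers_expr[f"infraorbital_{side}"])
    d_mouth_rest = distance(markers_rest[f"canthus_{side}"],
                            markers_rest[f"mouth_angle_{side}"])
    d_mouth_expr = distance(markers_expr[f"canthus_{side}"],
                            markers_expr[f"mouth_angle_{side}"])
    return abs(d_eye_expr - d_eye_rest) + abs(d_mouth_expr - d_mouth_rest)

def nottingham_score(markers_rest, markers_expr, affected="left"):
    """Ratio of movement on the affected side to the healthy side, in percent."""
    healthy = "right" if affected == "left" else "left"
    moved_affected = side_movement(markers_rest, markers_expr, affected)
    moved_healthy = side_movement(markers_rest, markers_expr, healthy)
    return 100.0 * moved_affected / max(moved_healthy, 1e-9)
```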

Recognition of Human Facial Expression in a Video Image using the Active Appearance Model

  • Jo, Gyeong-Sic;Kim, Yong-Guk
    • Journal of Information Processing Systems, v.6 no.2, pp.261-268, 2010
  • Tracking human facial expression within a video image has many useful applications, such as surveillance and teleconferencing. The Active Appearance Model (AAM) was initially proposed for facial recognition; however, it turns out that the AAM has many advantages for continuous facial expression recognition. We have implemented a continuous facial expression recognition system using the AAM. In this study, we adopt an independent AAM fitted with the Inverse Compositional Image Alignment method. The system was evaluated using the standard Cohn-Kanade facial expression database, and the results show that it could have numerous potential applications.
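The Inverse Compositional Image Alignment step mentioned in this abstract can be illustrated, in heavily simplified form, for a pure translation warp; a full independent AAM adds shape and appearance bases on top of the same precompute-once update rule. The sketch below is a minimal NumPy/SciPy version under that assumption, not the paper's implementation.

```python
# Minimal inverse compositional Lucas-Kanade alignment for a translation
# warp; template and image are assumed to be same-sized grayscale arrays.
import numpy as np
from scipy.ndimage import shift as nd_shift

def inverse_compositional_align(image, template, n_iter=50, tol=1e-4):
    # Precompute template gradients; for a translation warp these are the
    # steepest-descent images, so the Hessian is constant across iterations.
    gy, gx = np.gradient(template.astype(float))
    sd = np.stack([gx.ravel(), gy.ravel()], axis=1)   # N x 2
    h_inv = np.linalg.inv(sd.T @ sd)                  # 2 x 2

    p = np.zeros(2)                                   # current (dx, dy)
    for _ in range(n_iter):
        # Sample the input image at template coordinates shifted by p.
        warped = nd_shift(image.astype(float), shift=(-p[1], -p[0]), order=1)
        error = (warped - template).ravel()
        delta_p = h_inv @ (sd.T @ error)
        # Inverse compositional update: compose with the inverted increment.
        p -= delta_p
        if np.linalg.norm(delta_p) < tol:
            break
    return p
```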

Personalized Facial Expression Recognition System using Fuzzy Neural Networks and robust Image Processing (퍼지 신경망과 강인한 영상 처리를 이용한 개인화 얼굴 표정 인식 시스템)

  • 김대진;김종성;변증남
    • Proceedings of the IEEK Conference, 2002.06c, pp.25-28, 2002
  • This paper introduces a personalized facial expression recognition system. Many previous works on facial expression recognition focus on the six formal universal facial expressions. However, it is very difficult for an ordinary person to produce such expressions without considerable effort and training. In addition, personalized services have recently become a major focus of researchers in various fields. Thus, we propose a novel facial expression recognition system based on fuzzy neural networks and robust image processing.
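The paper's fuzzy-neural-network architecture is not detailed in this abstract, so the sketch below only illustrates the general idea under assumed design choices: Gaussian membership functions fuzzify the input facial features, and a single linear layer with a softmax maps the membership degrees to personalized expression classes.

```python
# Conceptual fuzzy-neural-network classifier (not the authors' architecture);
# all layer sizes and parameter initializations are illustrative.
import numpy as np

rng = np.random.default_rng(0)

class FuzzyNeuralClassifier:
    def __init__(self, n_features, n_memberships, n_classes):
        # Gaussian membership centers/widths per feature (fuzzification layer).
        self.centers = rng.normal(size=(n_features, n_memberships))
        self.widths = np.full((n_features, n_memberships), 1.0)
        # Single linear layer on the fuzzified features.
        self.weights = rng.normal(scale=0.1,
                                  size=(n_features * n_memberships, n_classes))

    def fuzzify(self, x):
        # Membership degree of each feature in each fuzzy set.
        diff = x[:, None] - self.centers          # n_features x n_memberships
        mu = np.exp(-(diff ** 2) / (2 * self.widths ** 2))
        return mu.ravel()

    def predict(self, x):
        logits = self.fuzzify(np.asarray(x, dtype=float)) @ self.weights
        exp = np.exp(logits - logits.max())
        return exp / exp.sum()                    # class probabilities
```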

Emotion Detection Algorithm Using Frontal Face Image

  • Kim, Moon-Hwan;Joo, Young-Hoon;Park, Jin-Bae
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference, 2005.06a, pp.2373-2378, 2005
  • An emotion detection algorithm using a frontal facial image is presented in this paper. The algorithm is composed of three main stages: an image processing stage, a facial feature extraction stage, and an emotion detection stage. In the image processing stage, the face region and facial components are extracted using a fuzzy color filter, a virtual face model, and a histogram analysis method. The features for emotion detection are extracted from the facial components in the facial feature extraction stage. In the emotion detection stage, a fuzzy classifier is adopted to recognize emotion from the extracted features. Experimental results show that the proposed algorithm can detect emotion well.
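As a hedged sketch of the fuzzy color filter mentioned in the image processing stage, the snippet below assigns each pixel a Gaussian membership degree to a "skin" fuzzy set in YCrCb chrominance and thresholds it into a face-region mask. The membership centers and widths are illustrative assumptions, not the paper's values, and the virtual face model and histogram steps are omitted.

```python
# Fuzzy skin-color filtering on the Cr/Cb channels; constants are assumed.
import cv2
import numpy as np

CR_CENTER, CR_SIGMA = 150.0, 15.0   # assumed skin chrominance statistics
CB_CENTER, CB_SIGMA = 110.0, 15.0

def fuzzy_skin_mask(bgr_image, threshold=0.5):
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb).astype(float)
    cr, cb = ycrcb[:, :, 1], ycrcb[:, :, 2]
    mu_cr = np.exp(-((cr - CR_CENTER) ** 2) / (2 * CR_SIGMA ** 2))
    mu_cb = np.exp(-((cb - CB_CENTER) ** 2) / (2 * CB_SIGMA ** 2))
    membership = np.minimum(mu_cr, mu_cb)    # fuzzy AND (min t-norm)
    return (membership > threshold).astype(np.uint8) * 255

# Usage: mask = fuzzy_skin_mask(cv2.imread("face.jpg"))
```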

Detection of Facial Direction for Automatic Image Arrangement (이미지 자동배치를 위한 얼굴 방향성 검출)

  • 동지연;박지숙;이환용
    • Journal of Information Technology Applications and Management, v.10 no.4, pp.135-147, 2003
  • With the development of multimedia and optical technologies, application systems based on facial features have recently attracted increasing interest from researchers. Previous research efforts in face processing have mainly used frontal images to recognize the human face visually and to extract facial expressions. However, applications such as image database systems that support queries based on facial direction and image arrangement systems that automatically place facial images in digital albums must deal with the directional characteristics of a face. In this paper, we propose a method to detect facial direction using facial features. In the proposed method, the facial trapezoid is defined by detecting points for the eyes and the lower lip. Then, a facial direction formula, which calculates the degree of rightward or leftward facing, is defined from statistical data on the ratio of the right and left areas of the facial trapezoid. The proposed method estimates the horizontal rotation of a face within an error tolerance of ±1.31 degrees, with an average execution time of 3.16 seconds.
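The abstract does not give the exact trapezoid construction or the statistical constants of the direction formula, so the sketch below only illustrates one plausible reading of the left/right area cue: split the region spanned by the eye points and the lower-lip point down the middle and compare the two areas. The point layout and the interpretation of the ratio are assumptions.

```python
# Left/right area ratio as a head-rotation cue; an illustrative reading only.
import numpy as np

def polygon_area(points):
    """Shoelace formula for a simple polygon given as an (N, 2) sequence."""
    x, y = np.asarray(points, dtype=float).T
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def left_right_ratio(left_eye, right_eye, lower_lip):
    """Area ratio of the two triangles split at the midpoint between the eyes."""
    mid_top = ((left_eye[0] + right_eye[0]) / 2.0,
               (left_eye[1] + right_eye[1]) / 2.0)
    left_area = polygon_area([left_eye, mid_top, lower_lip])
    right_area = polygon_area([mid_top, right_eye, lower_lip])
    # A ratio far from 1.0 indicates horizontal head rotation.
    return left_area / max(right_area, 1e-9)
```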

Facial Image Synthesis Considering Illumination Variations on Mobile Devices (모바일 기기에서 조명 변화를 고려한 얼굴 영상 합성)

  • Kwon, Ji-In;Lee, Sang-Hoon;Choi, Soo-Mi
    • Journal of the HCI Society of Korea, v.6 no.1, pp.21-26, 2011
  • This paper presents a robust method for facial image synthesis under varying illumination by combining illumination correction and Poisson image processing techniques. The presented method automatically detects the skin area and corrects highly saturated regions that can degrade the final synthesized image. The developed method can be applied to various facial synthesis applications by correcting the illumination variations that frequently occur in photos taken with a camera phone.
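OpenCV exposes Poisson image editing directly as seamless cloning, so a minimal sketch of the pipeline described here can pair a simple highlight-correction step with cv2.seamlessClone. The correction below (inpainting nearly saturated pixels) is an illustrative stand-in for the paper's illumination-correction method rather than a reproduction of it; inputs are assumed to be 8-bit BGR images.

```python
# Highlight correction followed by Poisson (seamless) cloning.
import cv2
import numpy as np

def correct_highlights(bgr, sat_thresh=245):
    """Attenuate nearly saturated regions that would spoil the blend."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    overexposed = (gray >= sat_thresh).astype(np.uint8) * 255
    # Fill blown-out highlights from their neighborhood.
    return cv2.inpaint(bgr, overexposed, 3, cv2.INPAINT_TELEA)

def blend_face(src_face, dst_image, face_mask, center):
    """Poisson-blend the corrected source face into the destination image.
    face_mask: 8-bit mask of the face region; center: (x, y) in dst_image."""
    corrected = correct_highlights(src_face)
    return cv2.seamlessClone(corrected, dst_image, face_mask, center,
                             cv2.NORMAL_CLONE)
```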

Development of Facial Palsy Grading System with Three Dimensional Image Processing (3차원 영상처리를 이용한 안면마비 평가시스템 개발)

  • Jang, M.;Shin, S.H.
    • Journal of Rehabilitation Welfare Engineering & Assistive Technology, v.9 no.2, pp.129-135, 2015
  • An objective grading system for facial palsy is needed. In this study, a facial palsy grading system was developed by combining three-dimensional image processing with the Nottingham scale. The developed system is composed of four parts: a measurement part, an image processing part, a computational part, and a facial palsy evaluation and display part. Two webcams were used to acquire images. Eight markers on the face were recognized in the image processing part. The absolute three-dimensional positions of the markers were calculated in the computational part. Finally, the Nottingham score was calculated and displayed in the evaluation and display part. The effects of the measurement method and the subject's position on the Nottingham score were tested: the markers were measured in two and in three dimensions, and the subject faced the camera at 0° and at 11° of rotation. The change in the score was large for the 11° rotation with the two-dimensional measurement, so the developed system with three-dimensional measurement is robust to changes in the subject's orientation and reduces the grading error originating from subject posture.
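The absolute three-dimensional marker positions described here can be recovered with standard stereo triangulation once the two webcams are calibrated; the sketch below shows that step with OpenCV, assuming the 3x4 projection matrices P1 and P2 come from a prior stereo calibration (not shown). The 3-D distances between markers can then feed the same Nottingham-style scoring as in the 2-D system.

```python
# Triangulate the same set of face markers seen by two calibrated cameras.
import cv2
import numpy as np

def triangulate_markers(P1, P2, pts_cam1, pts_cam2):
    """pts_cam1/pts_cam2: (N, 2) pixel coordinates of the markers in each
    camera, in the same order. Returns (N, 3) Euclidean positions."""
    pts1 = np.ascontiguousarray(np.asarray(pts_cam1, dtype=np.float64).T)  # 2 x N
    pts2 = np.ascontiguousarray(np.asarray(pts_cam2, dtype=np.float64).T)
    homog = cv2.triangulatePoints(P1, P2, pts1, pts2)   # 4 x N homogeneous
    return (homog[:3] / homog[3]).T                     # N x 3
```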

Facial Expression Recognition Method Based on Residual Masking Reconstruction Network

  • Jianing Shen;Hongmei Li
    • Journal of Information Processing Systems, v.19 no.3, pp.323-333, 2023
  • Facial expression recognition can aid the development of fatigue driving detection, teaching quality evaluation, and other fields. In this study, a facial expression recognition method was proposed with a residual masking reconstruction network as its backbone to achieve more efficient expression recognition and classification. The residual layers acquire and capture the information features of the input image, and the masking layers assign weight coefficients to the different information features, achieving accurate and effective analysis of images of different sizes. To further improve the performance of expression analysis, the loss function of the model is optimized from two aspects, the feature dimension and the data dimension, to enhance the mapping between facial features and emotion labels. The simulation results show that the ROC of the proposed method was maintained above 0.9995, meaning that different expressions can be accurately distinguished, and the precision was 75.98%, indicating excellent performance of the facial expression recognition model.
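The exact residual masking reconstruction architecture is not specified in this abstract, so the PyTorch block below is only a conceptual sketch of the core idea it describes: a residual branch extracts features and a sigmoid masking branch produces per-position weight coefficients that reweight them before the skip connection. Channel counts and layer choices are illustrative.

```python
# Conceptual residual block with a masking (reweighting) branch.
import torch
import torch.nn as nn

class ResidualMaskingBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.residual = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )
        # Masking branch: per-position weight coefficients in [0, 1].
        self.mask = nn.Sequential(
            nn.Conv2d(channels, channels, 1),
            nn.Sigmoid(),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        feat = self.residual(x)
        weighted = feat * self.mask(feat)   # reweight residual features
        return self.relu(x + weighted)      # residual connection

# Usage: out = ResidualMaskingBlock(64)(torch.randn(1, 64, 56, 56))
```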

A Facial Feature Area Extraction Method for Improving Face Recognition Rate in Camera Image (일반 카메라 영상에서의 얼굴 인식률 향상을 위한 얼굴 특징 영역 추출 방법)

  • Kim, Seong-Hoon;Han, Gi-Tae
    • KIPS Transactions on Software and Data Engineering, v.5 no.5, pp.251-260, 2016
  • Face recognition is a technology that extracts features from a facial image, learns those features through various algorithms, and recognizes a person by comparing the learned data with the features of a new facial image. In particular, various processing methods are required to improve the face recognition rate. In the training stage, features must be extracted from a facial image, and linear discriminant analysis (LDA) is the method most commonly used for this. LDA represents a facial image as a point in a high-dimensional space determined by its pixel values and extracts the facial features that distinguish a person by analyzing the class information and the distribution of the points. Consequently, if unnecessary or frequently changing areas are included in a facial image, LDA may extract incorrect facial features. In particular, if a camera image is used for face recognition, the size of the face varies with the distance between the face and the camera, degrading the face recognition rate. To solve this problem, this paper detects the facial area in a camera image, removes unnecessary areas using the facial feature area calculated with a Gabor filter, and normalizes the size of the facial area. Facial features are then extracted through LDA from the normalized facial image and learned by an artificial neural network for face recognition. As a result, the face recognition rate was improved by approximately 13% compared with the existing face recognition method that includes unnecessary areas.
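A hedged sketch of the pipeline summarized above: crop the facial image to the most Gabor-responsive region, normalize its size, project the result with LDA, and train a small neural-network classifier. The Gabor parameters, the crop heuristic, and the network size are illustrative assumptions rather than the paper's settings; scikit-learn stands in for the unspecified neural network.

```python
# Gabor-based feature-area crop, LDA projection, and neural-net training.
import cv2
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier

def gabor_feature_region(gray, size=(64, 64)):
    """Crop to the most Gabor-responsive region and normalize its size."""
    kernel = cv2.getGaborKernel((21, 21), sigma=4.0, theta=0.0,
                                lambd=10.0, gamma=0.5, psi=0.0)
    response = cv2.filter2D(gray, cv2.CV_32F, kernel)
    ys, xs = np.where(np.abs(response) > np.abs(response).mean())
    crop = gray[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    return cv2.resize(crop, size).ravel()

def train_recognizer(gray_faces, labels):
    """LDA projection followed by a small neural-network classifier."""
    features = np.stack([gabor_feature_region(f) for f in gray_faces])
    lda = LinearDiscriminantAnalysis()
    projected = lda.fit_transform(features, labels)
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
    clf.fit(projected, labels)
    return lda, clf
```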

Emotion Recognition Using Eigenspace

  • Lee, Sang-Yun;Oh, Jae-Heung;Chung, Geun-Ho;Joo, Young-Hoon;Sim, Kwee-Bo
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference, 2002.10a, pp.111.1-111, 2002
  • The system configuration consists of three parts. First is the image acquisition part. Second is the part that creates the vector image and processes the obtained facial image; this part finds the facial area from the skin color by first finding the skin-color area with the highest weight from the eigenface, which consists of eigenvectors, and then creating the eigenface vector image from the obtained facial area. Third is the recognition module.
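As a minimal sketch of the eigenspace idea outlined in this configuration, the snippet below builds an eigenface basis with PCA over size-normalized face crops and matches a new face by nearest neighbour in that space. The component count and the matching rule are illustrative assumptions; the skin-color weighting step described above is omitted.

```python
# Eigenface construction and nearest-neighbour matching in eigenspace.
import numpy as np
from sklearn.decomposition import PCA

def build_eigenspace(face_images, n_components=20):
    """face_images: list of equally sized grayscale arrays."""
    data = np.stack([img.ravel().astype(float) for img in face_images])
    pca = PCA(n_components=n_components)
    coords = pca.fit_transform(data)          # training faces in eigenspace
    return pca, coords

def nearest_face(pca, coords, labels, new_face):
    """Project a new face and return the label of its nearest neighbour."""
    query = pca.transform(new_face.ravel().astype(float)[None, :])
    distances = np.linalg.norm(coords - query, axis=1)
    return labels[int(np.argmin(distances))]
```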
