• Title/Summary/Keyword: facial image


Motion Pattern Detection for Dynamic Facial Expression Understanding

  • Mizoguchi, Hiroshi;Hiramatsu, Seiyo;Hiraoka, Kazuyuki;Tanaka, Masaru;Shigehara, Takaomi;Mishima, Taketoshi
    • Proceedings of the IEEK Conference / 2002.07c / pp.1760-1763 / 2002
  • In this paper the authors present their attempt to realize a motion pattern detector that finds a specified sequence of images within an input motion image. The detector is intended for understanding time-varying facial expressions. Facial expression understanding by machine is crucial and enriches the quality of human-machine interaction. Among the various facial expressions there are some, such as blinking, that cannot be recognized from a static image: a still image of blinking cannot be distinguished from sleeping. The authors discuss the implementation of their motion pattern detector and describe experiments using it. The experimental results confirm the feasibility of the idea behind the implemented detector.
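
The abstract does not give implementation details, so the following is a minimal sketch of one way such a sequence detector could work, assuming grayscale numpy frames and a sum-of-squared-differences similarity; the function names and the threshold value are illustrative, not taken from the paper.

```python
import numpy as np

def frame_distance(a, b):
    """Mean squared difference between two grayscale frames of equal size."""
    return float(np.mean((a.astype(np.float32) - b.astype(np.float32)) ** 2))

def detect_motion_pattern(frames, template, threshold=500.0):
    """Slide the template clip over the input clip and report start indices
    whose average per-frame distance falls below the threshold (illustrative value)."""
    hits = []
    n, m = len(frames), len(template)
    for start in range(n - m + 1):
        d = np.mean([frame_distance(frames[start + k], template[k]) for k in range(m)])
        if d < threshold:
            hits.append((start, d))
    return hits
```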


Detection of Pupil using Template Matching Based on Genetic Algorithm in Facial Images (얼굴 영상에서 유전자 알고리즘 기반 형판정합을 이용한 눈동자 검출)

  • Lee, Chan-Hee;Jang, Kyung-Shik
    • Journal of the Korea Institute of Information and Communication Engineering / v.13 no.7 / pp.1429-1436 / 2009
  • In this paper, we propose a robust eye detection method using template matching based on a genetic algorithm in a single facial image. Previous approaches to pupil detection with genetic algorithms had the problem that detection accuracy depends strongly on the randomly generated initial population, so their results are not consistent. To overcome this, we extract local minima in the facial image and generate the initial population from those that have high fitness with a template. Each chromosome encodes geometric information for the template image, and the eye position is detected by template matching. Experimental results verify that the proposed eye detection method improves precision and achieves high accuracy in a single facial image.
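
The following is a rough sketch of the idea described above (seeding a genetic algorithm's population at dark local minima and evolving template positions); the fitness function, mutation scheme, and all parameter values are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def fitness(image, template, x, y):
    """Negative mean squared error between the template and the image patch at (x, y)."""
    h, w = template.shape
    patch = image[y:y + h, x:x + w]
    if patch.shape != template.shape:
        return -np.inf
    return -float(np.mean((patch.astype(np.float32) - template.astype(np.float32)) ** 2))

def local_minima_seeds(image, n_seeds=20, block=8):
    """Pick the darkest pixel in each block as a candidate pupil location (local intensity minima)."""
    h, w = image.shape
    seeds = []
    for by in range(0, h - block, block):
        for bx in range(0, w - block, block):
            blk = image[by:by + block, bx:bx + block]
            dy, dx = np.unravel_index(np.argmin(blk), blk.shape)
            seeds.append((bx + dx, by + dy, blk[dy, dx]))
    seeds.sort(key=lambda s: s[2])          # darkest first
    return [(x, y) for x, y, _ in seeds[:n_seeds]]

def ga_match(image, template, generations=50, pop_size=20, sigma=4, rng=None):
    """Simple GA: seed the population at dark local minima, then mutate and select on template fitness."""
    rng = rng or np.random.default_rng(0)
    pop = local_minima_seeds(image, n_seeds=pop_size)
    for _ in range(generations):
        scored = sorted(pop, key=lambda p: fitness(image, template, *p), reverse=True)
        elite = scored[:pop_size // 2]
        children = [(int(x + rng.normal(0, sigma)), int(y + rng.normal(0, sigma)))
                    for x, y in elite]
        pop = elite + children
    return max(pop, key=lambda p: fitness(image, template, *p))
```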

Feature Extraction Based on GRFs for Facial Expression Recognition

  • Yoon, Myoong-Young
    • Journal of Korea Society of Industrial Information Systems / v.7 no.3 / pp.23-31 / 2002
  • In this paper we propose a new feature vector for facial expression recognition based on Gibbs distributions, which are well suited for representing spatial continuity. The extracted feature vectors are invariant under translation, rotation, and scaling of a facial expression image. The recognition algorithm contains two parts: feature vector extraction and the recognition process. The feature vectors comprise modified 2-D conditional moments based on a Gibbs distribution estimated for the facial image. In the recognition phase, we use a discrete left-right HMM, which is widely used in pattern recognition. To evaluate the performance of the proposed scheme, experiments on recognizing four universal expressions (anger, fear, happiness, surprise) were conducted with facial image sequences on a workstation. The results show that the proposed scheme achieves a high recognition rate of over 95%.
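
As an illustration of the recognition stage only, the sketch below classifies a quantised observation sequence with discrete left-right HMMs using the standard forward algorithm; the Gibbs-based feature extraction is not reproduced here, and all model parameters shown are hypothetical placeholders.

```python
import numpy as np

def log_forward(obs, log_pi, log_A, log_B):
    """Log-likelihood of a discrete observation sequence under an HMM (forward algorithm)."""
    alpha = log_pi + log_B[:, obs[0]]
    for o in obs[1:]:
        alpha = np.logaddexp.reduce(alpha[:, None] + log_A, axis=0) + log_B[:, o]
    return np.logaddexp.reduce(alpha)

def classify(obs, models):
    """Pick the expression whose left-right HMM gives the highest likelihood."""
    return max(models, key=lambda name: log_forward(obs, *models[name]))

# Hypothetical 3-state left-right HMM for one expression, with 8 observation symbols
# (quantised feature vectors); transitions may only stay in place or move right.
pi = np.array([1.0, 0.0, 0.0])
A = np.array([[0.6, 0.4, 0.0],
              [0.0, 0.7, 0.3],
              [0.0, 0.0, 1.0]])
B = np.full((3, 8), 1.0 / 8)               # uniform emissions as a placeholder
models = {"happiness": (np.log(pi + 1e-12), np.log(A + 1e-12), np.log(B))}
print(classify([0, 3, 5, 7], models))
```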


The Facial Area Extraction Using Multi-Channel Skin Color Model and The Facial Recognition Using Efficient Feature Vectors (Multi-Channel 피부색 모델을 이용한 얼굴영역추출과 효율적인 특징벡터를 이용한 얼굴 인식)

  • Choi Gwang-Mi;Kim Hyeong-Gyun
    • Journal of the Korea Institute of Information and Communication Engineering / v.9 no.7 / pp.1513-1517 / 2005
  • In this paper, we extract the facial area using a multi-channel skin color model based on Hue, Cb, and Cg, built from the red, green, and blue channels with the brightness component removed, which models facial skin color more effectively by taking the characteristics of skin color into account. We then use the efficient HOLA (higher-order local autocorrelation function) with 26 feature vectors to obtain feature vectors from both the segmented facial area and an edge image extracted with a Haar wavelet. The computed feature vectors are used as data for facial recognition through neural network training. Simulation results demonstrate that the proposed algorithm improves both recognition rate and speed.
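
A minimal sketch of a Hue/Cb/Cg skin mask of the kind described above, using OpenCV; the threshold ranges, the green-chroma (Cg) formula, and the input filename are illustrative assumptions, since the abstract does not give the paper's actual values.

```python
import cv2
import numpy as np

# Illustrative threshold ranges; the paper's actual values are not given in the abstract.
HUE_RANGE = (0, 25)        # OpenCV hue scale is 0-179
CB_RANGE  = (77, 127)
CG_RANGE  = (110, 135)

def skin_mask(bgr):
    """Binary skin mask from Hue, Cb and Cg channels (brightness channels discarded)."""
    hsv   = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    hue, cb = hsv[:, :, 0], ycrcb[:, :, 2]
    b, g, r = [bgr[:, :, i].astype(np.float32) for i in range(3)]
    # One common green-chroma definition, used here as an assumption.
    cg = (128.0 + 0.5 * g - 0.3316 * r - 0.1684 * b).clip(0, 255).astype(np.uint8)
    mask = ((hue >= HUE_RANGE[0]) & (hue <= HUE_RANGE[1]) &
            (cb  >= CB_RANGE[0])  & (cb  <= CB_RANGE[1]) &
            (cg  >= CG_RANGE[0])  & (cg  <= CG_RANGE[1]))
    return mask.astype(np.uint8) * 255

img = cv2.imread("face.jpg")                       # placeholder filename
face_only = cv2.bitwise_and(img, img, mask=skin_mask(img))
```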

A facial expressions recognition algorithm using image area segmentation and face element (영역 분할과 판단 요소를 이용한 표정 인식 알고리즘)

  • Lee, Gye-Jeong;Jeong, Ji-Yong;Hwang, Bo-Hyun;Choi, Myung-Ryul
    • Journal of Digital Convergence / v.12 no.12 / pp.243-248 / 2014
  • In this paper, we propose a method to recognize facial expressions by selecting face elements and determining their status. The face elements are selected using an image area segmentation method, and the facial expression is decided using the normal distribution of the change rates of the face elements. To recognize expressions reliably, we built a database of facial expressions from 90 people and propose a method that decides among four expressions (happiness, anger, stress, and sadness). The proposed method has been simulated and verified in terms of face element detection rate and facial expression recognition rate.
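
A simplified sketch of the decision step described above: each expression is modelled by a normal distribution over the change rates of a few face elements, and the most likely class is chosen. The element choices and all Gaussian parameters below are hypothetical, not values from the paper.

```python
import numpy as np

# Hypothetical per-class Gaussian parameters for the change rates of
# (eye height, eyebrow position, mouth width, mouth height), relative to a neutral frame.
CLASS_PARAMS = {
    "happiness": (np.array([0.05, 0.02, 0.30, 0.10]), np.array([0.05, 0.05, 0.10, 0.08])),
    "anger":     (np.array([-0.10, -0.15, -0.05, 0.00]), np.array([0.06, 0.07, 0.08, 0.05])),
    "stress":    (np.array([-0.05, -0.05, -0.10, -0.05]), np.array([0.05, 0.05, 0.07, 0.05])),
    "sadness":   (np.array([-0.15, -0.05, -0.10, -0.08]), np.array([0.06, 0.05, 0.07, 0.06])),
}

def log_likelihood(x, mean, std):
    """Independent-Gaussian log-likelihood of a change-rate vector."""
    return float(np.sum(-0.5 * ((x - mean) / std) ** 2 - np.log(std * np.sqrt(2 * np.pi))))

def classify_expression(change_rates):
    """Return the expression whose normal model best explains the observed change rates."""
    return max(CLASS_PARAMS, key=lambda c: log_likelihood(change_rates, *CLASS_PARAMS[c]))

print(classify_expression(np.array([0.04, 0.01, 0.28, 0.12])))   # -> "happiness"
```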

Multiscale Adaptive Local Directional Texture Pattern for Facial Expression Recognition

  • Zhang, Zhengyan;Yan, Jingjie;Lu, Guanming;Li, Haibo;Sun, Ning;Ge, Qi
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.9 / pp.4549-4566 / 2017
  • This work presents a novel facial descriptor, named the multiscale adaptive local directional texture pattern (MALDTP), and employs it for expression recognition. We apply an adaptive threshold to encode the facial image at different scales and concatenate a series of MALDTP histograms computed over Gabor-filtered images to generate the facial descriptor. In addition, dedicated experiments were conducted to evaluate the performance of the MALDTP method in a person-independent setting. The experimental results demonstrate that our proposed method achieves a higher recognition rate than the local directional texture pattern (LDTP). Moreover, the MALDTP method has lower computational complexity, requires less storage, and yields higher classification accuracy than the local Gabor binary pattern histogram sequence (LGBPHS) method. In a nutshell, the proposed MALDTP method not only avoids choosing the threshold by experience but also captures much more structural and contrast information of the facial image than LDTP.
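
The exact MALDTP encoding is not given in the abstract; the sketch below shows a much-simplified stand-in that uses Kirsch compass masks with an adaptive (image-dependent) threshold and concatenates histograms across scales. It is meant only to illustrate the general idea, not the authors' descriptor.

```python
import cv2
import numpy as np

# Eight Kirsch compass masks commonly used by LDP/LDTP-style descriptors.
KIRSCH = [np.array(m, dtype=np.float32) for m in (
    [[-3, -3, 5], [-3, 0, 5], [-3, -3, 5]],
    [[-3, 5, 5], [-3, 0, 5], [-3, -3, -3]],
    [[5, 5, 5], [-3, 0, -3], [-3, -3, -3]],
    [[5, 5, -3], [5, 0, -3], [-3, -3, -3]],
    [[5, -3, -3], [5, 0, -3], [5, -3, -3]],
    [[-3, -3, -3], [5, 0, -3], [5, 5, -3]],
    [[-3, -3, -3], [-3, 0, -3], [5, 5, 5]],
    [[-3, -3, -3], [-3, 0, 5], [-3, 5, 5]],
)]

def directional_code(gray):
    """Index of the strongest Kirsch response, kept only where it exceeds an
    adaptive threshold (the mean absolute response); otherwise a 'flat' code 8."""
    responses = np.stack([np.abs(cv2.filter2D(gray.astype(np.float32), -1, k)) for k in KIRSCH])
    best = responses.argmax(axis=0)
    threshold = responses.mean()                     # adaptive, image-dependent threshold
    return np.where(responses.max(axis=0) > threshold, best, 8).astype(np.uint8)

def maldtp_like_descriptor(gray, scales=(1.0, 0.5, 0.25), bins=9):
    """Concatenate directional-code histograms computed at several image scales."""
    hists = []
    for s in scales:
        resized = cv2.resize(gray, None, fx=s, fy=s)
        hist = np.bincount(directional_code(resized).ravel(), minlength=bins).astype(np.float32)
        hists.append(hist / hist.sum())
    return np.concatenate(hists)
```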

The Accuracy of Recognizing Emotion From Korean Standard Facial Expression (한국인 표준 얼굴 표정 이미지의 감성 인식 정확률)

  • Lee, Woo-Ri;Whang, Min-Cheol
    • The Journal of the Korea Contents Association / v.14 no.9 / pp.476-483 / 2014
  • The purpose of this study was to create suitable images of Korean emotional expressions. KSFI (Korean Standard Facial Image)-AUs were produced from the Korean standard appearance and FACS (Facial Action Coding System) AUs. To establish the objectivity of the KSFI, a survey examined the emotion recognition rate and the contribution of individual facial elements to emotion recognition for the six basic emotional expression images (sadness, happiness, disgust, fear, anger, and surprise). The experiment showed that the images of happiness, surprise, sadness, and anger had higher recognition accuracy, and that the emotion recognition rate was mainly determined by the eyes and mouth. Based on these results, KSFI content that can be combined with AU images is proposed. In the future, KSFI is expected to be helpful content for improving emotion recognition rates.

Design of Facial Image Data Collection System for Heart Rate Measurement (심박수 측정을 위한 안면 얼굴 영상 데이터 수집 시스템 설계)

  • Jang, Seung-Ju
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.7 / pp.971-976 / 2021
  • In this paper, we design a facial image data collection system for heart rate measurement using a web camera. The system collects the user's facial image information with the web camera and measures the heart rate from that information. Because non-contact heart rate measurement with a web camera can produce errors, the collected data are classified into error and normal cases and used to correct errors in the heart rate program; the error-case data can be used to reduce the measurement error. Experiments were conducted on the ideas proposed and designed in this paper, and the results confirmed that the system operates normally.
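
As a rough illustration of non-contact heart rate estimation from webcam facial images (the general technique such a collection system supports), the sketch below averages the green channel of a face-sized region over time and takes the dominant frequency in the 42-180 bpm band; the fixed central ROI and all parameters are assumptions, not the paper's design.

```python
import cv2
import numpy as np

def estimate_bpm(green_means, fps):
    """Dominant frequency of the detrended green-channel signal within 0.7-3.0 Hz (42-180 bpm)."""
    signal = np.asarray(green_means, dtype=np.float64)
    signal -= signal.mean()
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    power = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= 0.7) & (freqs <= 3.0)
    return 60.0 * freqs[band][np.argmax(power[band])]

def collect_green_signal(seconds=15, fps=30):
    """Grab webcam frames and store the mean green value of a central, face-sized ROI."""
    cap = cv2.VideoCapture(0)
    values = []
    for _ in range(int(seconds * fps)):
        ok, frame = cap.read()
        if not ok:
            break
        h, w = frame.shape[:2]
        roi = frame[h // 4: 3 * h // 4, w // 3: 2 * w // 3]   # rough central region as a stand-in for face detection
        values.append(roi[:, :, 1].mean())                    # green channel (BGR index 1)
    cap.release()
    return values

print(estimate_bpm(collect_green_signal(), fps=30), "bpm (approx.)")
```

The paper itself focuses on collecting and labelling the facial image data (error versus normal cases) rather than on the estimation algorithm, so this is background illustration only.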

Comparison of 64 Channel 3 Dimensional Volume CT with Conventional 3D CT in the Diagnosis and Treatment of Facial Bone Fractures (얼굴뼈 골절의 진단과 치료에 64채널 3D VCT와 Conventional 3D CT의 비교)

  • Jung, Jong Myung;Kim, Jong Whan;Hong, In Pyo;Choi, Chi Hoon
    • Archives of Plastic Surgery / v.34 no.5 / pp.605-610 / 2007
  • Purpose: Facial trauma is increasing along with the growing popularity of sports and increasing exposure to crime and traffic accidents. Compared to the 3D CT of the 1990s, the latest CT has improved significantly, resulting in higher diagnostic accuracy. The objective of this study is to compare 64-channel 3-dimensional volume CT (3D VCT) with conventional 3D CT in the diagnosis and treatment of facial bone fractures. Methods: Forty-five patients with facial trauma were examined by 3D VCT from Jan. 2006 to Feb. 2007. The 64-channel 3D VCT, which consists of 64 detectors, produces 0.625 mm axial slice images and scans 175 mm per second. These images are transformed into 3-dimensional images using the Rapidia 2.8 software; the axial images are reconstructed into a 3-dimensional image by volume rendering and into coronal or sagittal images by multiplanar reformatting. Results: In contrast to previous 3D CT, which formulates 3D images from 1-2 mm axial slices, 64-channel 3D VCT takes 0.625 mm thin axial images and obtains complete images without a definite step-ladder appearance. 64-channel 3D VCT is effective in diagnosing thin linear bone fractures and the depth and degree of fracture displacement. Conclusion: In both cost and speed, 3D VCT is superior to conventional 3D CT. Owing to its ability to reconstruct complete images in any direction, with twice the resolution and four times the speed of previous 3D CT, 3D VCT allows accurate evaluation of the exact site and displacement of fine fractures.

Development of an intelligent camera for multiple body temperature detection (다중 체온 감지용 지능형 카메라 개발)

  • Lee, Su-In;Kim, Yun-Su;Seok, Jong-Won
    • Journal of IKEEE / v.26 no.3 / pp.430-436 / 2022
  • In this paper, we propose an intelligent camera for multiple body temperature detection. The proposed camera combines an optical sensor (4056*3040) and a thermal sensor (640*480) and detects abnormal symptoms by analyzing a person's facial expression and body temperature from the acquired images. The optical and thermal cameras operate simultaneously: an object is detected in the optical image, and the facial region and expression analysis are computed from that object. The coordinate values of the facial region in the optical image are then applied to the thermal image, where the maximum temperature of the region is measured and displayed on the screen. Abnormal symptom detection is determined using the three analyzed facial expressions (neutral, happy, sad) and the body temperature values. To evaluate the performance of the proposed camera, the optical image processing part is tested on the Caltech, WIDER FACE, and CK+ datasets for three algorithms (object detection, facial region detection, and expression analysis), achieving accuracy scores of 91%, 91%, and 84%, respectively.
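
A minimal sketch of the coordinate-mapping step described above: a face box found in the optical image is rescaled to thermal coordinates, and the maximum temperature in that region is read out. The assumption that the two cameras are aligned and share a field of view, and all values below, are illustrative.

```python
import numpy as np

OPTICAL_SIZE = (4056, 3040)   # (width, height) of the optical sensor, from the abstract
THERMAL_SIZE = (640, 480)     # (width, height) of the thermal sensor

def map_box_to_thermal(box):
    """Scale a face bounding box (x, y, w, h) from optical to thermal coordinates,
    assuming the two cameras are aligned and share a field of view (an assumption)."""
    sx = THERMAL_SIZE[0] / OPTICAL_SIZE[0]
    sy = THERMAL_SIZE[1] / OPTICAL_SIZE[1]
    x, y, w, h = box
    return (int(x * sx), int(y * sy), max(1, int(w * sx)), max(1, int(h * sy)))

def max_temperature(thermal_celsius, box):
    """Maximum temperature inside the mapped face region of the thermal frame."""
    x, y, w, h = map_box_to_thermal(box)
    return float(thermal_celsius[y:y + h, x:x + w].max())

thermal = 36.5 + 0.5 * np.random.rand(480, 640)   # placeholder thermal frame in degrees Celsius
print(max_temperature(thermal, (1800, 1200, 500, 600)))
```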