• Title/Summary/Keyword: Facial Image Processing

Search Results: 158

An Intelligent Emotion Recognition Model Using Facial and Bodily Expressions

  • Jae Kyeong Kim;Won Kuk Park;Il Young Choi
    • Asia Pacific Journal of Information Systems / v.27 no.1 / pp.38-53 / 2017
  • As sensor and image processing technologies make it easy to collect information on users' behavior, many researchers have examined automatic emotion recognition based on facial expressions, body expressions, and tone of voice, among other cues. In the multimodal case combining facial and body expressions, most studies have relied on ordinary cameras, which generally produce only two-dimensional images and therefore capture limited information. In the present research, we propose an artificial neural network-based model that uses a high-definition webcam and a Kinect to recognize users' emotions from facial and bodily expressions while they watch a movie trailer. We validate the proposed model in a naturally occurring field environment rather than in an artificially controlled laboratory. The results of this research should support the wide use of emotion recognition models in advertisements, exhibitions, and interactive shows.
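
As an illustration of the early-fusion architecture this abstract implies, here is a minimal sketch that concatenates webcam facial features with Kinect body features and trains a small neural network; the feature dimensions, hidden-layer size, and four emotion classes are illustrative assumptions, not the authors' configuration.

```python
# Hypothetical early fusion of facial (webcam) and bodily (Kinect) feature
# vectors into one ANN classifier; all dimensions and labels are assumed.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_samples = 200
facial = rng.normal(size=(n_samples, 64))  # stand-in facial descriptors
body = rng.normal(size=(n_samples, 30))    # stand-in Kinect joint features
X = np.hstack([facial, body])              # early fusion: concatenate modalities
y = rng.integers(0, 4, size=n_samples)     # 4 placeholder emotion classes

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```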

Improvement of Nottingham Grading System for Facial Asymmetry Evaluation (안면비대칭 평가를 위한 Nottingham Grading System의 문제점 개선)

  • Lee, Min-Woo;Jang, Min;Kim, Jina;Shin, Sang-Hoon
    • Journal of Rehabilitation Welfare Engineering & Assistive Technology / v.11 no.2 / pp.179-186 / 2017
  • Because facial asymmetry has various causes, cause analysis is important and a quantitative index is needed for evaluation. In this study, we applied the Nottingham Grading System, a quantitative index originally used to evaluate facial paralysis, to the evaluation of facial asymmetry: markers are tracked by image processing of webcam images, and the distances between the markers are calculated. The existing Nottingham Grading System compares the left and right sides by summing the distance changes between facial feature points as the facial expression changes, which causes measurement errors in specific cases. We compared a facial asymmetry case and a normal (symmetric) subject using both the existing system and an improved version. Under the existing system, the asymmetry case and the symmetric case scored 99.0% and 95.0%, respectively, both within the normal range; under the improved system, the asymmetry case scored 74.0% and the symmetric case 93.2%. The experimental results show that the improved Nottingham Grading System permits a detailed evaluation of each facial site and resolves the original system's problem with specific cases.
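
A toy sketch of the left/right comparison underlying such a grading system; the marker positions and the weaker-side/stronger-side ratio used as the symmetry score are illustrative assumptions, not the paper's formula.

```python
# Toy left/right comparison in the spirit of the grading system described
# above; marker layout and the scoring formula are illustrative assumptions.
import numpy as np

def side_movement(rest_pts, expr_pts):
    """Sum of marker displacements (rest -> expression) for one facial side."""
    return float(np.linalg.norm(expr_pts - rest_pts, axis=1).sum())

# Hypothetical (x, y) marker positions for each side, at rest and smiling.
left_rest = np.array([[10.0, 20.0], [12.0, 35.0], [15.0, 50.0]])
right_rest = np.array([[90.0, 20.0], [88.0, 35.0], [85.0, 50.0]])
left_smile = left_rest + [[1.5, -2.0], [2.0, -2.5], [1.0, -1.0]]
right_smile = right_rest + [[-0.5, -0.5], [-0.5, -1.0], [-0.2, -0.3]]

left_move = side_movement(left_rest, left_smile)
right_move = side_movement(right_rest, right_smile)

# Symmetry score: ratio of the weaker side to the stronger side (100% = symmetric).
score = 100.0 * min(left_move, right_move) / max(left_move, right_move)
print(f"left={left_move:.2f}, right={right_move:.2f}, symmetry={score:.1f}%")
```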

Gaze Detection by Computing Facial Rotation and Translation (얼굴의 회전 및 이동 분석에 의한 응시 위치 파악)

  • Lee, Jeong-Jun;Park, Kang-Ryoung;Kim, Jai-Hie
    • Journal of the Institute of Electronics Engineers of Korea SP / v.39 no.5 / pp.535-543 / 2002
  • In this paper, we propose a new gaze detection method using 2-D facial images captured by a camera mounted on top of the monitor. We consider only facial rotation and translation, not eye movements. The proposed method computes the gaze point caused by facial rotation and the amount of facial translation separately; combining the two yields the final gaze point on the monitor screen. The gaze point due to facial rotation is detected by a neural network (a multi-layered perceptron) whose inputs are the 2-D geometric changes of the facial feature points, while the amount of facial translation is estimated by real-time image processing algorithms. Experimental results show an RMS error of about 2.11 inches between the computed gaze positions and the real ones when the user sits about 50-70 cm from a 19-inch monitor. The processing time is about 0.7 seconds on a Pentium PC (233 MHz) with 320×240-pixel images.
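
A rough sketch of the two-part estimate described above, under illustrative assumptions: a multi-layer perceptron maps 2-D feature-point changes to the rotation-induced gaze point, and a separately estimated translation offset is then added; the input dimension, training data, and offset are stand-ins.

```python
# Sketch: MLP regressor maps feature-point displacements to the gaze point
# caused by head rotation; a translation offset (assumed to come from the
# image processing step) is added afterwards. All dimensions are assumed.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n = 300
# Inputs: 2-D displacement of (say) 6 facial feature points => 12 values.
feature_deltas = rng.normal(size=(n, 12))
# Targets: synthetic gaze point (x, y) on screen caused by rotation alone.
gaze_xy = feature_deltas @ rng.normal(size=(12, 2)) + rng.normal(scale=0.1, size=(n, 2))

mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=1)
mlp.fit(feature_deltas, gaze_xy)

rotation_gaze = mlp.predict(feature_deltas[:1])[0]  # gaze from head rotation
translation_offset = np.array([0.8, -0.3])          # from image processing (assumed)
final_gaze = rotation_gaze + translation_offset     # combined screen position
print("final gaze point:", final_gaze)
```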

A study of Beauty Make-up Using Computer Graphics (뷰티 메이크업을 위한 컴퓨터 그래픽스 활용에 관한 연구)

  • Kwon, Hyun-Ah
    • Fashion & Textile Research Journal / v.8 no.2 / pp.214-224 / 2006
  • Computer graphics, the reproduction of various information through image processing, is a technology that has recently become widely used in the makeup field as well. Adobe Illustrator in particular, unlike Adobe Photoshop, which is used for image editing and correction, is software suited to drawing images and therefore to reproducing beauty makeup. Beauty makeup is the work of adorning the human body to fit the aesthetic standards of its period, a plastic art that expresses shapes, colors, and textures using design elements; Adobe Illustrator is a 2D graphics tool that designs images from shapes with flat colors. In this study, I examined techniques for reproducing each element of the contours and colors of beauty makeup with Adobe Illustrator CS, thereby preparing reference data for such work. The study is limited, however, in that only one facial contour and one skin color were considered; I hope that future work will extend the use of Adobe Illustrator in richer ways by covering a variety of facial contours and skin colors.

Analysis of Facial Movement According to Opposite Emotions (상반된 감성에 따른 안면 움직임 차이에 대한 분석)

  • Lee, Eui Chul;Kim, Yoon-Kyoung;Bea, Min-Kyoung;Kim, Han-Sol
    • The Journal of the Korea Contents Association / v.15 no.10 / pp.1-9 / 2015
  • In this paper, facial movements under opposite emotion stimuli are analyzed by image processing of Kinect facial images. To induce the two opposite emotion pairs "Sad - Excitement" and "Contentment - Angry", which occupy opposite positions on Russell's 2D emotion model, both visual and auditory stimuli were given to subjects. First, 31 main points were chosen from the 121 facial feature points of the active appearance model obtained from the Kinect Face Tracking SDK. Then, pixel changes around the 31 main points were analyzed, using a local minimum shift matching method to handle the non-linearity of facial movement (see the sketch below). In the results, the "Sad" and "Excitement" emotions produced right-side and left-side facial movements, respectively; left-side movement was comparatively more frequent for the "Contentment" emotion, whereas the "Angry" emotion produced movement on both sides.
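
The local matching step might look like the following sketch, which finds the shift of a small patch around a feature point by minimizing the sum of absolute differences; the patch and search window sizes are assumptions, not the paper's parameters.

```python
# Illustrative block-matching sketch: measure how a patch around a facial
# feature point shifts between two frames by searching for the minimum
# sum-of-absolute-differences offset. Window sizes are assumptions.
import numpy as np

def patch_shift(prev, curr, pt, patch=5, search=3):
    """Return (dy, dx) minimizing SAD between patches around pt."""
    y, x = pt
    ref = prev[y - patch:y + patch + 1, x - patch:x + patch + 1].astype(float)
    best, best_off = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = curr[y + dy - patch:y + dy + patch + 1,
                        x + dx - patch:x + dx + patch + 1].astype(float)
            sad = np.abs(ref - cand).sum()
            if sad < best:
                best, best_off = sad, (dy, dx)
    return best_off

rng = np.random.default_rng(2)
frame1 = rng.integers(0, 255, size=(100, 100))
frame2 = np.roll(frame1, shift=(1, 2), axis=(0, 1))  # simulate a small shift
print(patch_shift(frame1, frame2, pt=(50, 50)))       # expect (1, 2)
```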

Facial Contour Extraction in PC Camera Images using Active Contour Models (동적 윤곽선 모델을 이용한 PC 카메라 영상에서의 얼굴 윤곽선 추출)

  • Kim Young-Won;Jun Byung-Hwan
    • Proceedings of the Korea Contents Association Conference / 2005.11a / pp.633-638 / 2005
  • The extraction of a face is a very important part of human interfaces, biometrics, and security. In this paper, we apply a DCM (Dilation of Color and Motion) filter and Active Contour Models to extract the facial outline. First, the DCM filter is built by applying morphological dilation to the combination of a facial-color image and a previously dilated differential image; this filter removes complex backgrounds and detects the facial outline. Because Active Contour Models are strongly affected by their initial curves, we compute the rotational degree of the face using the geometric ratio of the face, eyes, and mouth. We use edgeness together with intensity as the image energy so that the outline can be extracted even in regions with weak edges. We acquired images of various head poses, with both eyes visible, from five people in an indoor space with a complex background. In experiments on a total of 125 images (25 per person), the average extraction rate of the facial outline was 98.1% and the average processing time was 0.2 s.
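
A loose sketch of a DCM-style mask under stated assumptions: a skin-color mask is combined with a dilated frame-difference (motion) mask, and the result is dilated again; the color ranges, threshold, and kernel size are common heuristics, not the paper's values.

```python
# DCM-style color-and-motion mask sketch; all numeric values are assumed.
import cv2
import numpy as np

def dcm_mask(prev_bgr, curr_bgr):
    # Skin-color mask in YCrCb space (commonly used heuristic ranges).
    ycrcb = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

    # Motion mask from the absolute frame difference, dilated to close gaps.
    gray_prev = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    gray_curr = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray_curr, gray_prev)
    _, motion = cv2.threshold(diff, 20, 255, cv2.THRESH_BINARY)
    kernel = np.ones((5, 5), np.uint8)
    motion = cv2.dilate(motion, kernel)

    # Combine color and motion, then dilate once more.
    return cv2.dilate(cv2.bitwise_and(skin, motion), kernel)

prev = np.zeros((120, 160, 3), np.uint8)
curr = prev.copy()
cv2.circle(curr, (80, 60), 25, (120, 150, 170), -1)  # fake moving "face" blob
print(dcm_mask(prev, curr).sum() > 0)
```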

CREATING JOYFUL DIGESTS BY EXPLOITING SMILE/LAUGHTER FACIAL EXPRESSIONS PRESENT IN VIDEO

  • Kowalik, Uwe;Hidaka, Kota;Irie, Go;Kojima, Akira
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.267-272 / 2009
  • Video digests provide an effective way of checking video content rapidly due to their very compact form. By watching a digest, users can easily decide whether a specific item is worth seeing in full, so the impression created by the digest greatly influences the user's choice of video contents. We propose a novel method of automatic digest creation that evokes a joyful impression by exploiting smile/laughter facial expressions as emotional cues of joy in the video. We assume that a digest presenting smiling/laughing faces appeals to the user, who is assured that the smiles and laughter are caused by joyful events inside the video. For detecting smile/laughter faces we developed a neural network-based method for classifying facial expressions. Video segmentation is performed by automatic shot detection, and appropriate shots are selected for the digest by ranking them on the smile/laughter detection result. We report user trials conducted to assess the visual impression made by 'joyful' digests created automatically by our system. The results show that users tend to prefer emotional digests containing laughing faces, which suggests that the attractiveness of automatically created video digests can be improved by extracting emotional cues of the contents through automatic facial expression analysis as proposed in this paper.
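
The shot-ranking step could be sketched as follows: each detected shot carries a smile/laughter score, and the top-ranked shots fill a digest length budget. The score field and the budget are assumptions of this sketch, not the authors' exact selection criteria.

```python
# Sketch: rank shots by smile/laughter evidence and keep the best shots
# until the digest length budget is filled. Values are illustrative.
from dataclasses import dataclass

@dataclass
class Shot:
    start: float        # seconds
    end: float
    smile_score: float  # e.g. mean per-frame smile probability (assumed)

def build_digest(shots, budget_s=30.0):
    """Pick highest-scoring shots, in rank order, within the time budget."""
    chosen, used = [], 0.0
    for shot in sorted(shots, key=lambda s: s.smile_score, reverse=True):
        length = shot.end - shot.start
        if used + length <= budget_s:
            chosen.append(shot)
            used += length
    return sorted(chosen, key=lambda s: s.start)  # play in original order

shots = [Shot(0, 8, 0.1), Shot(8, 20, 0.7), Shot(20, 31, 0.9), Shot(31, 40, 0.4)]
print(build_digest(shots, budget_s=25.0))
```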

A Design of Stress Measurement System using Facial and Verbal Sentiment Analysis (표정과 언어 감성 분석을 통한 스트레스 측정시스템 설계)

  • Yuw, Suhwa;Chun, Jiwon;Lee, Aejin;Kim, Yoonhee
    • KNOM Review / v.24 no.2 / pp.35-47 / 2021
  • Modern society demands constant competition and self-improvement, and exposes people to many kinds of stress. A person under stress often reveals it in facial expression and language, so stress can be measured by analyzing both. This paper proposes a stress measurement system based on facial and verbal sentiment analysis. The method analyzes a person's facial expression and verbal sentiment to derive a stress index from the dominant emotional value of each modality, and then derives an integrated stress index based on the consistency between the facial and verbal results. Quantifying and generalizing stress measurement in this way lets researchers evaluate the stress index objectively.
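
One plausible, purely illustrative way to combine the two modality scores with a consistency term; the formula below is an assumption of this sketch, not the paper's derivation.

```python
# Toy integrated stress index: each modality yields a score in [0, 1],
# and agreement between the two scores increases confidence in the result.
def integrated_stress(facial: float, verbal: float) -> float:
    base = (facial + verbal) / 2.0            # average of the two modalities
    consistency = 1.0 - abs(facial - verbal)  # 1.0 = perfect agreement
    return base * consistency                 # discount inconsistent readings

print(integrated_stress(0.8, 0.7))  # consistent: high integrated index
print(integrated_stress(0.9, 0.2))  # inconsistent: heavily discounted
```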

Extraction of Facial Feature Parameters by Pixel Labeling (화소 라벨링에 의한 얼굴 특징 인수 추출)

  • 김승업;이우범;김욱현;강병욱
    • Journal of the Institute of Convergence Signal Processing / v.2 no.2 / pp.47-54 / 2001
  • The main purpose of this study is to propose an algorithm for extracting facial features. To this end, the input color image is first converted into a binary image, and the area of each region is calculated after pixel labeling in variable block units. Second, the circumference of each region is calculated by contour following, and from the area and circumference values the degrees of resemblance in area, circumference, circularity, and shape are computed. Third, using these degrees of resemblance together with the general structure and characteristics of the face (such as symmetric distances), parameters describing the features of the eyes, nose, and mouth are extracted, yielding the feature parameters of the frontal face. In this study, twelve facial feature parameters were extracted from 297 test images of 100 people, with an extraction rate of 92.93%.
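
A minimal sketch of this kind of labeling pipeline, assuming standard OpenCV primitives: binarize, label connected regions, then compute each region's area, circumference, and circularity (4πA/P², which is 1.0 for a perfect circle). The synthetic shapes stand in for facial regions.

```python
# Label connected regions and score their circularity 4*pi*A/P^2;
# the test shapes and thresholds are illustrative only.
import math
import cv2
import numpy as np

img = np.zeros((100, 100), np.uint8)
cv2.circle(img, (30, 30), 12, 255, -1)           # round region (eye-like)
cv2.rectangle(img, (60, 60), (90, 70), 255, -1)  # elongated region

n, labels = cv2.connectedComponents(img)
for lbl in range(1, n):
    mask = (labels == lbl).astype(np.uint8)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    area = cv2.contourArea(contours[0])
    perim = cv2.arcLength(contours[0], True)
    circularity = 4 * math.pi * area / (perim * perim) if perim else 0.0
    print(f"region {lbl}: area={area:.0f}, perimeter={perim:.1f}, "
          f"circularity={circularity:.2f}")
```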

A Local Feature-Based Robust Approach for Facial Expression Recognition from Depth Video

  • Uddin, Md. Zia;Kim, Jaehyoun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.3 / pp.1390-1403 / 2016
  • Facial expression recognition (FER) plays a very significant role in computer vision, pattern recognition, and image processing applications such as human-computer interaction, as it provides rich information about people's emotions. For video-based FER, depth cameras can be better candidates than RGB cameras: a person's face cannot easily be recognized from distance-based depth video, so depth cameras also resolve some of the privacy issues that arise with RGB faces. A good FER system relies heavily on the extraction of robust features as well as on the recognition engine. In this work, an efficient novel approach is proposed to recognize facial expressions from time-sequential depth videos. First, Local Binary Pattern (LBP) features are extracted from the time-sequential depth faces; these are then transformed by Generalized Discriminant Analysis (GDA) to make them more robust; finally, the LBP-GDA features are fed into Hidden Markov Models (HMMs) to train and recognize the different facial expressions. The proposed depth-based approach is compared with conventional approaches based on Principal Component Analysis (PCA), Independent Component Analysis (ICA), and Linear Discriminant Analysis (LDA), and outperforms them with better recognition rates.
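
A basic sketch of the 8-neighbor LBP operator named above, applied to random stand-in data rather than the paper's depth dataset; the 256-bin histogram it yields is the usual LBP feature vector.

```python
# Classic 8-neighbor LBP: each interior pixel gets an 8-bit code built
# from threshold comparisons against its neighbors.
import numpy as np

def lbp_image(img):
    """Compute the 8-bit LBP code of each interior pixel."""
    img = img.astype(np.int32)
    center = img[1:-1, 1:-1]
    codes = np.zeros_like(center)
    # Neighbor offsets in clockwise order, each contributing one bit.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= ((neigh >= center).astype(np.int32) << bit)
    return codes

depth_face = np.random.default_rng(3).integers(0, 255, size=(64, 64))
hist, _ = np.histogram(lbp_image(depth_face), bins=256, range=(0, 256))
print(hist.shape)  # 256-bin LBP histogram, the usual FER feature vector
```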