• Title/Summary/Keyword: Facial Color Control

Search Results: 31

Facial Color Control based on Emotion-Color Theory (정서-색채 이론에 기반한 게임 캐릭터의 동적 얼굴 색 제어)

  • Park, Kyu-Ho; Kim, Tae-Yong
    • Journal of Korea Multimedia Society, v.12 no.8, pp.1128-1141, 2009
  • Graphical expressions are continuously improving, spurred by the astonishing growth of the game technology industry. Despite such improvements, users still demand a more natural gaming environment and truer reflections of human emotions. In real life, people can read a person's mood from facial color and expression, so interactive facial colors in game characters provide a deeper level of reality. In this paper we propose a facial color adaptive technique, which combines an emotional model based on human emotion theory, emotional expression patterns using the colors of animation contents, and an emotional reaction speed function based on human personality theory, as opposed to past methods that expressed emotion through blood flow, pulse, or skin temperature. Experiments show that expression through the Facial Color Model, based on the facial color adaptive technique and the expression patterns of animation contents, is effective in conveying character emotions. Moreover, the proposed facial color adaptive technique can be applied not only to 2D games, but to 3D games as well.
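The core idea of the abstract (a per-emotion target color plus a personality-dependent reaction speed) can be sketched as follows; the emotion-to-color table, the exponential-style smoothing, and all names here are illustrative assumptions, not the paper's model.

```python
# Hypothetical sketch of a facial-color adaptive update: each emotion has a
# target RGB color, and the current facial color moves toward it at a rate
# controlled by a personality-dependent reaction speed.

EMOTION_COLORS = {            # assumed emotion -> RGB target mapping
    "anger":   (220, 60, 60),
    "fear":    (170, 170, 200),
    "neutral": (230, 190, 170),
}

def update_facial_color(current, emotion, reaction_speed, dt):
    """Move the current RGB color toward the emotion's target color.

    reaction_speed in (0, 1]: higher means faster color change,
    a stand-in for the paper's personality-based speed function.
    """
    target = EMOTION_COLORS[emotion]
    alpha = min(1.0, reaction_speed * dt)
    return tuple(c + alpha * (t - c) for c, t in zip(current, target))

color = (230.0, 190.0, 170.0)          # start at a neutral skin tone
for _ in range(10):                    # character becomes angry
    color = update_facial_color(color, "anger", 0.5, 1.0)
```

After a few frames the face has converged near the "anger" target, and a smaller `reaction_speed` would model a slower-reacting personality.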

Face and Facial Feature Detection under Pose Variation of User Face for Human-Robot Interaction (인간-로봇 상호작용을 위한 자세가 변하는 사용자 얼굴검출 및 얼굴요소 위치추정)

  • Park Sung-Kee; Park Mignon; Lee Taigun
    • Journal of Institute of Control, Robotics and Systems, v.11 no.1, pp.50-57, 2005
  • We present a simple and effective method of face and facial feature detection under pose variation of the user's face in complex backgrounds, for human-robot interaction. Our approach is a flexible method that can be applied to both color and gray facial images and is feasible for detecting facial features in quasi real time. Based on the intensity characteristics of the neighborhood of facial features, a new directional template for facial features is defined. By applying this template to the input facial image, a novel edge-like blob map (EBM) with multiple intensity strengths is constructed. Regardless of the color information of the input image, using this map and conditions for facial characteristics, we show that the locations of the face and its features - i.e., two eyes and a mouth - can be successfully estimated. Without information about the facial area boundary, the final candidate face region is determined by both the obtained locations of facial features and weighted correlation values with standard facial templates. Experimental results on many color images and well-known gray-level face database images confirm the usefulness of the proposed algorithm.
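The edge-like blob map idea, i.e. that eyes and mouth are darker than the rows just above and below them, can be illustrated with a crude stand-in; this is not the paper's directional template, just a minimal "dark horizontal blob" response over a synthetic image.

```python
import numpy as np

# Hedged sketch: a facial feature such as an eye is darker than the area
# directly above and below it, so a response of "surrounding brightness
# minus pixel brightness" produces a crude edge-like blob map.

def dark_blob_map(gray, gap=2):
    """Response = brightness above/below the pixel minus its own brightness."""
    g = gray.astype(float)
    h, _ = g.shape
    out = np.zeros_like(g)
    for y in range(gap, h - gap):
        out[y] = (g[y - gap] + g[y + gap]) / 2.0 - g[y]
    return np.clip(out, 0, None)       # keep only dark-blob responses

# Synthetic image: bright background with one dark horizontal stripe ("eye").
img = np.full((11, 11), 200.0)
img[5, 2:9] = 40.0
resp = dark_blob_map(img)
y, x = np.unravel_index(np.argmax(resp), resp.shape)
```

The strongest response lands on the dark stripe, which is the behavior a feature-candidate map needs before geometric verification.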

Facial Point Classifier using Convolution Neural Network and Cascade Facial Point Detector (컨볼루셔널 신경망과 케스케이드 안면 특징점 검출기를 이용한 얼굴의 특징점 분류)

  • Yu, Je-Hun; Ko, Kwang-Eun; Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems, v.22 no.3, pp.241-246, 2016
  • Nowadays many people are interested in facial expression and human behavior, and human-robot interaction (HRI) researchers utilize digital image processing, pattern recognition, and machine learning in these studies. Facial feature point detection algorithms are very important for face recognition, gaze tracking, and expression and emotion recognition. In this paper, a cascade facial feature point detector is used to find facial feature points such as the eyes, nose, and mouth. However, the detector has difficulty extracting the feature points from some images, because images differ in conditions such as size, color, and brightness. Therefore, in this paper, we propose an algorithm that modifies the cascade facial feature point detector using a convolutional neural network. The structure of the convolutional neural network is based on Yann LeCun's LeNet-5. As input data for the convolutional neural network, outputs from the cascade facial feature point detector, in both color and gray form, were used. The images were resized to 32×32, and the gray images were made using the YUV format. The gray and color images are the basis for the convolutional neural network. We then classified about 1,200 test images showing subjects. This research found that the proposed method is more accurate than the cascade facial feature point detector alone, because the algorithm refines the results of the cascade facial feature point detector.
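The preprocessing the abstract describes (resize patches to 32×32 and derive gray/YUV data) can be sketched as below. The BT.601 luma weights are standard, and the nearest-neighbour resize is an illustrative simplification, not the authors' code.

```python
# Hedged sketch of the described preprocessing: per-pixel RGB -> YUV
# conversion (BT.601 weights; Y is the gray channel) and a nearest-
# neighbour resize of a patch to 32x32.

def rgb_to_yuv(r, g, b):
    """BT.601 RGB -> YUV for one pixel (components in 0..255)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)
    v = 0.877 * (r - y)
    return y, u, v

def resize_nearest(img, size=32):
    """Nearest-neighbour resize of a 2D list to size x size."""
    h, w = len(img), len(img[0])
    return [[img[y * h // size][x * w // size] for x in range(size)]
            for y in range(size)]

luma, u, v = rgb_to_yuv(255, 0, 0)     # pure red pixel
patch = resize_nearest([[1, 2], [3, 4]], 32)
```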

Emotion Recognition Using Eigenspace

  • Lee, Sang-Yun; Oh, Jae-Heung; Chung, Geun-Ho; Joo, Young-Hoon; Sim, Kwee-Bo
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference, 2002.10a, pp.111.1-111, 2002
  • System configuration: 1. The first part is image acquisition. 2. The second part creates the vector image and processes the obtained facial image. This part finds the facial area from the skin color: we first find the skin color area with the highest weight from the eigenface, which consists of eigenvectors, and then create the vector image of the eigenface from the obtained facial area. 3. The third part is the recognition module.
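The eigenface step the abstract outlines can be sketched as standard PCA over flattened face vectors; the tiny 4-pixel "faces" below and the SVD-based formulation are illustrative, not the authors' implementation.

```python
import numpy as np

# Minimal eigenface sketch (assumed): flatten face images, subtract the
# mean face, and take the top eigenvectors of the covariance as the basis.

def eigenfaces(faces, k=2):
    """faces: (n_samples, n_pixels) array; returns (mean face, top-k basis)."""
    X = np.asarray(faces, dtype=float)
    mean = X.mean(axis=0)
    centered = X - mean
    # SVD of the centered data: rows of vt are the covariance eigenvectors
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def project(face, mean, basis):
    """Eigenspace coordinates of one face vector."""
    return basis @ (np.asarray(face, dtype=float) - mean)

faces = np.array([[1., 2., 3., 4.],
                  [2., 3., 4., 5.],
                  [4., 3., 2., 1.]])
mean, basis = eigenfaces(faces, k=2)
w = project(faces[0], mean, basis)
```

Recognition then compares these low-dimensional coordinates instead of raw pixels.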

Treatment of Postburn Facial Hyperpigmentation with Vitamin C Iontophoresis (비타민 C 이온 영동법을 이용한 안면부 화상 후 과색소 침착의 치료)

  • Choi, Jae-Il; Lee, Ji-Won; Suhk, Jeong-Hoon; Yang, Wan-Suk
    • Archives of Plastic Surgery, v.38 no.6, pp.765-774, 2011
  • Purpose: Many facial burn patients suffer from hyperpigmentation, and its treatment has been challenging. Vitamin C (ascorbic acid) has important physiologic effects on skin, including inhibition of melanogenesis, promotion of collagen biosynthesis, prevention of free radical formation, and acceleration of wound healing. The purpose of this study is to evaluate the effectiveness of Vitamin C iontophoresis for the treatment of postburn hyperpigmentation. Methods: The authors performed a retrospective analysis of 93 patients who were admitted for the treatment of facial burns from February 2008 through February 2010. Among them, 51 patients were treated with Vitamin C iontophoresis to control postburn hyperpigmentation and 42 patients were not. The experimental group comprised 20 of the 51 patients who had been treated with Vitamin C iontophoresis and had normal facial skin on the comparable contralateral aesthetic unit. The control group comprised 20 of the 42 patients who were not treated with Vitamin C iontophoresis and also had a contralateral normal aesthetic unit. The resulting color in the 20 patients treated with Vitamin C iontophoresis was compared with the color of the contralateral normal facial skin using a digital scale color analysis. Results were analyzed with the Wilcoxon signed rank test. Results: The analysis revealed significant improvement of hyperpigmentation in the experimental group compared to the control group. The difference between the initial value and the value at 6 months showed significant change: the mean (Δinitial − Δ6month) of the experimental group was 11.61 and that of the control group was 7.23, a difference of 4.38 between the two groups. Therefore, Vitamin C iontophoresis showed significant improvement of hyperpigmentation in the experimental group compared with the control group. Conclusion: Vitamin C iontophoresis is an effective treatment modality for postburn hyperpigmentation.
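The Δinitial − Δ6month measure can be made concrete with a small sketch: Δ is the color difference between the treated unit and the contralateral normal unit, and improvement is the drop in Δ over time. The Euclidean RGB distance and all pixel values here are assumptions for illustration; the paper does not specify its color metric in this abstract.

```python
# Hedged sketch of the digital color analysis idea: Delta = mean color
# difference between the hyperpigmented unit and the contralateral normal
# unit; improvement = Delta(initial) - Delta(6 months).

def color_delta(patch_a, patch_b):
    """Mean per-pixel Euclidean RGB distance between two same-size patches."""
    dists = [
        ((ra - rb) ** 2 + (ga - gb) ** 2 + (ba - bb) ** 2) ** 0.5
        for (ra, ga, ba), (rb, gb, bb) in zip(patch_a, patch_b)
    ]
    return sum(dists) / len(dists)

normal  = [(200, 160, 140)] * 4          # contralateral normal skin (made up)
initial = [(150, 110, 100)] * 4          # darker, hyperpigmented baseline
month6  = [(190, 150, 135)] * 4          # closer to normal after treatment

improvement = color_delta(initial, normal) - color_delta(month6, normal)
```

A positive `improvement` corresponds to the pigmentation moving toward the normal-skin color, which is what the study's mean values of 11.61 vs. 7.23 express.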

A Realtime Expression Control for Realistic 3D Facial Animation (현실감 있는 3차원 얼굴 애니메이션을 위한 실시간 표정 제어)

  • Kim Jung-Gi; Min Kyong-Pil; Chun Jun-Chul; Choi Yong-Gil
    • Journal of Internet Computing and Services, v.7 no.2, pp.23-35, 2006
  • This work presents a novel method which extracts the facial region and features from motion pictures automatically and controls 3D facial expressions in real time. To extract the facial region and facial feature points from each color frame of a motion picture, a new nonparametric skin color model is proposed rather than a parametric one. Conventionally used parametric skin color models, which represent the facial color distribution as Gaussian, lack robustness under varying lighting conditions, so additional work is needed to extract the exact facial region from face images. To resolve this limitation of current skin color models, we exploit the Hue-Tint chrominance components and represent the skin chrominance distribution as a linear function, which reduces the error in detecting the facial region. Moreover, the minimal facial feature positions detected by the proposed skin model are adjusted using edge information of the detected facial region along with the proportions of the face. To produce realistic facial expressions, we adopt Waters' linear muscle model and apply an extended version of Waters' muscles to the variation of the facial features of the 3D face. The experiments show that the proposed approach efficiently detects facial feature points and naturally controls the facial expression of the 3D face model.
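The contrast the abstract draws (simple chrominance bounds instead of a fitted Gaussian) can be sketched with a rule-based skin test. The hue and saturation bounds below are illustrative placeholders in HSV space, not the paper's fitted Hue-Tint linear function.

```python
import colorsys

# Hedged sketch of a nonparametric skin-chrominance rule: classify a pixel
# as skin if its chrominance falls inside simple linear bounds, with no
# Gaussian model. Bounds are assumptions, not the paper's function.

def is_skin(r, g, b, hue_lo=0.0, hue_hi=0.11, sat_lo=0.2, sat_hi=0.7):
    """Crude skin test on one RGB pixel using hue/saturation bounds."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return hue_lo <= h <= hue_hi and sat_lo <= s <= sat_hi and v > 0.3

skin_ok  = is_skin(224, 172, 138)   # typical light skin tone
skin_bad = is_skin(40, 90, 200)     # blue background pixel
```

Such threshold rules are cheap per pixel, which is what makes per-frame facial-region extraction feasible in real time.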

Emotion Recognition and Expression System of Robot Based on 2D Facial Image (2D 얼굴 영상을 이용한 로봇의 감정인식 및 표현시스템)

  • Lee, Dong-Hoon; Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems, v.13 no.4, pp.371-376, 2007
  • This paper presents an emotion recognition and expression system for an intelligent robot such as a home robot or a service robot. Emotion recognition in the robot is based on facial images, using the motion and position of many facial features. We apply a tracking algorithm to recognize a moving user from the mobile robot, and eliminate the skin color of hands and of the background outside the facial region by using a facial region detection algorithm on the user image. After normalization operations - enlarging or reducing the image according to the distance of the detected facial region, and rotating the image by the angle of the face - the mobile robot can obtain a facial image of fixed size. We also implement a multi-feature selection algorithm to enable the robot to recognize the user's emotion. In this paper, a multi-layer perceptron, an Artificial Neural Network (ANN), is used for pattern recognition, with the Back Propagation (BP) algorithm for learning. The emotion of the user that the robot recognizes is expressed on a graphic LCD: two coordinates are changed according to the emotion output of the ANN, and the parameters of the facial elements (eyes, eyebrows, mouth) change with those coordinates. With this system, complex human emotions are expressed through the avatar on the LCD.
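The classifier type named here, a multi-layer perceptron trained with backpropagation, can be sketched on toy data. The 2-feature XOR-style labels, layer sizes, and learning rate below are illustrative stand-ins; real inputs would be the selected facial-feature measurements.

```python
import numpy as np

# Toy MLP + backpropagation sketch (assumed shapes, not the paper's network):
# one hidden layer of 8 sigmoid units, trained by plain gradient descent.

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
t = np.array([[0.], [1.], [1.], [0.]])      # XOR-style toy labels

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
sig = lambda z: 1 / (1 + np.exp(-z))

def forward():
    h = sig(X @ W1 + b1)
    return h, sig(h @ W2 + b2)

_, y = forward()
loss_before = float(((y - t) ** 2).mean())

lr = 0.5
for _ in range(3000):
    h, y = forward()
    dy = (y - t) * y * (1 - y)              # gradient through output sigmoid
    dh = dy @ W2.T * h * (1 - h)            # backpropagated to hidden layer
    W2 -= lr * h.T @ dy;  b2 -= lr * dy.sum(0)
    W1 -= lr * X.T @ dh;  b1 -= lr * dh.sum(0)

_, y = forward()
loss_after = float(((y - t) ** 2).mean())
```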

Emotion Detection Algorithm Using Frontal Face Image

  • Kim, Moon-Hwan; Joo, Young-Hoon; Park, Jin-Bae
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference, 2005.06a, pp.2373-2378, 2005
  • An emotion detection algorithm using frontal facial images is presented in this paper. The algorithm is composed of three main stages: image processing, facial feature extraction, and emotion detection. In the image processing stage, the face region and facial components are extracted by using a fuzzy color filter, a virtual face model, and histogram analysis. The features for emotion detection are extracted from the facial components in the facial feature extraction stage. In the emotion detection stage, a fuzzy classifier is adopted to recognize emotion from the extracted features. Experimental results show that the proposed algorithm can detect emotion well.
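A fuzzy classifier of the kind named in the final stage can be sketched with triangular membership functions over one feature. The feature (mouth openness), the membership shapes, and the emotion labels are assumptions for illustration, not the paper's fitted rules.

```python
# Hedged sketch of a fuzzy classifier: each emotion has a triangular
# membership function over a single feature, and the emotion with the
# highest membership wins.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

RULES = {                        # assumed (a, b, c) per emotion
    "neutral":  (0.0, 0.0, 0.4),
    "surprise": (0.2, 0.6, 1.0),
    "joy":      (0.5, 1.0, 1.0),
}

def classify(openness):
    """Pick the emotion whose membership is highest for this feature value."""
    scores = {emo: tri(openness, *abc) for emo, abc in RULES.items()}
    return max(scores, key=scores.get)
```

Overlapping memberships give graded decisions near category boundaries, which is the usual motivation for a fuzzy classifier over a hard threshold.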

Development of Pose-Invariant Face Recognition System for Mobile Robot Applications

  • Lee, Tai-Gun; Park, Sung-Kee; Kim, Mun-Sang; Park, Mig-Non
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference, 2003.10a, pp.783-788, 2003
  • In this paper, we present a new approach to detect and recognize human faces in images from a vision camera mounted on a mobile robot platform. Due to the mobility of the camera platform, the obtained facial image is small and varies in pose, so the algorithm must cope with these constraints and detect and recognize faces in nearly real time. In the detection step, a 'coarse to fine' strategy is used. First, the region boundary including the face is roughly located by dual ellipse templates of facial color, and within this region the locations of the three main facial features - two eyes and the mouth - are estimated. For this, simplified facial feature maps using characteristic chrominance are computed, and candidate pixels are segmented into eye or mouth pixel groups. These candidate facial features are verified by checking whether the length and orientation of feature pairs are suitable for face geometry. In the recognition step, a pseudo-convex hull area of the gray face image is defined which includes the feature triangle connecting the two eyes and the mouth. A random lattice line set is composed and laid over this convex hull area, and the 2D appearance of the area is represented. From these procedures, facial information of the detected face is obtained, and face DB images are processed similarly for each person class. Based on the facial information of these areas, a distance measure of the match of lattice lines is calculated, and the face image is recognized using this measure as a classifier. The proposed detection and recognition algorithms overcome the constraints of a previous approach [15], make real-time face detection and recognition possible, and guarantee correct recognition regardless of some pose variation of the face. Their usefulness in mobile robot applications is demonstrated.
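The geometric verification step, checking that candidate eye and mouth points form a plausible face triangle, can be sketched as below; the tilt and distance tolerances are illustrative assumptions, not the paper's thresholds.

```python
import math

# Hedged sketch of face-geometry verification: accept an (eye, eye, mouth)
# candidate triple only if the eye line is roughly horizontal and the mouth
# sits an appropriate distance below the eye midpoint.

def plausible_face(eye_l, eye_r, mouth, max_tilt_deg=30.0):
    """Return True if the three candidate points fit a rough face geometry."""
    ex, ey = eye_r[0] - eye_l[0], eye_r[1] - eye_l[1]
    eye_dist = math.hypot(ex, ey)
    if eye_dist == 0:
        return False
    tilt = abs(math.degrees(math.atan2(ey, ex)))
    if tilt > max_tilt_deg:                      # eye line too slanted
        return False
    mid = ((eye_l[0] + eye_r[0]) / 2, (eye_l[1] + eye_r[1]) / 2)
    mouth_drop = mouth[1] - mid[1]               # mouth must be below the eyes
    return 0.5 * eye_dist <= mouth_drop <= 2.0 * eye_dist

ok  = plausible_face((100, 100), (160, 104), (130, 160))   # sensible triple
bad = plausible_face((100, 100), (160, 104), (130, 90))    # "mouth" above eyes
```

Filtering triples this way discards most false candidates cheaply before the more expensive lattice-line matching.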

The Effects of Music Therapy on Recovery of Consciousness and Vital Signs in Postoperative Patients in the Recovery Room (음악요법이 수술직후 환자의 의식회복과 활력징후에 미치는 영향)

  • Kim Sook-Jung; Jun Eun-Hee
    • Journal of Korean Academy of Fundamentals of Nursing, v.7 no.2, pp.222-238, 2000
  • The purpose of this study was to demonstrate the effect of music therapy, as a nursing intervention, on recovery of consciousness and vital signs in postoperative patients in the recovery room. The subjects were fifty-three postoperative patients who were transferred from the OR to the RR at Kwangju Christian Hospital in Kwangju City. Thirty of them were assigned to the experimental group and twenty-three to the control group. The subjects were between twenty and sixty years of age, had undergone general anesthesia without any special complications, and were not completely awake. The data were collected for six months, from July 1999 to February 2000. The method was to compare the condition of the subjects in each group at the beginning and repeatedly at set times. The features observed were the level of consciousness, the frequency of complaints of pain, and the vital signs of the subjects before, and 15 minutes, 30 minutes, and 60 minutes after, hearing their favorite music for 30 minutes. The results are as follows. 1. Recovery of consciousness was revealed through significant changes in facial expression, facial color, and grip strength in the experimental group more strongly than in the control group; no significant changes were shown in response to verbal order. The differences in recovery of consciousness pre- and post-music therapy between the two groups were not significant in verbal order, facial expression, or grip strength; however, significant changes were seen in facial color. 2. There were no significant differences between the two groups in changes in the frequency of pain complaints after music therapy; however, a significant difference was shown in the pre-post music therapy scores. 3. Vital signs did not show a significant difference between the two groups; however, the SpO2 of the experimental group was significantly elevated after 60 minutes. The pre-post difference in vital signs between the two groups was significant only in body temperature. This study showed that music therapy given to postoperative patients promotes changes in facial expression, facial color, and grip strength, helping recovery of consciousness, stabilizing vital signs, elevating SpO2, and reducing complaints of pain. It is recommended that, if the patient wants it, music therapy be given right after surgery in the recovery room as a nursing intervention.
