• Title/Summary/Keyword: Facial Region

Emotion Recognition and Expression System of Robot Based on 2D Facial Image (2D 얼굴 영상을 이용한 로봇의 감정인식 및 표현시스템)

  • Lee, Dong-Hoon;Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.13 no.4
    • /
    • pp.371-376
    • /
    • 2007
  • This paper presents an emotion recognition and expression system for an intelligent robot such as a home or service robot. The robot recognizes emotion from a facial image, using the motion and position of multiple facial features. A tracking algorithm is applied so that the mobile robot can follow a moving user, and a facial-region detection algorithm removes the skin color of the hands and the background outside the facial region from the captured user image. After normalization, in which the image is enlarged or reduced according to the distance to the detected facial region and rotated according to the angle of the face, the robot obtains a facial image of fixed size. A multi-feature selection algorithm is implemented so that the robot can recognize the user's emotion. A multilayer perceptron, an Artificial Neural Network (ANN), is used as the pattern recognizer and is trained with the Back-Propagation (BP) algorithm. The emotion recognized by the robot is expressed on a graphic LCD: two coordinates are changed according to the emotion output of the ANN, and the parameters of the facial elements (eyes, eyebrows, mouth) of an avatar are changed according to these coordinates. With this system, complex human emotions are expressed through the avatar on the LCD.
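
The abstract does not give the network configuration; the sketch below is only a minimal illustration of a multilayer perceptron trained with back-propagation of the kind described, assuming a fixed-length facial-feature vector as input and a small, hypothetical set of emotion labels.

```python
# Minimal sketch of an MLP + back-propagation emotion classifier.
# Sizes, class names, and the feature vector are illustrative assumptions,
# not the authors' actual configuration.
import numpy as np

EMOTIONS = ["neutral", "happy", "surprised", "angry"]  # hypothetical label set

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class EmotionMLP:
    def __init__(self, n_in, n_hidden, n_out, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0, 0.1, (n_in, n_hidden))
        self.W2 = rng.normal(0, 0.1, (n_hidden, n_out))
        self.lr = lr

    def forward(self, x):
        self.h = sigmoid(x @ self.W1)       # hidden activations
        self.o = sigmoid(self.h @ self.W2)  # output activations
        return self.o

    def backprop(self, x, target):
        o = self.forward(x)
        # output and hidden error terms (standard BP with sigmoid units)
        delta_o = (o - target) * o * (1 - o)
        delta_h = (delta_o @ self.W2.T) * self.h * (1 - self.h)
        self.W2 -= self.lr * np.outer(self.h, delta_o)
        self.W1 -= self.lr * np.outer(x, delta_h)

# Usage: features from the normalized face (positions/motions of facial parts)
# would be fed as x; the predicted class would drive the LCD avatar.
net = EmotionMLP(n_in=12, n_hidden=8, n_out=len(EMOTIONS))
x = np.random.rand(12)              # stand-in for a facial feature vector
t = np.eye(len(EMOTIONS))[1]        # one-hot target, e.g. "happy"
for _ in range(100):
    net.backprop(x, t)
print(EMOTIONS[int(np.argmax(net.forward(x)))])
```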

METHODS OF EYEBROW REGION EXTRACTION AND MOUTH DETECTION FOR FACIAL CARICATURING SYSTEM PICASSO-2 EXHIBITED AT EXPO2005

  • Tokuda, Naoya;Fujiwara, Takayuki;Funahashi, Takuma;Koshimizu, Hiroyasu
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2009.01a
    • /
    • pp.425-428
    • /
    • 2009
  • We have researched and developed the caricature generation system PICASSO. PICASSO outputs a deformed facial caricature by comparing the input face with a prepared mean face. We specialized it as PICASSO-2 for a robot exhibited at Aichi EXPO2005; driven by PICASSO-2, the robot drew facial caricatures on shrimp rice crackers with a laser pen. We have recently been exhibiting another revised robot characterized by brush drawing. The system takes a couple of facial images with a CCD camera, extracts facial features from the images, and generates the facial caricature in real time. We experimentally evaluated the caricatures using a large amount of data collected at Aichi EXPO2005. The results showed that the system was not sufficiently accurate in eyebrow region extraction and mouth detection. In this paper, we propose improved methods for eyebrow region extraction and mouth detection.
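
The abstract only says that the caricature is obtained by comparing the input face with a prepared mean face. A common way to realize that idea, not necessarily PICASSO-2's exact rule, is to exaggerate the deviation of the input face's landmark points from the mean face, as in this toy sketch.

```python
# Illustrative sketch of mean-face-based caricaturing: landmark points of the
# input face are pushed away from the mean face by an exaggeration factor k.
# The landmark format and the value of k are assumptions.
import numpy as np

def caricature_points(input_pts, mean_pts, k=1.5):
    """input_pts, mean_pts: (N, 2) arrays of facial landmark coordinates."""
    return mean_pts + k * (input_pts - mean_pts)

mean_face = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.8]])    # toy landmarks
input_face = np.array([[0.1, -0.1], [0.9, 0.1], [0.5, 1.0]])
print(caricature_points(input_face, mean_face))
```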

Facial Feature Extraction using Genetic Algorithm from Original Image (배경영상에서 유전자 알고리즘을 이용한 얼굴의 각 부위 추출)

  • 이형우;이상진;박석일;민홍기;홍승홍
    • Proceedings of the IEEK Conference
    • /
    • 2000.06d
    • /
    • pp.214-217
    • /
    • 2000
  • Many studies on human recognition and coding schemes have been performed recently. In this context, we propose an automatic facial feature extraction algorithm. It consists of two main steps: evaluating the face region in an original background image, such as an office scene, and extracting the facial features from the evaluated face region. In the first step, a Genetic Algorithm is adopted to easily search for the face region in backgrounds such as offices and households; in the second step, Template Matching is used to extract the facial features. With the proposed algorithm, facial features can be extracted faster and more accurately.
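
As a rough illustration of the first step, the sketch below runs a genetic search over candidate face bounding boxes. The fitness function (the fraction of skin-colored pixels inside a candidate box) is an assumption made only to keep the example concrete, not the paper's actual fitness measure.

```python
# Sketch of a genetic search for a face bounding box (x, y, w, h).
# The skin-mask fitness is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)
H, W = 240, 320
skin_mask = np.zeros((H, W), bool)
skin_mask[60:160, 120:220] = True   # toy "skin" blob standing in for a face

def fitness(box):
    x, y, w, h = box
    patch = skin_mask[y:y + h, x:x + w]
    return patch.mean() if patch.size else 0.0

def random_box():
    w, h = rng.integers(40, 120, 2)
    return np.array([rng.integers(0, W - w), rng.integers(0, H - h), w, h])

pop = [random_box() for _ in range(30)]
for _ in range(50):                                  # generations
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                               # selection
    children = []
    for _ in range(20):
        a, b = rng.choice(len(parents), 2, replace=False)
        child = np.where(rng.random(4) < 0.5, parents[a], parents[b])  # crossover
        child = child + rng.integers(-8, 9, 4)       # mutation
        child = np.clip(child, [0, 0, 20, 20], [W - 21, H - 21, W, H])
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print("best face box (x, y, w, h):", best)
```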

A Study on Detecting Glasses in Facial Image

  • Jung, Sung-Gi;Paik, Doo-Won;Choi, Hyung-Il
    • Journal of the Korea Society of Computer and Information
    • /
    • v.20 no.12
    • /
    • pp.21-28
    • /
    • 2015
  • In this paper, we propose a method for detecting glasses in a facial image. Glasses are detected using a weighted sum of the results of two methods: facial-element detection and detection based on a glasses-frame candidate region. The facial-element method detects glasses by defining the detection probability of glasses according to whether each facial component is detected. The candidate-region method detects glasses by defining features of the glasses frame within the candidate region. Finally, the results of the two methods are combined with weights. The proposed method is expected to improve a security system's recognition of facial accessories by raising the detection performance for glasses and sunglasses, for example when using an ATM.
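
The fusion step described above can be illustrated very simply: two detector confidences are combined with weights and compared against a threshold. The weights and threshold below are illustrative, not values from the paper.

```python
# Minimal sketch of weighted score fusion for glasses detection.
# w1, w2 and the threshold are assumptions for illustration only.
def glasses_decision(p_elements, p_frame, w1=0.4, w2=0.6, threshold=0.5):
    """p_elements: confidence from facial-element detection,
    p_frame: confidence from the glasses-frame candidate region."""
    score = w1 * p_elements + w2 * p_frame
    return score >= threshold, score

present, score = glasses_decision(p_elements=0.3, p_frame=0.8)
print(present, round(score, 2))
```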

Risk Situation Recognition Using Facial Expression Recognition of Fear and Surprise Expression (공포와 놀람 표정인식을 이용한 위험상황 인지)

  • Kwak, Nae-Jong;Song, Teuk Seob
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.19 no.3
    • /
    • pp.523-528
    • /
    • 2015
  • This paper proposes an algorithm for recognizing risk situations from facial expressions. The proposed method recognizes the surprise and fear expressions among the various human emotional expressions in order to detect risk situations. It first extracts the facial region from the input image and then detects the eye and lip regions within the extracted face. Uniform LBP is then applied to each region to discriminate the facial expression and recognize the risk situation. The method is evaluated on images from the Cohn-Kanade database, which contains the six basic human facial expressions: smile, sadness, surprise, anger, disgust, and fear. The proposed method discriminates facial expressions well and reliably recognizes risk situations.
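
For reference, the sketch below computes an 8-neighbour uniform LBP histogram of the kind applied to the eye and lip regions; the region extraction and the expression classifier that consumes the histograms are omitted, and nothing here is specific to the authors' implementation.

```python
# Sketch of an 8-neighbour uniform LBP histogram over a grayscale region.
import numpy as np

def lbp_code(patch3x3):
    c = patch3x3[1, 1]
    # neighbours in circular order around the centre pixel
    neighbors = patch3x3[[0, 0, 0, 1, 2, 2, 2, 1], [0, 1, 2, 2, 2, 1, 0, 0]]
    bits = (neighbors >= c).astype(np.uint8)
    return int((bits * (1 << np.arange(8))).sum())

def is_uniform(code):
    # "uniform" pattern: at most two 0/1 transitions in the circular bit string
    bits = [(code >> i) & 1 for i in range(8)]
    return sum(bits[i] != bits[(i + 1) % 8] for i in range(8)) <= 2

def uniform_lbp_histogram(gray):
    # 58 uniform codes get their own bin; all non-uniform codes share one bin
    uniform_codes = [c for c in range(256) if is_uniform(c)]
    bin_of = {c: i for i, c in enumerate(uniform_codes)}
    hist = np.zeros(len(uniform_codes) + 1)
    for y in range(1, gray.shape[0] - 1):
        for x in range(1, gray.shape[1] - 1):
            code = lbp_code(gray[y - 1:y + 2, x - 1:x + 2])
            hist[bin_of.get(code, len(uniform_codes))] += 1
    return hist / max(hist.sum(), 1)

region = np.random.randint(0, 256, (24, 32))   # stand-in for an eye/lip crop
print(uniform_lbp_histogram(region).shape)     # (59,)
```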

Learning Directional LBP Features and Discriminative Feature Regions for Facial Expression Recognition (얼굴 표정 인식을 위한 방향성 LBP 특징과 분별 영역 학습)

  • Kang, Hyunwoo;Lim, Kil-Taek;Won, Chulho
    • Journal of Korea Multimedia Society
    • /
    • v.20 no.5
    • /
    • pp.748-757
    • /
    • 2017
  • To recognize facial expressions, good features that can represent them are essential, as is finding the characteristic regions where expressions appear most discriminatively. In this study, we propose a directional LBP feature for facial expression recognition and a method for finding the directional LBP operation factors and feature regions for expression classification. The proposed directional LBP features, which characterize fine facial micro-patterns, are defined by LBP operation factors (the direction and size of the operation mask) and by feature regions obtained through AdaBoost learning. The facial expression classifier is implemented as an SVM based on the learned discriminative regions and directional LBP operation factors. To verify the validity of the proposed method, facial expression recognition performance was measured in terms of accuracy, sensitivity, and specificity. Experimental results show that the proposed directional LBP and its learning method are useful for facial expression recognition.
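
The final classification stage can be sketched as an SVM trained on concatenated region histograms. The directional LBP operator and the AdaBoost region selection, which are the paper's contribution, are not reproduced here, so random feature vectors stand in for the real descriptors.

```python
# Sketch of the final SVM classification stage over concatenated region
# histograms; all data here is synthetic and stands in for real descriptors.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples, n_regions, bins_per_region = 200, 5, 59
X = rng.random((n_samples, n_regions * bins_per_region))  # concatenated histograms
y = rng.integers(0, 6, n_samples)                         # six expression classes

clf = SVC(kernel="rbf", C=1.0)
clf.fit(X[:150], y[:150])
print("held-out accuracy on random data:", clf.score(X[150:], y[150:]))
```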

Use of the facial dismasking flap approach for surgical treatment of a multifocal craniofacial abscess

  • Ishii, Yoshitaka;Yano, Tomoyuki;Ito, Osamu
    • Archives of Plastic Surgery
    • /
    • v.45 no.3
    • /
    • pp.271-274
    • /
    • 2018
  • The decision of which surgical approach to use for the treatment of a multifocal craniofacial abscess is still a controversial matter. A failure to control disease progress in the craniofacial region can potentially put the patient's life at risk. Therefore, understanding the various ways to approach the craniofacial region helps surgeons to obtain satisfactory results in such cases. In this report, we describe a patient who visited the emergency department with a large swelling in his right cheek. A blood test and computed tomography revealed odontogenic maxillary sinusitis. The patient developed sepsis due to a progressive multifocal abscess. An abscess was seen in the temporal muscle, infratemporal fossa, and interorbital region. To control this multifocal abscess, we used the facial dismasking flap (FDF) approach. After debridement using the FDF approach, we succeeded in obtaining sufficient drainage of the abscess, and the patient recovered from sepsis. The advantages of the FDF approach are that it provides a wide surgical field, extending from the parietal region to the mid-facial region, and that it leaves no aesthetically displeasing scars on the face. The FDF approach may be one of the best options to approach multifocal abscesses in the craniofacial region.

Clinical Studies on Obesity and Right-left of Patients with Bell's palsy (구안와사(口眼喎斜)의 비수(肥瘦)와 좌우(左右)에 관한 임상적 고찰)

  • Choi, Kyu-Ho;Lee, Youn-Kyu;Lee, Jae-Guen;Son, Ji-Young;Lee, Yeon-Kyeong;Kang, Seok-Bong;Shin, Hyeon-Cheol
    • Journal of Physiology & Pathology in Korean Medicine
    • /
    • v.21 no.6
    • /
    • pp.1619-1623
    • /
    • 2007
  • This study was designed to investigate obesity and the affected side (left or right) in patients with Bell's palsy. We recorded the sex, age, BMI, and pulse diagnosis of 149 patients diagnosed with Bell's palsy. The results were as follows. By sex, 52.35% of the patients were male (78 cases) and 47.65% female (71 cases). By age, patients in their 40s were the most common, with 50 cases (33.6%). By affected side, 73 cases were on the left and 76 on the right (1:1.04). In the distribution of affected side among facial palsy patients with obesity, the left accounted for 32.86% (49 cases) and the right for 34.23% (51 cases). Patients with obesity formed the largest group, with 100 cases (67.11%), while low-weight patients accounted for only 3 cases (2.01%). Among facial palsy patients with obesity, the pulse diagnosis was huh-mac (虛脈) in 63.64% (42 cases) and sil-mac (實脈) in 36.36% (24 cases). Since huh-mac (虛脈) is similar to gi-huh (氣虛), we found that facial palsy patients with obesity showed gi-huh (氣虛) more often than low-weight patients. Among facial palsy patients with both obesity and huh-mac (虛脈), the left side accounted for 41.38% (12 cases) and the right for 58.62% (17 cases).

Face Detection using Orientation(In-Plane Rotation) Invariant Facial Region Segmentation and Local Binary Patterns(LBP) (방향 회전에 불변한 얼굴 영역 분할과 LBP를 이용한 얼굴 검출)

  • Lee, Hee-Jae;Kim, Ha-Young;Lee, David;Lee, Sang-Goog
    • Journal of KIISE
    • /
    • v.44 no.7
    • /
    • pp.692-702
    • /
    • 2017
  • Face detection using LBP-based feature descriptors has the problem that such descriptors cannot represent the spatial relationships between the facial shape and facial components such as the eyes, nose, and mouth. To address this, previous research divided the facial image into a number of square sub-regions. However, because the sub-regions can be divided in different numbers and sizes, the division criteria suitable for the database used in an experiment are ambiguous; the dimension of the LBP histogram increases in proportion to the number of sub-regions; and as the number of sub-regions increases, sensitivity to rotation of the facial orientation increases significantly. In this paper, we present a novel facial region segmentation method that addresses both the in-plane rotation issue of LBP-based feature descriptors and the dimensionality of the feature descriptor. The proposed method achieved a detection accuracy of 99.0278% on single facial images rotated in plane.
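
The dimensionality issue mentioned above is easy to quantify: with uniform LBP (59 bins per sub-region), the descriptor length grows linearly with the number of sub-regions, as the small calculation below shows for some example grid sizes (the grids themselves are illustrative, not those used in the paper).

```python
# Descriptor length as a function of the sub-region grid, assuming 59 uniform
# LBP bins per sub-region. Grid sizes are examples only.
BINS = 59
for grid in [(3, 3), (5, 5), (7, 7), (9, 9)]:
    n_regions = grid[0] * grid[1]
    print(f"{grid[0]}x{grid[1]} grid -> {n_regions * BINS}-dimensional descriptor")
```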

Facial Feature Localization from 3D Face Image using Adjacent Depth Differences (인접 부위의 깊이 차를 이용한 3차원 얼굴 영상의 특징 추출)

  • 김익동;심재창
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.5
    • /
    • pp.617-624
    • /
    • 2004
  • This paper describes a new facial feature localization method that uses Adjacent Depth Differences (ADD) on the 3D facial surface. In general, humans perceive how deep or shallow a region is relative to its neighbors by comparing the depth information of neighboring regions of an object; the larger the depth difference between regions, the more easily each region is recognized. Using this principle, facial feature extraction becomes easier, more reliable, and faster. 3D range images are used as input, and ADD values are obtained by differencing two range values separated by a fixed coordinate distance, in both the horizontal and vertical directions. The ADD values and the input image are analyzed to extract facial features, and the nose region, the most prominent feature of the 3D facial surface, is localized effectively and accurately.
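
A minimal sketch of the ADD computation as described, assuming an arbitrary pixel offset and a toy range image; the feature-analysis stage that follows is omitted.

```python
# Sketch of Adjacent Depth Differences (ADD): each value is the difference
# between two range (depth) values separated by a fixed offset, computed along
# the horizontal and vertical directions. Offset and depth map are assumptions.
import numpy as np

def adjacent_depth_differences(depth, offset=3):
    add_h = np.abs(depth[:, offset:] - depth[:, :-offset])  # horizontal ADD
    add_v = np.abs(depth[offset:, :] - depth[:-offset, :])  # vertical ADD
    return add_h, add_v

# Toy range image with a "nose-like" bump: the slope of the bump yields the
# largest ADD values.
yy, xx = np.mgrid[0:64, 0:64]
depth = np.exp(-(((yy - 32) ** 2 + (xx - 32) ** 2) / 60.0)) * 20.0
add_h, add_v = adjacent_depth_differences(depth)
print("strongest horizontal ADD at:", np.unravel_index(add_h.argmax(), add_h.shape))
```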