• Title/Summary/Keyword: Face expression


Player of Song by Face Recognition (표정인식에 의한 노래 플레이어)

  • Nam, Soo-Tai;Shin, Seong-Yoon;Lee, Hyun-chang;Jin, Chan-Yong
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2018.10a / pp.184-185 / 2018
  • Face Song Player, a system that recognizes an individual's facial expression and plays music suited to that person, is presented. The system learns information on facial contour lines, extracts their average, and thereby acquires facial shape information; the MUCT DB was used as the training database. For expression recognition, an algorithm was designed around the differences in the characteristic features of each expression relative to expressionless (neutral) images. A minimal sketch of the idea follows below.

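Below is a minimal, hypothetical Python sketch of the idea: classify the expression from the offset of the current facial features against a neutral baseline, then play a matching song. The feature vectors, per-expression templates, and playlist are invented placeholders; the paper derives its features from contour statistics learned on MUCT.

```python
# Sketch only: templates and playlist are made-up stand-ins.
import numpy as np

# Hypothetical per-expression templates: mean feature offsets from a
# neutral face, learned offline (e.g., from MUCT-style landmark data).
TEMPLATES = {
    "happy": np.array([0.8, 0.1, -0.3]),
    "sad": np.array([-0.5, -0.2, 0.4]),
    "angry": np.array([-0.1, 0.9, 0.2]),
}
PLAYLIST = {"happy": "upbeat.mp3", "sad": "ballad.mp3", "angry": "calm.mp3"}

def classify_expression(features: np.ndarray, neutral: np.ndarray) -> str:
    """Pick the template closest to the offset from the neutral face."""
    offset = features - neutral
    return min(TEMPLATES, key=lambda k: np.linalg.norm(offset - TEMPLATES[k]))

def pick_song(features: np.ndarray, neutral: np.ndarray) -> str:
    return PLAYLIST[classify_expression(features, neutral)]

# Example: a made-up feature vector close to the "happy" template.
print(pick_song(np.array([1.0, 0.2, -0.2]), np.zeros(3)))  # upbeat.mp3
```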

A facial expressions recognition algorithm using image area segmentation and face element (영역 분할과 판단 요소를 이용한 표정 인식 알고리즘)

  • Lee, Gye-Jeong;Jeong, Ji-Yong;Hwang, Bo-Hyun;Choi, Myung-Ryul
    • Journal of Digital Convergence / v.12 no.12 / pp.243-248 / 2014
  • In this paper, we propose a method to recognize facial expressions by selecting face elements and determining their states. The face elements are selected using an image-area segmentation method, and the facial expression is decided using the normal distribution of the change rates of the face elements. To recognize the proper facial expression, we built a database of the facial expressions of 90 people and propose a method to decide among four expressions (happy, anger, stress, and sad). The proposed method has been simulated and verified in terms of face-element detection rate and facial expression recognition rate. A sketch of the decision rule follows below.
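Below is a minimal numpy sketch of the decision rule as described: score each face element's change rate under a per-expression normal distribution and pick the most likely of the four expressions. All distribution parameters here are invented; the paper estimates them from its 90-person database.

```python
import numpy as np

EXPRESSIONS = ("happy", "anger", "stress", "sad")

# PARAMS[expr] = (means, stds) over face-element change rates
# (e.g., eye openness, mouth width, brow height); hypothetical values.
PARAMS = {
    "happy":  (np.array([0.2, 0.8, 0.1]),   np.array([0.2, 0.3, 0.2])),
    "anger":  (np.array([-0.3, -0.1, -0.6]), np.array([0.2, 0.2, 0.3])),
    "stress": (np.array([-0.1, -0.4, 0.3]),  np.array([0.3, 0.2, 0.2])),
    "sad":    (np.array([-0.4, -0.5, -0.1]), np.array([0.2, 0.3, 0.2])),
}

def gaussian_loglik(x, mean, std):
    """Log-likelihood of x under independent normals per face element."""
    return -0.5 * np.sum(((x - mean) / std) ** 2 + np.log(2 * np.pi * std**2))

def classify(change_rates: np.ndarray) -> str:
    return max(EXPRESSIONS, key=lambda e: gaussian_loglik(change_rates, *PARAMS[e]))

print(classify(np.array([0.25, 0.7, 0.0])))  # happy
```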

3D Facial Landmark Tracking and Facial Expression Recognition

  • Medioni, Gerard;Choi, Jongmoo;Labeau, Matthieu;Leksut, Jatuporn Toy;Meng, Lingchao
    • Journal of information and communication convergence engineering / v.11 no.3 / pp.207-215 / 2013
  • In this paper, we address the challenging computer vision problem of obtaining reliable facial expression analysis from a naturally interacting person. We propose a system that combines a 3D generic face model, 3D head tracking, and a 2D tracker to track facial landmarks and recognize expressions. First, we extract facial landmarks from a neutral frontal face and deform a 3D generic face to fit the input face. Next, our real-time 3D head tracking module tracks the person's head in 3D and predicts the facial landmark positions in 2D using the projection from the updated 3D face model. Finally, we use the tracked 2D landmarks to update the 3D landmarks. This integrated tracking loop enables efficient tracking of the non-rigid parts of a face in the presence of large 3D head motion. We conducted facial expression recognition experiments using both frame-based and sequence-based approaches. Our method provides a 75.9% recognition rate on 8 subjects with 7 key expressions, a considerable step toward new applications in human-computer interaction, behavioral science, robotics, and games. A sketch of one pass of the tracking loop follows below.
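Below is a minimal numpy sketch of one pass of the tracking loop described above: project the 3D model landmarks into the image under the current head pose, then nudge the 3D landmarks toward the 2D tracker's observations. The camera intrinsics, pose, landmark values, and the crude small-motion back-projection are all illustrative assumptions, not the paper's formulation.

```python
import numpy as np

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # pinhole intrinsics

def project(points3d, R, t):
    """Project Nx3 model points to Nx2 pixels via x = K [R|t] X."""
    cam = points3d @ R.T + t          # to camera coordinates
    px = cam @ K.T                    # to homogeneous pixels
    return px[:, :2] / px[:, 2:3]

def update_landmarks(points3d, R, t, observed2d, step=0.1):
    """Move each 3D landmark in x/y so its projection tracks the 2D tracker."""
    err = observed2d - project(points3d, R, t)        # pixel residual
    depth = (points3d @ R.T + t)[:, 2:3]
    points3d = points3d.copy()
    # crude back-projection of the residual (small-motion assumption)
    points3d[:, :2] += step * err * depth / K[0, 0]
    return points3d

R, t = np.eye(3), np.array([0.0, 0.0, 1000.0])        # pose from the 3D head tracker
landmarks3d = np.array([[0.0, 0, 0], [30, 0, 0], [0, 40, 0]])
observed = project(landmarks3d, R, t) + np.array([2.0, -1.0])  # 2D tracker output
print(update_landmarks(landmarks3d, R, t, observed))
```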

Face Image Synthesis using Nonlinear Manifold Learning (비선형 매니폴드 학습을 이용한 얼굴 이미지 합성)

  • Cho, Eun-Ok;Kim, Dae-Jin;Bang, Sung-Yang
    • Journal of KIISE: Software and Applications / v.31 no.2 / pp.182-188 / 2004
  • This paper proposes synthesizing facial images from a few parameters for the pose and expression of their constituent components. This parameterization makes the representation, storage, and transmission of face images efficient, but it is difficult to parameterize facial images because their variations form a complicated nonlinear manifold in a high-dimensional data space. To tackle this problem, we use LLE (Locally Linear Embedding) to obtain a good representation of face images, in which the relationships among face images are well preserved and the manifold projected into the reduced feature space becomes smoother and more continuous. Next, we apply a snake model to estimate the feature value in the reduced space that corresponds to a specific pose and/or expression parameter. Finally, a synthetic face image is obtained by interpolating several neighboring face images in the vicinity of the estimated feature value. Experimental results show that the proposed method exhibits a negligible overlapping effect and creates accurate, consistent synthetic face images with respect to changes of the pose and/or expression parameters. A sketch of the embed-and-interpolate pipeline follows below.
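Below is a minimal scikit-learn sketch of the embed-and-interpolate pipeline: embed face images with LLE, then synthesize a new face by blending the images whose embeddings are nearest a target point. Random vectors stand in for real face images, and a fixed target point replaces the paper's snake-model estimate.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

rng = np.random.default_rng(0)
faces = rng.random((200, 32 * 32))           # stand-in for vectorized face images

lle = LocallyLinearEmbedding(n_neighbors=10, n_components=2)
embedded = lle.fit_transform(faces)          # 200 x 2 manifold coordinates

def synthesize(target: np.ndarray, k: int = 5) -> np.ndarray:
    """Inverse-distance-weighted blend of the k faces nearest to `target`."""
    d = np.linalg.norm(embedded - target, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-8)
    return (w[:, None] * faces[idx]).sum(axis=0) / w.sum()

new_face = synthesize(embedded.mean(axis=0))  # target point chosen for illustration
print(new_face.shape)                         # (1024,)
```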

Emotion Recognition of Facial Expression using the Hybrid Feature Extraction (혼합형 특징점 추출을 이용한 얼굴 표정의 감성 인식)

  • Byun, Kwang-Sub;Park, Chang-Hyun;Sim, Kwee-Bo
    • Proceedings of the KIEE Conference / 2004.05a / pp.132-134 / 2004
  • Emotion recognition between humans draws compositely on various features: the face, voice, gestures, and so on. Among these, the face reveals emotional expression most distinctly, and humans express and recognize emotion using its complex and varied features. This paper proposes a hybrid feature extraction method for emotion recognition from facial expressions. The hybrid extraction imitates the human emotion recognition system by combining geometry-based feature extraction with a color-distribution histogram; by extracting many features of the facial expression, it can perform emotion recognition robustly. A sketch of such a hybrid feature vector follows below.

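Below is a minimal numpy sketch of such a hybrid feature vector: a few geometry-based measurements concatenated with a color-distribution histogram. The specific measurements, landmark layout, and histogram settings are invented for illustration, not the paper's exact choices.

```python
import numpy as np

def geometric_features(landmarks: np.ndarray) -> np.ndarray:
    """Simple shape cues: mouth width, mouth openness, brow-to-eye distance."""
    mouth_w = np.linalg.norm(landmarks[0] - landmarks[1])
    mouth_h = np.linalg.norm(landmarks[2] - landmarks[3])
    brow_eye = np.linalg.norm(landmarks[4] - landmarks[5])
    return np.array([mouth_w, mouth_h, brow_eye])

def color_histogram(region: np.ndarray, bins: int = 16) -> np.ndarray:
    """Normalized intensity histogram of a face region (H x W, grayscale)."""
    hist, _ = np.histogram(region, bins=bins, range=(0, 255))
    return hist / max(hist.sum(), 1)

def hybrid_feature(landmarks: np.ndarray, region: np.ndarray) -> np.ndarray:
    return np.concatenate([geometric_features(landmarks), color_histogram(region)])

rng = np.random.default_rng(1)
lm = rng.random((6, 2)) * 100                 # six made-up landmark points
face = rng.integers(0, 256, (64, 64))         # made-up grayscale face region
print(hybrid_feature(lm, face).shape)         # (19,)
```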

Facial Expression Recognition using Face Alignment and AdaBoost (얼굴정렬과 AdaBoost를 이용한 얼굴 표정 인식)

  • Jeong, Kyungjoong;Choi, Jaesik;Jang, Gil-Jin
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.11 / pp.193-201 / 2014
  • This paper presents a facial expression recognition system built from face detection, face alignment, facial-unit extraction, and training and testing algorithms based on AdaBoost classifiers. First, the face region is found by a face detector, and a face alignment algorithm extracts feature points from the result. The facial units are a subset of action units generated by combining the obtained feature points. Facial units are generally more effective for smaller databases, represent facial expressions more efficiently, and reduce computation time, so they can be applied in real-time scenarios. Experimental results in real scenarios showed that the proposed system performs excellently, with recognition rates over 90%. A sketch of the AdaBoost classification stage follows below.
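Below is a minimal scikit-learn sketch of the classification stage: an AdaBoost classifier trained on facial-unit feature vectors. Random data stands in for features extracted from aligned faces; the class count and feature dimension are illustrative.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.random((300, 20))        # stand-in facial-unit feature vectors
y = rng.integers(0, 6, 300)      # six expression classes (illustrative)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"accuracy: {clf.score(X_te, y_te):.2f}")  # near chance on random data
```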

Automatic 3D Facial Movement Detection from Mirror-reflected Multi-Image for Facial Expression Modeling (거울 투영 이미지를 이용한 3D 얼굴 표정 변화 자동 검출 및 모델링)

  • Kyung, Kyu-Min;Park, Mignon;Hyun, Chang-Ho
    • Proceedings of the KIEE Conference / 2005.05a / pp.113-115 / 2005
  • This paper presents a method for 3D modeling of facial expressions from a frontal image and mirror-reflected multi-images. Since the proposed system uses only one camera, two mirrors, and simple mirror properties, it is robust, accurate, and inexpensive, and it avoids the problem of synchronizing data among different cameras. Mirrors located near the cheeks reflect the side views of markers on the face. To optimize the system, facial feature points closely associated with human emotions must be selected; we therefore refer to the FDP (Facial Definition Parameters) and FAP (Facial Animation Parameters) defined by MPEG-4 SNHC (Synthetic/Natural Hybrid Coding). Colorful dot markers are placed on the selected feature points to detect facial deformation as the subject makes various expressions. Before computing the 3D coordinates of the extracted feature points, the points are grouped by facial part, which makes the matching process automatic. We experimented on about twenty Korean subjects in their late twenties and early thirties. Finally, we verify the performance of the proposed method by simulating an animation of 3D facial expressions. A sketch of the mirror-as-virtual-camera triangulation follows below.

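Below is a minimal numpy sketch of the geometric idea: a planar mirror acts as a virtual camera (the real camera reflected across the mirror plane), so a marker seen both directly and in the mirror can be triangulated. The camera matrix, mirror plane, and linear (DLT) triangulation are illustrative stand-ins for the paper's calibration.

```python
import numpy as np

def reflect_across_plane(n: np.ndarray, d: float) -> np.ndarray:
    """4x4 reflection across the plane n.x + d = 0 (n unit-length)."""
    H = np.eye(4)
    H[:3, :3] = np.eye(3) - 2 * np.outer(n, n)
    H[:3, 3] = -2 * d * n
    return H

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two projection matrices."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P_real = K @ np.hstack([np.eye(3), np.zeros((3, 1))])            # frontal camera
mirror = reflect_across_plane(np.array([1.0, 0.0, 0.0]), -200.0)  # mirror at x = 200
P_virtual = P_real @ mirror                                       # reflected camera

X_true = np.array([50.0, 20.0, 1000.0, 1.0])                      # a cheek marker
x1 = P_real @ X_true;    x1 = x1[:2] / x1[2]                      # direct view
x2 = P_virtual @ X_true; x2 = x2[:2] / x2[2]                      # mirror view
print(triangulate(P_real, P_virtual, x1, x2))                     # ~ [50, 20, 1000]
```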

Emotional Interface Technologies for Service Robot (서비스 로봇을 위한 감성인터페이스 기술)

  • Yang, Hyun-Seung;Seo, Yong-Ho;Jeong, Il-Woong;Han, Tae-Woo;Rho, Dong-Hyun
    • The Journal of Korea Robotics Society / v.1 no.1 / pp.58-65 / 2006
  • An emotional interface is essential for a robot to provide proper service to its user. In this research, we developed emotional components for a service robot: a neural-network-based facial expression recognizer, emotion expression technologies based on 3D graphical facial expressions and joint movements, and behavior selection for emotion expression that considers the user's reaction. We used our humanoid robots AMI and AMIET as test-beds for the emotional interface and studied the emotional interaction between a service robot and a user by integrating the developed technologies. Emotional interface technology enhances the friendliness of interaction with a service robot, increases the diversity of its services and its value-added for humans, and can accelerate market growth and contribute to the popularization of robots. A small illustrative sketch of reaction-aware behavior selection follows below.

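As a purely illustrative sketch of the reaction-aware behavior selection mentioned above (the emotion set, behavior table, and selection rule are all hypothetical, not the paper's system):

```python
# Hypothetical emotion-to-behavior table for an expressive service robot.
RESPONSES = {
    "happy": ["smile", "nod"],
    "sad": ["comforting gesture", "soft speech"],
    "angry": ["calm posture", "apologetic speech"],
}

def select_behavior(user_emotion: str, last_reaction: str = "neutral") -> str:
    options = RESPONSES.get(user_emotion, ["idle"])
    # If the previous behavior drew a negative reaction, try the alternative.
    return options[-1] if last_reaction == "negative" else options[0]

print(select_behavior("sad"))                             # comforting gesture
print(select_behavior("sad", last_reaction="negative"))   # soft speech
```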

Facial Expression Recognition through Self-supervised Learning for Predicting Face Image Sequence

  • Yoon, Yeo-Chan;Kim, Soo Kyun
    • Journal of the Korea Society of Computer and Information / v.27 no.9 / pp.41-47 / 2022
  • In this paper, we propose a new and simple self-supervised learning method that predicts the middle image of a face image sequence for automatic expression recognition. Automatic facial expression recognition can achieve high performance with deep learning methods but generally requires an expensive, large data set; the size of the data set and the performance of the algorithm tend to be proportional. The proposed method learns a latent deep representation of the face through self-supervised learning on an existing dataset, without constructing an additional dataset, and then transfers the learned parameters to a new facial expression recognition model to improve its performance. The proposed method showed large performance improvements on two datasets, CK+ and AFEW 8.0. A sketch of the pretext task follows below.
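Below is a minimal PyTorch sketch of the pretext task as described: encode the outer frames of a short face sequence, predict the middle frame, and afterwards reuse the encoder for expression classification. The network sizes and layers are invented placeholders, not the paper's architecture.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Tiny convolutional encoder for 64x64 grayscale face frames."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(), nn.Linear(32 * 16 * 16, dim),
        )
    def forward(self, x):
        return self.net(x)

class MiddleFramePredictor(nn.Module):
    """Encode first and last frames, decode the middle frame."""
    def __init__(self, dim=128):
        super().__init__()
        self.encoder = Encoder(dim)
        self.decoder = nn.Sequential(nn.Linear(2 * dim, 64 * 64), nn.Sigmoid())
    def forward(self, first, last):
        z = torch.cat([self.encoder(first), self.encoder(last)], dim=1)
        return self.decoder(z).view(-1, 1, 64, 64)

model = MiddleFramePredictor()
first, middle, last = (torch.rand(8, 1, 64, 64) for _ in range(3))
loss = nn.functional.mse_loss(model(first, last), middle)  # pretext objective
loss.backward()

# Transfer: keep the pretrained encoder, attach a small classification head.
classifier = nn.Sequential(model.encoder, nn.Linear(128, 7))  # 7 expressions
```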

CHROMATIC SUMS OF SINGULAR MAPS ON SOME SURFACES

  • Li, Zhao-Xiang;Liu, Yan-Pei
    • Journal of applied mathematics & informatics / v.15 no.1_2 / pp.159-172 / 2004
  • A map is singular if each edge lies on the same face on a surface (i.e., the map has only one face on the surface). Because any map with a loop is not colorable, all maps here are assumed to be loopless. This paper provides the explicit expressions of the chromatic sum functions for rooted singular maps on the projective plane, the torus, and the Klein bottle. From these explicit expressions, the explicit expressions of the enumerating functions of such maps are also derived.
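For readers unfamiliar with the term, a chromatic sum function in this literature (following Tutte's chromatic-sum program) is, in one common formulation, the chromatic polynomial summed over a family of rooted maps with a variable tracking map size; the exact variables used in the paper may differ.

```latex
% One common formulation; the paper's exact variables may differ.
% P(M,\lambda): chromatic polynomial of the map M;  m(M): number of edges of M.
f_{\mathcal{M}}(x,\lambda) \;=\; \sum_{M \in \mathcal{M}} P(M,\lambda)\, x^{m(M)}
```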