• Title/Summary/Keyword: Face expression


Performance Analysis of Viola & Jones Face Detection Algorithm (Viola & Jones 얼굴 검출 알고리즘의 성능 분석)

  • Oh, Jeong-su;Heo, Hoon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2018.05a
    • /
    • pp.477-480
    • /
    • 2018
  • The Viola-Jones object detection algorithm is a representative face detection algorithm. It uses Haar-like features to represent the face and a cascade-AdaBoost classifier for classification, in which each strong classifier is a linear combination of weak classifiers. The algorithm requires several parameters to be set for its implementation, and the chosen values affect its performance. This paper analyzes face detection performance according to the parameter settings of the algorithm.

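For orientation, the sketch below shows the kind of tunable parameters the abstract refers to, using OpenCV's implementation of the Viola-Jones cascade; the cascade file and parameter values are illustrative assumptions, not the configuration analyzed in the paper.

```python
# Minimal sketch: Viola-Jones face detection with OpenCV's cascade classifier.
# The parameter values below are illustrative defaults, not the settings
# analyzed in the paper above.
import cv2

# Load OpenCV's pretrained frontal-face Haar cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("input.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# The main tunable parameters that affect detection rate and speed:
#   scaleFactor  - step between successive image-pyramid scales
#   minNeighbors - overlapping detections required to keep a candidate face
#   minSize      - smallest face window considered
faces = cascade.detectMultiScale(
    gray, scaleFactor=1.1, minNeighbors=5, minSize=(30, 30))

for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detected.jpg", img)
```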

A study of face detection using color component (색상요소를 고려한 얼굴검출에 대한 연구)

  • 이정하;강진석;최연성;김장형
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2002.11a
    • /
    • pp.240-243
    • /
    • 2002
  • In this paper, we propose a face-region detection algorithm based on skin-color distribution and facial feature extraction in color still images. To extract the face region, we transform the color space using a general skin-color distribution, and facial features are extracted by edge transformation. The detection process reduces calculation time by scale-down scanning of the segmented region. The method can detect face regions across varied facial expressions, skin-color differences, and tilted faces.

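As a rough illustration of the skin-color segmentation step described above, the sketch below extracts face-region candidates in the YCrCb color space; the thresholds and post-processing are generic assumptions, not the transform used in the paper.

```python
# Rough sketch of skin-color-based face-region candidates in YCrCb space.
# The Cr/Cb bounds are common generic values, not the distribution used in
# the paper above.
import cv2
import numpy as np

img = cv2.imread("input.jpg")
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)

# Generic skin-color bounds on (Y, Cr, Cb).
lower = np.array([0, 133, 77], dtype=np.uint8)
upper = np.array([255, 173, 127], dtype=np.uint8)
mask = cv2.inRange(ycrcb, lower, upper)

# Clean up the mask and keep reasonably large connected regions as candidates.
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
candidates = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 500]
print(candidates)
```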

Dynamic Facial Image-Graphic Representation of Multi-Dimensional Data (다차원 데이터의 동적 얼굴 이미지그래픽 표현)

  • 최철재;최진식;조규천;차홍준
    • Journal of the Korea Computer Industry Society
    • /
    • v.2 no.10
    • /
    • pp.1291-1300
    • /
    • 2001
  • This article studies a visual representation technique, grounded in dynamic graphics that can change in real time, in which multi-dimensional data are manipulated as graphic elements of a facial image. The key idea of the realization is as follows: feature points of the human face and the parameter control values obtained with an existing image recognition algorithm are mapped to the multi-dimensional data, and the image is synthesized so that a virtual face image is created whose emotional expression changes with the contraction of the corresponding features. The proposed DyFIG system is implemented as a complete module, and through manipulation and experiments we present a human-face graphics module capable of expressing emotion, realizing a description technique and technology for emotional data expression.


Image-based Realistic Facial Expression Animation

  • Yang, Hyun-S.;Han, Tae-Woo;Lee, Ju-Ho
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 1999.06a
    • /
    • pp.133-140
    • /
    • 1999
  • In this paper, we propose a method of image-based three-dimensional modeling for realistic facial expression. In the proposed method, real human facial images are used to deform a generic three-dimensional mesh model, and the deformed model is animated to generate facial expression animation. First, we take several pictures of the same person from several view angles. Then we project a three-dimensional face model onto the plane of each facial image and match the projected model with each image. The results are combined to generate a deformed three-dimensional model. We use feature-based image metamorphosis to match the projected models with the images. We then create a synthetic image from the two-dimensional images of a specific person's face. This synthetic image is texture-mapped to the cylindrical projection of the three-dimensional model. We also propose a muscle-based animation technique to generate realistic facial expression animations. This method facilitates the control of the animation. Lastly, we show the animation results for the six representative facial expressions.
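
The abstract does not specify the muscle model; the sketch below illustrates a generic Waters-style linear muscle acting on mesh vertices, with the mesh, muscle endpoints, and falloff parameters all assumed for illustration.

```python
# Generic sketch of a linear (Waters-style) muscle deformation on a face mesh.
# The mesh, muscle endpoints, and falloff parameters are illustrative
# assumptions, not the model used in the paper above.
import numpy as np

def apply_linear_muscle(vertices, head, tail, contraction, radius):
    """Pull vertices near the muscle's insertion point toward its attachment.

    vertices    : (N, 3) array of mesh vertex positions
    head, tail  : 3-vectors; the muscle attaches at `head` (fixed end) and
                  inserts at `tail` (in the skin)
    contraction : 0..1, how strongly the muscle contracts
    radius      : zone of influence around the insertion point
    """
    deformed = vertices.copy()
    direction = head - tail
    for i, v in enumerate(vertices):
        dist = np.linalg.norm(v - tail)
        if dist < radius:
            # Cosine falloff: full pull at the insertion, zero at the boundary.
            weight = 0.5 * (1.0 + np.cos(np.pi * dist / radius))
            deformed[i] = v + contraction * weight * direction
    return deformed

# Toy usage on a flat grid of "skin" vertices.
xs, ys = np.meshgrid(np.linspace(-1, 1, 10), np.linspace(-1, 1, 10))
mesh = np.stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)], axis=1)
pulled = apply_linear_muscle(mesh, head=np.array([0.8, 0.8, 0.0]),
                             tail=np.array([0.0, 0.0, 0.0]),
                             contraction=0.3, radius=0.6)
```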

A Study of Improving LDP Code Using Edge Directional Information (에지 방향 정보를 이용한 LDP 코드 개선에 관한 연구)

  • Lee, Tae Hwan;Cho, Young Tak;Ahn, Yong Hak;Chae, Ok Sam
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.52 no.7
    • /
    • pp.86-92
    • /
    • 2015
  • This study proposes a new LDP code that improves the facial expression recognition rate by incorporating the local directional number (LDN), edge magnitudes, and differences in neighboring edge intensities. LDP is less sensitive to intensity changes and more robust to noise than LBP, but it has difficulty representing smooth areas with little intensity change, and when the background has a pattern similar to a face, the facial expression recognition rate of LDP drops. We therefore extend the LDP code with the local directional number and edge strength and measure the facial expression recognition rate of the modified code.
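
For context, the sketch below computes the baseline LDP code from the eight Kirsch edge responses (keeping the k = 3 strongest directions); the improved code proposed in the paper, which adds the directional number and edge strength, is not reproduced here.

```python
# Minimal sketch of the baseline LDP code from eight Kirsch edge responses.
# This is plain LDP with k = 3, not the improved code proposed in the paper.
import cv2
import numpy as np

# The eight Kirsch compass masks (East, North-East, ..., South-East).
KIRSCH = [
    np.array([[-3, -3, 5], [-3, 0, 5], [-3, -3, 5]]),
    np.array([[-3, 5, 5], [-3, 0, 5], [-3, -3, -3]]),
    np.array([[5, 5, 5], [-3, 0, -3], [-3, -3, -3]]),
    np.array([[5, 5, -3], [5, 0, -3], [-3, -3, -3]]),
    np.array([[5, -3, -3], [5, 0, -3], [5, -3, -3]]),
    np.array([[-3, -3, -3], [5, 0, -3], [5, 5, -3]]),
    np.array([[-3, -3, -3], [-3, 0, -3], [5, 5, 5]]),
    np.array([[-3, -3, -3], [-3, 0, 5], [-3, 5, 5]]),
]

def ldp_code(gray, k=3):
    img32 = gray.astype(np.float32)
    # Edge response magnitude in each of the eight directions.
    responses = np.stack(
        [np.abs(cv2.filter2D(img32, -1, m.astype(np.float32))) for m in KIRSCH])
    # Set a bit for each of the k strongest directions at every pixel.
    ranks = np.argsort(-responses, axis=0)          # strongest direction first
    code = np.zeros(gray.shape, dtype=np.uint8)
    for r in range(k):
        code |= (1 << ranks[r]).astype(np.uint8)
    return code

gray = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)
print(ldp_code(gray))
```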

Conveying Emotions Through CMC: A Comparative Study of Memoji, Emoji, and Human Face

  • Eojin Kim;Yunsun Alice Hong;Kwanghee Han
    • Science of Emotion and Sensibility
    • /
    • v.26 no.4
    • /
    • pp.93-102
    • /
    • 2023
  • Emojis and avatars are widely used in online communication, but how well they convey emotion has received little research attention. This study aims to contribute to the field of emotional expression in computer-mediated communication (CMC) by exploring the effectiveness of emotion recognition, the intensity of perceived emotions, and the perceived preferences for emojis and avatars as emotional expression tools. The following were used as stimuli: 12 photographs from the Yonsei-Face database, 12 Memojis that reflected the photographs, and 6 iOS emojis. The results indicate that emojis outperformed the other forms of emotional expression in terms of conveying emotions, intensity, and preference. Indeed, the findings confirm that emojis remain the dominant form of emotional signal in CMC. In contrast, the study revealed that Memojis were inadequate as an expressive emotional cue: participants did not perceive Memojis to convey emotions as effectively as other forms of expression, such as emojis or real human faces. This suggests room for improvement in the design and implementation of Memojis to enhance their effectiveness in accurately conveying intended emotions. Addressing the limitations of Memojis and optimizing their emotional expressiveness will require further research and development in avatar design.

Functions and Driving Mechanisms for Face Robot Buddy (얼굴로봇 Buddy의 기능 및 구동 메커니즘)

  • Oh, Kyung-Geune;Jang, Myong-Soo;Kim, Seung-Jong;Park, Shin-Suk
    • The Journal of Korea Robotics Society
    • /
    • v.3 no.4
    • /
    • pp.270-277
    • /
    • 2008
  • The development of a face robot basically targets very natural human-robot interaction (HRI), especially emotional interaction, and so does the face robot introduced in this paper, named Buddy. Since Buddy was developed for a mobile service robot, it does not have the lifelike face of a human or animal but a typically robot-like face with hard skin, which may be suitable for mass production. Moreover, its structure and mechanism must be simple and its production cost low enough. This paper introduces the mechanisms and functions of the mobile face robot Buddy, which can take on natural and precise facial expressions and make dynamic gestures driven by one laptop PC. Buddy can also perform lip-sync, eye contact, and face tracking for lifelike interaction. By adopting a customized emotional reaction decision model, Buddy can create its own personality, emotion, and motives from various sensor inputs. Based on this model, Buddy can interact properly with users and perform real-time learning using personality factors. The interaction performance of Buddy is successfully demonstrated through experiments and simulations.


2D Face Image Recognition and Authentication Based on Data Fusion (데이터 퓨전을 이용한 얼굴영상 인식 및 인증에 관한 연구)

  • 박성원;권지웅;최진영
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.11 no.4
    • /
    • pp.302-306
    • /
    • 2001
  • Because face images exhibit many variations (expression, illumination, face orientation, etc.), no widely used method achieves a high recognition rate. To address this difficulty, data fusion, which combines various sources of information, has been studied, but previous data fusion research fused additional biometric information (fingerprint, voice, etc.) with the face image. In this paper, results from several face image recognition modules are fused cooperatively without using additional biometric information. To fuse the results of the individual modules, we use a re-defined mass function based on Dempster-Shafer fusion theory. Experimental results from fusing several face recognition modules show that the proposed fusion model performs better than a single face recognition module, without using additional biometric information.

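As an illustration of the fusion step described above, the sketch below applies Dempster's rule of combination to the mass functions of two hypothetical recognition modules; the subject labels and mass values are invented, and the paper's re-defined mass function is not reproduced.

```python
# Minimal sketch of Dempster's rule of combination for two face-recognition
# modules. The subject labels and mass values are made-up illustrations; the
# paper defines its own mass functions.
from itertools import product

def combine(m1, m2):
    """Combine two mass functions given as {frozenset_of_hypotheses: mass}."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    # Normalize by the non-conflicting mass.
    return {h: m / (1.0 - conflict) for h, m in combined.items()}

THETA = frozenset({"subject_A", "subject_B"})      # frame of discernment
module1 = {frozenset({"subject_A"}): 0.7, THETA: 0.3}
module2 = {frozenset({"subject_A"}): 0.6,
           frozenset({"subject_B"}): 0.1, THETA: 0.3}
print(combine(module1, module2))
```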

Efficiency Improvement on Face Recognition using Gabor Tensor (가버 텐서를 이용한 얼굴인식 성능 개선)

  • Park, Kyung-Jun;Ko, Hyung-Hwa
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.35 no.9C
    • /
    • pp.748-755
    • /
    • 2010
  • In this paper we propose an improved face recognition method using a Gabor tensor. The Gabor transform is known to represent characteristic facial features while reducing environmental influence, which can contribute to a higher face recognition rate. We combine the three-dimensional tensor obtained from the Gabor transform with MPCA (Multilinear PCA) and LDA. MPCA on a tensor of multiple features is more effective than traditional one- or two-dimensional PCA and is known to be robust to changes in facial expression or lighting. The proposed method is simulated in MATLAB using the ORL and Yale face databases. Test results show that the recognition rate improves by a maximum of 9-27% compared with existing face recognition methods.
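
A rough sketch of forming a Gabor response tensor from a face image follows; the filter-bank parameters are common illustrative choices, and the MPCA/LDA stages of the paper are omitted.

```python
# Rough sketch of stacking Gabor filter responses into a tensor for one face
# image. Kernel parameters are illustrative; the MPCA/LDA steps that follow
# in the paper are not shown.
import cv2
import numpy as np

def gabor_responses(gray, scales=(4, 8, 16), n_orient=8, ksize=31):
    img32 = gray.astype(np.float32)
    responses = []
    for lam in scales:                       # wavelength controls the scale
        row = []
        for k in range(n_orient):
            theta = k * np.pi / n_orient     # orientation of the filter
            kern = cv2.getGaborKernel((ksize, ksize), sigma=0.56 * lam,
                                      theta=theta, lambd=lam,
                                      gamma=0.5, psi=0)
            row.append(cv2.filter2D(img32, -1, kern))
        responses.append(row)
    # Array of shape (scales, orientations, height, width); the paper
    # arranges such responses as a third-order tensor before MPCA and LDA.
    return np.array(responses)

gray = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)
print(gabor_responses(gray).shape)
```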

A study of hybrid neural network to improve performance of face recognition (얼굴 인식의 성능 향상을 위한 혼합형 신경회로망 연구)

  • Chung, Sung-Boo;Kim, Joo-Woong
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.14 no.12
    • /
    • pp.2622-2627
    • /
    • 2010
  • The accuracy of face recognition used in unmanned security systems is very important. However, face recognition is heavily restricted by distortions of the face image, illumination, face size, facial expression, and rotation of the image. We propose a hybrid neural network to improve face recognition performance. The proposed method consists of a SOM and an LVQ network. To verify its usefulness, we compare it with the eigenface method, the hidden Markov model method, and a multi-layer neural network.
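
As a sketch of the LVQ half of such a SOM-LVQ hybrid, the code below implements the standard LVQ1 prototype update; the data, labels, and prototype initialization are toy assumptions, not the paper's configuration.

```python
# Minimal sketch of the LVQ1 prototype update applied after a SOM has produced
# initial prototypes. Data, labels, and prototype initialization are toy
# assumptions, not the paper's configuration.
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.05, epochs=20):
    """Move the winning prototype toward (same label) or away from
    (different label) each training sample."""
    P = prototypes.copy()
    for _ in range(epochs):
        for x, label in zip(X, y):
            winner = np.argmin(np.linalg.norm(P - x, axis=1))
            if proto_labels[winner] == label:
                P[winner] += lr * (x - P[winner])
            else:
                P[winner] -= lr * (x - P[winner])
    return P

# Toy usage with random "face feature" vectors for two subjects.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 16)), rng.normal(3, 1, (20, 16))])
y = np.array([0] * 20 + [1] * 20)
prototypes = np.vstack([X[:3].mean(0), X[20:23].mean(0)])   # e.g. from a SOM
proto_labels = np.array([0, 1])
trained = lvq1_train(X, y, prototypes, proto_labels)
```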