• Title/Summary/Keyword: Facial Avatar

59 search results

Facial Characteristic Point Extraction for Representation of Facial Expression (얼굴 표정 표현을 위한 얼굴 특징점 추출)

  • Oh, Jeong-Su;Kim, Jin-Tae
    • Journal of the Korea Institute of Information and Communication Engineering / v.9 no.1 / pp.117-122 / 2005
  • This paper proposes an algorithm for Facial Characteristic Point (FCP) extraction. The FCP plays an important role in representing expressions for face animation, avatar mimicry, and facial expression recognition. Conventional algorithms extract the FCP with an expensive motion-capture device or with markers, which inconvenience the subject or impose a psychological burden. The proposed algorithm avoids these problems by using image processing alone. For efficient FCP extraction, we analyze and improve the conventional algorithms for detecting the facial components on which FCP extraction is based.

3-D Facial Animation on the PDA via Automatic Facial Expression Recognition (얼굴 표정의 자동 인식을 통한 PDA 상에서의 3차원 얼굴 애니메이션)

  • Lee Don-Soo;Choi Soo-Mi;Kim Hae-Hwang;Kim Yong-Guk
    • The KIPS Transactions:PartB / v.12B no.7 s.103 / pp.795-802 / 2005
  • In this paper, we present a facial expression recognition-synthesis system that automatically recognizes seven basic emotions and renders a face in a non-photorealistic style on a PDA. To recognize facial expressions, we first detect the face area within the image acquired from the camera. A normalization procedure is then applied to it for geometric and illumination correction. To classify a facial expression, we found that combining Gabor wavelets with the enhanced Fisher model gives the best result; in our case, the output is a weighting over the seven emotions. This weighting, transmitted to the PDA via a mobile network, drives the non-photorealistic facial expression animation. To render a 3-D avatar with a unique facial character, we adopted a cartoon-like shading method. We found that facial expression animation using emotional curves expresses the timing of an expression more effectively than linear interpolation.
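The timing contrast the abstract reports, animating with an emotional curve rather than linear interpolation, can be illustrated with a minimal sketch. The function names and the smoothstep curve are my assumptions, not the paper's code; the paper's actual emotional curves are not specified in the abstract.

```python
def linear(t):
    """Linear interpolation weight: constant speed from onset to apex."""
    return t

def emotion_curve(t):
    """Smoothstep ease-in/ease-out: slow onset and release, fast middle,
    a plausible stand-in for the paper's emotional timing curves."""
    return 3 * t ** 2 - 2 * t ** 3

def animate(neutral, target, t, curve=emotion_curve):
    """Blend neutral and target expression parameter vectors at normalized time t."""
    w = curve(t)
    return [(1 - w) * n + w * x for n, x in zip(neutral, target)]
```

At t = 0.25 the smoothstep weight is below the linear one, so the expression builds up more gradually before accelerating, which is the kind of timing difference the abstract attributes to emotional curves.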

Development of 'Children's Food Avatar' Application for Dietary Education (식생활교육용 '어린이 푸드 아바타' 애플리케이션 개발)

  • Cho, Joo-Han;Kim, Sook-Bae;Kim, Soon-Kyung;Kim, Mi-Hyun;Kim, Gap-Soo;Kim, Se-Na;Kim, So-Young;Kim, Jeong-Weon
    • Korean Journal of Community Nutrition / v.18 no.4 / pp.299-311 / 2013
  • An educational application (App) called 'Children's Food Avatar' was developed in this study as a smart-learning tool for elementary school students, using a food database (DB) of nutrition and functionality from the Rural Development Administration (RDA). The App was designed to develop desirable dietary habits in children through an online activity of choosing foods for a meal from the RDA food DB, presented as the Green Water Mill guide. A customized avatar system was introduced as a fun, interactive animation element for children: it provides a nutritional evaluation of the selected foods by changing its appearance, facial expression, and speech balloon, thereby giving children a chance to correct their food choices toward a balanced diet. In addition, a nutrition information menu was included in the App to help children understand various nutrients, their functions, and healthy dietary life. When the App was used by 54 elementary school students for a week in November 2012, significant increases in the levels of dietary knowledge, attitude, and behavior were observed compared with those of the control group (p < 0.05, 0.01). Both elementary students and teachers showed high levels of satisfaction, ranging from 4.30 to 4.89, so the App could be widely used as a smart-learning tool for the dietary education of elementary school students.

A Comic Facial Expression Method for Intelligent Avatar Communications in the Internet Cyberspace (인터넷 가상공간에서 지적 아바타 통신을 위한 코믹한 얼굴 표정의 생성법)

  • 이용후;김상운;청목유직
    • Journal of the Institute of Electronics Engineers of Korea CI / v.40 no.1 / pp.59-73 / 2003
  • As a means of overcoming the linguistic barrier between different languages on the Internet, a sign-language communication system using CG animation techniques has been developed and proposed. In that system, the joint angles of the arms and hands corresponding to gestures serve as a non-verbal communication tool. Emotional expression, however, can also play an important role in communication. In particular, a comic expression is more efficient than a realistic facial expression, and the movements of the cheeks and jaw are more important Action Units (AUs) than those of the eyebrows, eyes, mouth, etc. Therefore, in this paper, we design a 3D emotion editor using a 2D model and extract the AUs (here called Principal AUs, PAUs) that play the principal role in expressing emotions. We also propose a method of generating universal emotional expressions with avatar models that have different vertex structures, dynamically adjusting the AU movements according to emotional intensity. The proposed system is implemented with Visual C++ and Open Inventor on Windows platforms. Experimental results show that the system could serve as a non-verbal communication means for overcoming the linguistic barrier.

Dynamic Facial Expression of Fuzzy Modeling Using Probability of Emotion (감정확률을 이용한 동적 얼굴표정의 퍼지 모델링)

  • Kang, Hyo-Seok;Baek, Jae-Ho;Kim, Eun-Tai;Park, Mignon
    • Journal of the Korean Institute of Intelligent Systems / v.19 no.1 / pp.1-5 / 2009
  • This paper applies a 2D emotion recognition database, built with a mirror-reflected multi-image method, to a 3D application, and models facial expressions with fuzzy logic using emotion probabilities. The proposed facial expression function applies fuzzy theory to three basic movements for facial expressions. Feature vectors for emotion recognition obtained from the 2D mirror-reflected multi-image are carried over to the 3D application, yielding a fuzzy, nonlinear facial expression model of a 2D model for a real model. We use average probability values for six basic expressions: happiness, sadness, disgust, anger, surprise, and fear. Dynamic facial expressions are then generated via fuzzy modeling. The paper compares and analyzes the feature vectors of the real model with those of a 3D human-like avatar.
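The core idea of mapping an emotion's probability to an expression through fuzzy rules can be sketched as follows. This is a rough illustration under my own assumptions (triangular membership functions and illustrative rule outputs), not the authors' implementation.

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def expression_intensity(p):
    """Map an emotion probability p in [0, 1] to an expression intensity.
    The rule consequents (0.1 weak, 0.5 medium, 0.9 strong) are illustrative."""
    rules = [(tri(p, -0.5, 0.0, 0.5), 0.1),   # low probability  -> weak expression
             (tri(p,  0.0, 0.5, 1.0), 0.5),   # medium           -> medium
             (tri(p,  0.5, 1.0, 1.5), 0.9)]   # high             -> strong
    num = sum(mu * out for mu, out in rules)  # weighted-average defuzzification
    den = sum(mu for mu, _ in rules)
    return num / den if den else 0.0
```

Because the memberships overlap, the output varies smoothly with the emotion probability, which is what gives the fuzzy model its nonlinear, non-discrete expression behavior.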

Facial Animation Generation by Korean Text Input (한글 문자 입력에 따른 얼굴 에니메이션)

  • Kim, Tae-Eun;Park, You-Shin
    • The Journal of the Korea institute of electronic communication sciences / v.4 no.2 / pp.116-122 / 2009
  • In this paper, we propose a new method that generates mouth-shape trajectories for the characters a user inputs. It is based on basic syllables and is well suited to mouth-shape generation. We examine the formation principles of Korean characters (Hangul), find similarities in the resulting mouth shapes, and select basic syllables accordingly. We also consider the articulation of each phoneme, create a new mouth-shape trajectory, and apply it to the face of a 3D avatar.
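The syllable-based approach relies on the fact that a precomposed Hangul syllable decomposes arithmetically into its jamo, after which the vowel largely determines the mouth shape. A minimal sketch of that decomposition follows; the viseme grouping in `VOWEL_TO_MOUTH` is a hypothetical assignment of mine, not the paper's mapping.

```python
N_PER_LEAD = 588  # 21 vowels x 28 tails: syllable combinations per lead consonant
N_TAILS = 28      # tail consonants per vowel, including "no tail"

VOWELS = list("ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ")

# Hypothetical viseme grouping: which basic mouth shape each vowel drives.
VOWEL_TO_MOUTH = {"ㅏ": "open", "ㅗ": "round", "ㅜ": "round", "ㅣ": "spread",
                  "ㅡ": "narrow", "ㅓ": "mid-open", "ㅔ": "spread"}

def decompose(syllable):
    """Split a precomposed Hangul syllable (U+AC00..U+D7A3) into
    (lead, vowel, tail) jamo indices using the standard Unicode arithmetic."""
    idx = ord(syllable) - 0xAC00
    if not 0 <= idx < 11172:
        raise ValueError("not a precomposed Hangul syllable")
    return idx // N_PER_LEAD, (idx % N_PER_LEAD) // N_TAILS, idx % N_TAILS

def mouth_shape(syllable):
    """Pick a basic mouth shape from the syllable's vowel."""
    _, vowel_idx, _ = decompose(syllable)
    return VOWEL_TO_MOUTH.get(VOWELS[vowel_idx], "neutral")
```

For example, 한 decomposes to the vowel ㅏ and maps to the "open" shape, while 구 (vowel ㅜ) maps to "round"; a per-syllable sequence of such shapes is the kind of trajectory the paper animates.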


Emotion fusion video communication services for real-time avatar matching technology (영상통신 감성융합 서비스를 위한 실시간 아바타 정합기술)

  • Oh, Dong Sik;Kang, Jun Ku;Sin, Min Ho
    • Journal of Digital Convergence / v.10 no.10 / pp.283-288 / 2012
  • 3D is in the spotlight as a future business sector. Moving from flat 2D to stereoscopic 3D, shape and texture take on the dimensionality of the real world, making the real and virtual worlds feel as if they convincingly coexist. Public interest in 3D has spread through film, notably the 3D-based Avatar, and the 3D TV market has driven a further leap in the 3D era. At the same time, the smartphone, now a modern necessity, has brought new innovation to the IT and mobile phone markets; a smartphone is in effect a small computer, and the ripple effects of its innovation rival those of the telephone and the Internet. With the iPhone, Android devices, and many Windows Phone smartphones now released, this paper proposes a business service model and its prospects: the smartphone camera recognizes the user's emotional facial expression, synthesizes it onto a virtual 3D avatar character in real time, and matches and transmits the result to another mobile phone user, enabling the development of applications for real-time, emotion-fused video communication services.

The Uncanny Valley Effect for Celebrity Faces and Celebrity-based Avatars (연예인 얼굴과 연예인 기반 아바타에서의 언캐니 밸리)

  • Jung, Na-ri;Lee, Min-ji;Choi, Hoon
    • Science of Emotion and Sensibility / v.25 no.1 / pp.91-102 / 2022
  • As virtual space activities become more common, human-virtual agents such as avatars are more frequently used instead of people, but the uncanny valley effect, in which people feel uncomfortable when they see artifacts that look similar to humans, is an obstacle. In this study, we explored the uncanny valley effect for celebrity avatars. We manipulated the degree of atypicality by adjusting the eye size in photos of celebrities, ordinary people, and their avatars and measured the intensity of the uncanny valley effect. As a result, the uncanny valley effect for celebrities and celebrity avatars appeared to be stronger than the effect for ordinary people. This result is consistent with previous findings that more robust facial representations are formed for familiar faces, making it easier to detect facial changes. However, with real faces of celebrities and ordinary people, as in previous studies, the higher the degree of atypicality, the greater the uncanny valley effect, but this result was not found for the avatar stimulus. This high degree of tolerance for atypicality in avatars seems to be caused by cartoon characters' tendency to have exaggerated facial features such as eyes, nose, and mouth. These results suggest that efforts to reduce the uncanny valley in the virtual space service using celebrity avatars are necessary.

Emotional Expression System Based on Dynamic Emotion Space (동적 감성 공간에 기반한 감성 표현 시스템)

  • Sim Kwee-Bo;Byun Kwang-Sub;Park Chang-Hyun
    • Journal of the Korean Institute of Intelligent Systems / v.15 no.1 / pp.18-23 / 2005
  • Human emotion is difficult to define and classify. These vague emotions appear not as a single emotion but as a combination of various emotions, among which one remarkable emotion is expressed. This paper proposes an emotional expression algorithm using a dynamic emotion space, which produces facial expressions similar to vague human emotions. While existing avatars express several predefined emotions from a database, our emotion expression system can produce an unlimited variety of facial expressions by expressing emotions based on a dynamically changing emotion space. To see whether the system can practically produce complex and varied human expressions, we implement it, perform experiments, and verify the efficacy of the emotional expression system based on the dynamic emotion space.
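One way to read the dynamic-emotion-space idea is that the avatar's emotional state is a point in a continuous space that drifts toward incoming stimuli and decays toward neutral, so expressions vary continuously instead of being picked from a fixed database. The sketch below is my own minimal interpretation; the axis names and decay rule are assumptions, not the paper's model.

```python
class DynamicEmotionSpace:
    """A point in a continuous emotion space, updated over time (assumed model)."""

    def __init__(self, dims=("pleasure", "arousal"), decay=0.9):
        self.state = {d: 0.0 for d in dims}  # neutral origin
        self.decay = decay                   # pull back toward neutral each step

    def step(self, stimulus):
        """Blend a stimulus (dict of axis -> value) into the decaying state."""
        for d in self.state:
            self.state[d] = (self.decay * self.state[d]
                             + (1 - self.decay) * stimulus.get(d, 0.0))
        return dict(self.state)

    def dominant(self):
        """The axis with the largest magnitude: the one 'remarkable' emotion
        that surfaces out of the combined emotional state."""
        return max(self.state, key=lambda d: abs(self.state[d]))
```

Because the state is continuous, every intermediate position yields a distinct facial expression, matching the abstract's contrast with avatars limited to predefined database emotions.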

A study on the avatar modelling of Korean 3D facial features in twenties (한국인 20대 안면의 3차원 형태소에 의한 아바타 모델링)

  • Lee, Mi-Seung;Kim, Chang-Hun
    • Journal of the Korea Computer Graphics Society / v.10 no.1 / pp.29-39 / 2004
  • This study concerns 3D facial modeling of the avatars and characters used as communication tools in cyberspace. By generating models with Korean facial features, it aims to support the use of avatars with ethnic specificity and national identity among the contemporary people and youth who frequently use the Internet. Faces of 40 randomly chosen men and 40 women in their twenties were scanned and, based on skull and muscle structure, divided into features such as the eyes, nose, eyebrows, cheeks, mouth, chin, lower jaw, forehead, and ears, from which reference models were generated. Whether a randomly generated 3D facial-feature model has a Korean appearance is verified and analyzed through quantitative measurements. From the features of the eye, nose, mouth, and face-shape regions, deformed features can be generated by interpolating between feature templates, and a face can then be generated from an arbitrary combination of these deformed features.
