• Title/Summary/Keyword: Facial Avatar


Soft Sign Language Expression Method of 3D Avatar (3D 아바타의 자연스러운 수화 동작 표현 방법)

  • Oh, Young-Joon;Jang, Hyo-Young;Jung, Jin-Woo;Park, Kwang-Hyun;Kim, Dae-Jin;Bien, Zeung-Nam
    • The KIPS Transactions: Part B / v.14B no.2 / pp.107-118 / 2007
  • This paper proposes a 3D avatar that expresses sign language naturally, using the lips, facial expression, complexion, pupil motion, and body motion as well as hand shape, hand posture, and hand motion, to overcome the limitations of conventional sign language avatars from a deaf user's viewpoint. To describe the motion data of the hands and other body components structurally and to improve database performance, we introduce the concept of a hyper sign sentence. We demonstrate the effectiveness of the developed system with a usability test based on a questionnaire survey.
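
The abstract names a "hyper sign sentence" but does not define its structure here; the sketch below is one plausible reading, with the channel names and nesting invented purely for illustration, showing how hand and non-hand motion channels could be stored per sign word and queried independently.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical structure: the abstract only says a hyper sign sentence
# describes hand and non-hand motion data structurally, so the channel
# names and nesting here are illustrative assumptions, not the paper's
# actual schema.

@dataclass
class SignChannel:
    name: str                   # e.g. "hand_shape", "lips", "complexion"
    keyframes: List[dict] = field(default_factory=list)

@dataclass
class SignWord:
    gloss: str                  # dictionary form of the sign
    channels: List[SignChannel] = field(default_factory=list)

@dataclass
class HyperSignSentence:
    words: List[SignWord] = field(default_factory=list)

    def channel_track(self, name: str) -> List[SignChannel]:
        """Collect one channel (e.g. lip motion) across the whole
        sentence so the avatar can blend it independently of the hands."""
        return [c for w in self.words for c in w.channels if c.name == name]
```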

Realtime Facial Expression Control of 3D Avatar by Isomap of Motion Data (모션 데이터에 Isomap을 사용한 3차원 아바타의 실시간 표정 제어)

  • Kim, Sung-Ho
    • The Journal of the Korea Contents Association / v.7 no.3 / pp.9-16 / 2007
  • This paper describes a method that distributes high-dimensional facial motion data on a two-dimensional plane using the Isomap algorithm, together with a user interface technique for controlling facial expressions by selecting them while navigating this space in real time. The Isomap algorithm proceeds in three steps. First, the adjacent expressions of each expression datum are defined; the adjacency criterion uses the Pearson correlation coefficient. Second, the manifold distance between expressions is computed as the shortest path between any two expressions, for which we use the Floyd algorithm, and the expression space is composed from these distances. Third, the multidimensional expression space is realized with multidimensional scaling and projected onto a two-dimensional plane. Users can control the facial expressions of a 3D avatar through the interface while navigating the two-dimensional space in real time.
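
The three steps above map directly onto a short implementation. The sketch below is a minimal NumPy version, assuming expression frames arrive as row vectors; the neighbour count k and the correlation-to-distance conversion are illustrative choices, not the paper's parameters.

```python
import numpy as np

def isomap_expression_space(frames, k=5):
    """Project high-dimensional facial motion frames onto a 2D plane.

    frames : (n, d) array, one row per facial expression frame.
    Follows the abstract's three steps: correlation-based adjacency,
    Floyd-Warshall manifold distances, classical MDS. Assumes the
    resulting k-NN graph is connected.
    """
    n = len(frames)
    # Step 1: adjacency from the Pearson correlation (higher correlation
    # = closer expression; converted to a distance in [0, 2]).
    dist = 1.0 - np.corrcoef(frames)
    graph = np.full((n, n), np.inf)
    np.fill_diagonal(graph, 0.0)
    for i in range(n):                       # keep k nearest neighbours
        for j in np.argsort(dist[i])[1:k + 1]:
            graph[i, j] = graph[j, i] = dist[i, j]
    # Step 2: manifold (shortest-path) distances via Floyd-Warshall.
    for m in range(n):
        graph = np.minimum(graph, graph[:, m:m + 1] + graph[m:m + 1, :])
    # Step 3: classical MDS -- double-center the squared distances and
    # take the top-2 eigenvectors as 2D coordinates.
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (graph ** 2) @ J
    vals, vecs = np.linalg.eigh(B)
    top = np.argsort(vals)[::-1][:2]
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))
```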

Extraction and Implementation of MPEG-4 Facial Animation Parameter for Web Application (웹 응용을 위한 MPEG-4 얼굴 애니메이션 파라미터 추출 및 구현)

  • 박경숙;허영남;김응곤
    • Journal of the Korea Institute of Information and Communication Engineering / v.6 no.8 / pp.1310-1318 / 2002
  • In this study, we developed a 3D facial modeler and animator that does not rely on the existing approach of using a 3D scanner or camera. Without expensive image-input equipment, 3D models can be created easily from only front and side images. The system can animate 3D facial models by connecting to an animation server on the WWW, independent of specific platforms and software. It was implemented using the Java 3D API. The facial modeler detects MPEG-4 FDP (Facial Definition Parameter) feature points in the 2D input images and creates a 3D facial model by modifying a generic facial model with those points. The animator animates and renders the 3D facial model according to MPEG-4 FAP (Facial Animation Parameter) values. This system can be used for generating an avatar on the WWW.
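
As a rough illustration of the FAP playback step, the sketch below displaces one feature point per parameter. The FAPU fields and the id-to-vertex table are assumptions standing in for the standard's full definitions (real FAP streams carry 68 parameters expressed in 1/1024ths of a FAPU); the paper's Java 3D implementation is not reproduced here.

```python
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass
class FAPU:
    """Facial Animation Parameter Units measured on the neutral face.
    Field set abbreviated for illustration."""
    ens: float  # eye-nose separation
    mns: float  # mouth-nose separation
    mw: float   # mouth width

# Assumed encoding: FAP id -> (vertex index, axis, FAPU field name).
# The vertex index and axis below are invented examples.
FAP_TABLE: Dict[int, Tuple[int, int, str]] = {
    3: (52, 1, "mns"),   # e.g. open_jaw moving a chin vertex vertically
    # ... the standard defines 68 parameters in total
}

def apply_fap(vertices, fap_id, fap_value, fapu, table=FAP_TABLE):
    """Displace the feature point driven by one decoded FAP value.
    FAP values are integers in units of FAPU/1024."""
    vi, axis, unit = table[fap_id]
    vertices[vi][axis] += fap_value * getattr(fapu, unit) / 1024.0
```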

Individual 3D facial avatar synthesis using elastic matching of facial mesh and image (얼굴 메쉬와 이미지의 동적 매칭을 이용한 개인 아바타의 3차원 얼굴 합성)

  • 강명진;김창헌
    • Proceedings of the Korean Information Science Society Conference / 1998.10c / pp.600-602 / 1998
  • This paper studies the synthesis of a personalized 3D avatar that preserves the characteristics of front and side facial images. The force that pulls a standard facial mesh toward the feature points of the facial image is propagated smoothly to the non-feature vertices following a Gaussian distribution over their distances, acting as a force that deforms the mesh elastically, so that the mesh is matched to the contours of the facial image; a dynamic skin model is then applied to the mesh so that the matched mesh preserves the geometric characteristics it had before matching. Texture-mapping the image onto the 3D mesh generated in this way produces a 3D personal avatar that retains the individual's characteristics.
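
The Gaussian force propagation described above can be sketched in a few lines. The version below is a minimal reading of the matching step only, assuming known feature-point correspondences; the dynamic skin model that preserves the mesh's prior geometry is omitted, and sigma, step size, and iteration count are illustrative.

```python
import numpy as np

def elastic_match(mesh, feature_idx, targets, sigma=0.05, iters=10, step=0.5):
    """Deform a generic face mesh toward image feature points.

    mesh        : (n, dims) vertex positions of the generic model
    feature_idx : indices of vertices tied to detected feature points
    targets     : (len(feature_idx), dims) feature positions in the image
    The pull on each feature vertex is spread to all other vertices
    with a Gaussian falloff in distance, as the abstract describes.
    """
    mesh = np.asarray(mesh, dtype=float).copy()
    for _ in range(iters):
        for fi, tgt in zip(feature_idx, targets):
            force = step * (tgt - mesh[fi])            # pull toward the feature
            d = np.linalg.norm(mesh - mesh[fi], axis=1)
            w = np.exp(-(d ** 2) / (2 * sigma ** 2))   # Gaussian propagation
            mesh += w[:, None] * force
    return mesh
```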


Lip and Voice Synchronization with SMS Messages for Mobile 3D Avatar (SMS 메시지에 따른 모바일 3D 아바타의 입술 모양과 음성 동기화)

  • Youn, Jae-Hong;Song, Yong-Gyu;Kim, Eun-Seok;Hur, Gi-Taek
    • Proceedings of the Korea Contents Association Conference / 2006.11a / pp.682-686 / 2006
  • Interest in 3D mobile content services has been increasing with the emergence of terminals equipped with a mobile 3D engine and the growth of the mobile content market. A mobile 3D avatar is among the most effective products for displaying the character of a personalized mobile device user. However, previous studies on expressing 3D avatars have focused mainly on natural and realistic expression of changes in a character's facial expressions and lip shapes in PC-based virtual environments. In this paper, we propose a method of synchronizing lip shape with voice by applying an SMS message received in a mobile environment to a 3D mobile avatar. The proposed method realizes a natural and effective SMS message reading service by disassembling the received message sentence into syllable units and then synchronizing the lip shape of the 3D avatar with the corresponding voice.
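
The syllable step lends itself to a compact sketch: precomposed Hangul syllables can be split arithmetically via their Unicode composition formula, and each syllable's vowel then indexes a lip shape. The viseme table below is a placeholder, not the paper's mouth-shape set, and the voice-timing side is omitted.

```python
MEDIAL_COUNT = 21   # vowel jamo per initial-consonant block
FINAL_COUNT = 28    # final-consonant slots per (initial, vowel) pair

# Placeholder vowel-index -> viseme table (ㅏ=0, ㅓ=4, ㅗ=8, ㅜ=13, ㅡ=18, ㅣ=20).
VOWEL_TO_VISEME = {0: "a", 4: "eo", 8: "o", 13: "u", 18: "eu", 20: "i"}

def syllable_visemes(message: str):
    """Map each character of an SMS string to a lip-shape key."""
    visemes = []
    for ch in message:
        code = ord(ch) - 0xAC00
        if 0 <= code < 11172:                       # precomposed Hangul syllable
            vowel = (code // FINAL_COUNT) % MEDIAL_COUNT
            visemes.append(VOWEL_TO_VISEME.get(vowel, "neutral"))
        else:
            visemes.append("rest")                  # spaces, digits, Latin, ...
    return visemes

print(syllable_visemes("아바타"))  # ['a', 'a', 'a']
```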


Emotion Recognition and Expression System of Robot Based on 2D Facial Image (2D 얼굴 영상을 이용한 로봇의 감정인식 및 표현시스템)

  • Lee, Dong-Hoon;Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems / v.13 no.4 / pp.371-376 / 2007
  • This paper presents an emotion recognition and expression system for an intelligent robot such as a home or service robot. The robot recognizes emotion from facial images, using the motion and position of many facial features. A tracking algorithm is applied to recognize a moving user from the mobile robot, and a facial region detection algorithm eliminates the skin color of the hands and the non-facial background from the user image. After normalization steps, in which the image is enlarged or reduced according to the distance of the detected facial region and rotated according to the angle of the face, the mobile robot obtains a facial image of fixed size. A multi-feature selection algorithm is implemented to enable the robot to recognize the user's emotion. In this paper, a multilayer perceptron, a type of artificial neural network (ANN), is used for pattern recognition, with the backpropagation (BP) algorithm for learning. The user's emotion recognized by the robot is expressed on a graphic LCD: two coordinates are changed according to the emotion output of the ANN, and the parameters of the facial elements (eyes, eyebrows, mouth) are changed according to those two coordinates. With this system, the complex emotions of a human are expressed by the avatar on the LCD.
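
For the recognition stage, a minimal stand-in using scikit-learn's multilayer perceptron (trained with backpropagation via SGD) is sketched below; the feature dimensionality, class count, and random placeholder data are assumptions, not the paper's dataset.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Sketch of the classification step only: feature vectors derived from
# facial-feature positions and motions (random placeholders here) are
# mapped to emotion labels by an MLP trained with backpropagation.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 16))          # 16 facial-feature measurements (assumed)
y = rng.integers(0, 4, size=300)        # 4 emotion classes (placeholder)

clf = MLPClassifier(hidden_layer_sizes=(32,), solver="sgd",
                    learning_rate_init=0.01, max_iter=500)
clf.fit(X, y)
emotion = clf.predict(X[:1])            # label used to drive the LCD avatar
```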

SVM Based Facial Expression Recognition for Expression Control of an Avatar in Real Time (실시간 아바타 표정 제어를 위한 SVM 기반 실시간 얼굴표정 인식)

  • Shin, Ki-Han;Chun, Jun-Chul;Min, Kyong-Pil
    • Proceedings of the HCI Society of Korea Conference / 2007.02a / pp.1057-1062 / 2007
  • Facial expression recognition is of growing importance in diverse fields such as psychology research, facial animation synthesis, robotics, and HCI (Human Computer Interaction). Facial expressions provide important information for social interaction, such as a person's emotional state and degree of interest. Facial expression recognition methods can be broadly divided into those using still images and those using video. Still-image methods are fast because of their low processing load, but recognition by matching and registration is difficult when facial changes are large. Video-based methods, using techniques such as neural networks, optical flow, and HMMs (Hidden Markov Models), can process the user's expression changes continuously and are therefore useful for real-time interaction with a computer; however, they require more processing than still images and a large amount of data for training and database construction. The real-time facial expression recognition system proposed in this paper consists of four stages: facial region detection, facial feature detection, facial expression classification, and avatar control. To accurately detect the facial region in the face image captured by a webcam, histogram equalization and the reference white technique are applied, and the facial region is detected using the HT color model and a PCA (Principal Component Analysis) transform. Within the detected facial region, candidate regions for the facial features are determined using the geometric information of the face, and the features needed for expression recognition are extracted by template matching of each feature point and by edge detection. Feature vectors are obtained from the motion information produced by applying the optical flow algorithm to each detected feature point. These feature vectors are classified into facial expressions using an SVM (Support Vector Machine), and the recognized expression is rendered on an avatar using the extracted facial features.
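
A minimal sketch of the final classification step, assuming per-frame feature vectors built from the optical-flow motion of the tracked feature points; the class labels, feature dimensionality, and RBF kernel settings are illustrative, with scikit-learn standing in for whatever SVM implementation the paper used.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X_train = rng.normal(size=(200, 24))    # e.g. (dx, dy) for 12 feature points
y_train = rng.integers(0, 3, size=200)  # e.g. neutral / smile / surprise

svm = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_train, y_train)

flow_features = rng.normal(size=(1, 24))     # one webcam frame's motion vector
expression = svm.predict(flow_features)[0]   # class index used to pose the avatar
```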


Phased Visualization of Facial Expressions Space using FCM Clustering (FCM 클러스터링을 이용한 표정공간의 단계적 가시화)

  • Kim, Sung-Ho
    • The Journal of the Korea Contents Association / v.8 no.2 / pp.18-26 / 2008
  • This paper presents a phased visualization method for a facial expression space that enables users to control the facial expressions of 3D avatars by selecting a sequence of facial frames from the space. Our system creates a 2D facial expression space from approximately 2,400 facial expression frames comprising a neutral expression and 11 motions. Facial expression control of the 3D avatar is carried out in real time as users navigate the expression space. Because expression control should proceed in phases, from broad expressions down to detailed ones, the system requires a phased visualization method, for which this paper uses fuzzy clustering. Initially, the system creates 11 clusters from the space of 2,400 facial expressions; each time the phase level increases, the system doubles the number of clusters. Since the cluster centers do not coincide with expressions in the expression space, we take the expression closest to each cluster center as that center. We had users control the phased facial expressions of a 3D avatar with the system and evaluated the system based on the results.
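
The coarse-to-fine clustering is easy to sketch: a compact fuzzy c-means below, plus a driver that doubles the cluster count per level and snaps each center to the nearest real expression frame, as the abstract describes. The fuzzifier m, iteration count, and seed are illustrative choices.

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100, seed=0):
    """Minimal fuzzy c-means: returns c cluster centers for data X."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                                # memberships per point
    for _ in range(iters):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None] - centers[:, None], axis=2) + 1e-12
        U = d ** (-2 / (m - 1))                       # standard FCM update
        U /= U.sum(axis=0)
    return centers

def phased_levels(points, levels=3, base=11):
    """Cluster the 2D expression space, doubling the cluster count per
    level and snapping each center to its nearest real expression frame."""
    out = []
    for lvl in range(levels):
        centers = fcm(points, base * 2 ** lvl)
        snapped = [points[np.argmin(np.linalg.norm(points - ctr, axis=1))]
                   for ctr in centers]
        out.append(np.array(snapped))
    return out
```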

A Vision-based Approach for Facial Expression Cloning by Facial Motion Tracking

  • Chun, Jun-Chul;Kwon, Oryun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.2 no.2 / pp.120-133 / 2008
  • This paper presents a novel approach to facial motion tracking and facial expression cloning for creating realistic facial animation of a 3D avatar. Exact head pose estimation and facial expression tracking are critical issues that must be solved when developing vision-based computer animation, and we address both problems. The proposed approach consists of two phases: dynamic head pose estimation and facial expression cloning. Dynamic head pose estimation robustly estimates the 3D head pose from input video images: given an initial reference template of a face image and the corresponding 3D head pose, the full head motion is recovered by projecting a cylindrical head model onto the face image, and by updating the template dynamically the head pose can be recovered regardless of lighting variations and self-occlusion. In the facial expression synthesis phase, the variations of the major facial feature points of the face images are tracked using optical flow and retargeted to the 3D face model. At the same time, we exploit the RBF (Radial Basis Function) to deform the local area of the face model around the major feature points. Consequently, facial expression synthesis is performed by directly tracking the variations of the major feature points and indirectly estimating the variations of the regional feature points. The experiments show that the proposed vision-based facial expression cloning method automatically estimates the 3D head pose and produces realistic 3D facial expressions in real time.
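
The retargeting step can be sketched as standard RBF scattered-data interpolation: solve for weights that reproduce the tracked feature-point displacements, then evaluate the interpolant at the surrounding vertices. The Gaussian kernel and its width are assumptions; the abstract does not specify the basis function used.

```python
import numpy as np

def rbf_deform(feature_pts, displacements, region_pts, eps=1.0):
    """Propagate tracked feature-point motion to nearby mesh vertices
    with Gaussian RBF interpolation -- a sketch of the retargeting step,
    not the paper's exact kernel or solver.

    feature_pts   : (m, 3) major feature points on the neutral model
    displacements : (m, 3) their tracked offsets for the current frame
    region_pts    : (k, 3) regional vertices to deform indirectly
    """
    def kernel(A, B):
        d = np.linalg.norm(A[:, None] - B[None], axis=2)
        return np.exp(-(eps * d) ** 2)
    # Solve for RBF weights that reproduce the feature displacements...
    W = np.linalg.solve(kernel(feature_pts, feature_pts), displacements)
    # ...then evaluate the interpolated displacement at regional vertices.
    return region_pts + kernel(region_pts, feature_pts) @ W
```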

A Study on the Visualization of Brand Personality by Utilizing the Avatar (아바타를 활용한 브랜드 개성의 시각화에 관한 연구)

  • Song, Min-Jeong;Chung, Kyung-Won
    • Archives of Design Research / v.19 no.1 s.63 / pp.215-224 / 2006
  • As competition becomes more severe, the importance of brand confidence has come to the fore, mainly because customers tend to choose a product or service in conjunction with that confidence. The concept of brand personality has formed as a result of various efforts to establish a differentiated and trusted brand image. Brand personality is regarded as a useful means of meeting both a corporation's objective of establishing a distinctive brand identity and customers' desire to express their self-image. In line with the growing importance of brand personality, researchers have attempted to measure it by various methods. However, most research has been based on verbal and quantitative methods that require a long time and much effort to analyze, and such methods are also limited in their ability to visualize results. In this vein, this study aims to develop a new visual brand personality measurement system using a purpose-designed avatar. The major findings are as follows. First, the avatar can be an effective means of visualizing brand personality: just as an avatar can visualize human personality through facial expressions, clothing, attitudes, and movements, a specially designed avatar can express brand personality. Second, brand personality types can be divided into seven distinctive classes, which serve as guidelines for developing purpose-designed brand personality avatars. Third, validity tests show that the purpose-designed brand personality avatar can be an effective means of measuring brand personality. In conclusion, the avatar can be a more powerful tool than language for measuring brand personality.
