• Title/Abstract/Keyword: Facial animation

Search results: 142 (processing time: 0.027 s)

A study on the Development of Animated Graphic Chatting Program based on Facial Expression (표정과 제스처에 기반한 대화기법을 활용한 Animated Graphic Chatting 프로그램 개발 연구)

  • 안상혁;정진오
    • Archives of design research
    • /
    • Vol. 12, No. 4
    • /
    • pp.129-137
    • /
    • 1999
  • The interactive entertainment industry on the Internet has great potential for growth in the 21st century, and many kinds of Internet content will emerge. Internet content can be classified into categories such as identity, entertainment, learning, shopping, and community, of which shopping and community hold the largest market share. Community has recently become more important to making an Internet business successful, because it is valuable for creating virtual societies in cyberspace, and chatting programs have been an effective means of forming communities. We therefore developed an animated graphic chatting program that expresses emotion more effectively than text-based chatting.


A Study on 3D Character Design for Games (About Improvement efficiency with 2D Graphics) (3D Game 제작을 위한 Character Design에 관한 연구 (3D와 2D Graphics의 결합효율성에 관하여))

  • Cho, Dong-Min;Jung, Sung-Hwan
    • Journal of Korea Multimedia Society
    • /
    • Vol. 10, No. 10
    • /
    • pp.1310-1318
    • /
    • 2007
  • First of all, what modeling technique is used to build 3D game characters? It is a technique refined over years of experience, and its basis is the low-polygon character. We always work in low polygons for two reasons: a low-poly character is easy to modify (changing shapes, building morph targets for facial expressions, and so on), and it is easy to animate. In addition, advances in computer hardware have brought about an expansion of 3D digital media, and 3D techniques are now used widely in animation, virtual reality, film, advertising, and games. As computing power improves, the demand for 3D animation and characters grows, and research on 3D game modeling that can represent a character's emotions and sensibilities is beginning to take shape. 3D characters in games are the core channel for communicating emotion and information to users through their facial expressions, characteristic motions, and sounds, and interest in 3D motion and facial expression rises with their frequency of use. In this study, we therefore suggest an effective method of 3D character modeling based on 2D graphics.
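
The morph targets mentioned above are the standard way low-poly characters get facial expressions: each target stores alternative positions for the same vertices, and the animated mesh blends toward them. A minimal sketch, with hypothetical vertex data (none of it is from the paper):

```python
# Linear morph (blend shape): each expression target stores per-vertex
# positions for the same topology as the neutral low-poly mesh; the
# animated mesh is the neutral mesh plus weighted offsets to each target.

def morph(neutral, targets, weights):
    """Blend a neutral mesh toward one or more expression targets.

    neutral: list of (x, y, z) vertex positions
    targets: dict of name -> list of (x, y, z) target positions
    weights: dict of name -> blend weight in [0, 1]
    """
    result = []
    for i, (x, y, z) in enumerate(neutral):
        dx = dy = dz = 0.0
        for name, w in weights.items():
            tx, ty, tz = targets[name][i]
            dx += w * (tx - x)
            dy += w * (ty - y)
            dz += w * (tz - z)
        result.append((x + dx, y + dy, z + dz))
    return result

# Tiny hypothetical example: a two-vertex "mouth corner" smile target,
# applied at half strength.
neutral = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
targets = {"smile": [(0.0, 0.2, 0.0), (1.0, 0.2, 0.0)]}
half_smile = morph(neutral, targets, {"smile": 0.5})
```

Because the blend is linear per vertex, several targets (smile, blink, brow raise) can be active at once, which is why the low-poly-plus-morphs workflow animates cheaply.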


A Comic Facial Expression Method for Intelligent Avatar Communications in the Internet Cyberspace (인터넷 가상공간에서 지적 아바타 통신을 위한 코믹한 얼굴 표정의 생성법)

  • 이용후;김상운;청목유직
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • Vol. 40, No. 1
    • /
    • pp.59-73
    • /
    • 2003
  • As a means of overcoming the linguistic barrier between different languages on the Internet, a new sign-language communication system using CG animation techniques has been developed and proposed. The system considers the joint angles of the arms and hands corresponding to gestures as a non-verbal communication tool. Emotional expression, however, can also play an important role in communication. In particular, a comic expression is more efficient than a realistic facial expression, and the movements of the cheeks and jaws are more important Action Units (AUs) than those of the eyebrows, eyes, mouth, etc. In this paper, we therefore design a 3D emotion editor using a 2D model and extract the AUs (called principal AUs, or PAUs, here) that play a principal role in expressing emotions. We also propose a method of generating universal emotional expressions on avatar models that have different vertex structures, employing a method of dynamically adjusting AU movements according to emotional intensity. The proposed system is implemented with Visual C++ and Open Inventor on Windows platforms. Experimental results show that the system could be used as a non-verbal communication means to overcome the linguistic barrier.
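
The idea of dynamically adjusting AU movements by emotional intensity can be sketched as scaling per-AU displacement vectors before applying them to an avatar's feature points. The AU names, displacement values, and emotion table below are hypothetical illustrations, not the paper's PAU data:

```python
# Each AU stores 2D displacement vectors for the feature points it
# controls; an emotion maps AUs to activation levels, and a global
# intensity in [0, 1] scales the whole expression.

AU_DISPLACEMENTS = {            # hypothetical AU -> {point index: (dx, dy)}
    "cheek_raise": {3: (0.0, 0.4), 4: (0.0, 0.4)},
    "jaw_drop":    {7: (0.0, -0.9)},
}

EMOTIONS = {                    # hypothetical emotion -> AU activation levels
    "joy":      {"cheek_raise": 1.0, "jaw_drop": 0.3},
    "surprise": {"cheek_raise": 0.2, "jaw_drop": 1.0},
}

def apply_emotion(points, emotion, intensity):
    """Return feature points displaced for `emotion` at `intensity`."""
    moved = dict(points)
    for au, level in EMOTIONS[emotion].items():
        for idx, (dx, dy) in AU_DISPLACEMENTS[au].items():
            x, y = moved[idx]
            moved[idx] = (x + intensity * level * dx,
                          y + intensity * level * dy)
    return moved

points = {3: (1.0, 0.0), 4: (-1.0, 0.0), 7: (0.0, -1.0)}
half_joy = apply_emotion(points, "joy", 0.5)   # joy at 50% intensity
```

Because displacements are defined per AU rather than per avatar, the same emotion table can drive models with different vertex structures, which matches the paper's goal of universal expressions.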

Development of 'Children's Food Avatar' Application for Dietary Education (식생활교육용 '어린이 푸드 아바타' 애플리케이션 개발)

  • Cho, Joo-Han;Kim, Sook-Bae;Kim, Soon-Kyung;Kim, Mi-Hyun;Kim, Gap-Soo;Kim, Se-Na;Kim, So-Young;Kim, Jeong-Weon
    • Korean Journal of Community Nutrition
    • /
    • Vol. 18, No. 4
    • /
    • pp.299-311
    • /
    • 2013
  • An educational application (app) called 'Children's Food Avatar' was developed in this study as a smart-learning mobile tool for elementary school students, using a food database (DB) of nutrition and functionality from the Rural Development Administration (RDA). The app was designed to develop children's desirable dietary habits through an online activity of choosing foods for a meal from the RDA food DB, provided as a 'Green Water Mill' guide. A customized avatar system was introduced as an element of fun and interactive animation for children: it provides a nutritional evaluation of the selected foods by changing its appearance, facial expression, and speech balloon, thereby giving children chances to correct their food choices toward a balanced diet. In addition, a nutrition information menu was included in the app to help children understand various nutrients, their functions, and healthy dietary life. When the app was used by 54 elementary school students for a week in November 2012, significant increases in their levels of dietary knowledge, attitude, and behavior were observed compared with those of the control group (p < 0.05, 0.01). Both students and teachers showed high levels of satisfaction, ranging from 4.30 to 4.89, so the app could be widely used as a smart-learning device for dietary education of elementary school students.

A Study on a 3D Design System for Korean Lip-sync (<한국어 립씽크를 위한 3D 디자인 시스템 연구>)

  • Shin, Dong-Sun;Chung, Jin-Oh
    • Proceedings of the HCI Society of Korea Conference
    • /
    • HCI Society of Korea 2006 Conference, Part 2
    • /
    • pp.362-369
    • /
    • 2006
  • We studied a Korean lip-sync synthesis scheme for 3D graphics and developed a design system that automatically generates natural lip-sync corresponding to speech sounds. Facial animation can be broadly divided into expression animation, i.e., the animation of emotional expressions, and dialogue animation, which centers on changes of lip shape during speech. Expression animation consists of elements that are nearly universal worldwide, apart from slight cultural differences, whereas dialogue animation must account for differences between languages. Because of this, directly applying the speech-driven lip-sync synthesis methods proposed for English or Japanese to Korean can cause perceptual distortion due to the mismatch between auditory and visual information. To solve this problem, we developed a Korean lip-sync synthesis system that generates 3D dialogue animation from text and speech through the following steps: converting written text into a Korean phoneme sequence, time-segmenting the input speech with an HMM algorithm, and defining the 3D movements of facial feature points for each Korean phoneme; the system was applied to an actual character design process. This work is also preliminary research into component technologies for dynamic avatar-based interfaces, beyond immediately applicable 3D character animation; that is, it has a dual character, applicable both to visual design fields that use 3D graphics technology and to HCI. Human communication consists of verbal dialogue and visual facial expression, so applying facial animation gives communication a more human aspect. Ultimately, the system can be widely used in avatar-based interface design and virtual reality, fields expected to evolve toward more comfortable, conversational human interfaces that emphasize interactivity.
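
The pipeline described above (text to phoneme sequence, HMM time segmentation, per-phoneme feature-point movement) can be sketched as turning a time-aligned phoneme sequence into viseme keyframes. The phoneme-to-pose table and the timings below are hypothetical placeholders for the paper's HMM segmentation and Korean phoneme definitions:

```python
# Given a phoneme sequence with start/end times (the paper obtains these
# via HMM-based segmentation of the input speech), emit keyframes that
# drive mouth feature parameters with one viseme pose per phoneme.

VISEMES = {                 # hypothetical phoneme -> mouth pose parameters
    "a":   {"jaw_open": 0.9, "lip_round": 0.1},
    "n":   {"jaw_open": 0.2, "lip_round": 0.0},
    "o":   {"jaw_open": 0.6, "lip_round": 0.8},
    "sil": {"jaw_open": 0.0, "lip_round": 0.0},
}

def keyframes(segments):
    """segments: list of (phoneme, start_sec, end_sec) from alignment.

    Returns (time, pose) keyframes, holding each viseme from its
    segment start and closing the mouth after the last segment.
    """
    frames = []
    for phoneme, start, end in segments:
        pose = VISEMES.get(phoneme, VISEMES["sil"])  # unknowns -> silence
        frames.append((start, pose))
    frames.append((segments[-1][2], VISEMES["sil"]))
    return frames

# A short utterance, already time-aligned (hypothetical timings):
frames = keyframes([("a", 0.00, 0.18), ("n", 0.18, 0.30), ("o", 0.30, 0.55)])
```

A production system would interpolate between these keyframes and handle coarticulation, but the keyframe list is the interface between the segmentation stage and the 3D animation stage.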


A Study on The Expression of Digital Eye Contents for Emotional Communication (감성 커뮤니케이션을 위한 디지털 눈 콘텐츠 표현 연구)

  • Lim, Yoon-Ah;Lee, Eun-Ah;Kwon, Jieun
    • Journal of Digital Convergence
    • /
    • Vol. 15, No. 12
    • /
    • pp.563-571
    • /
    • 2017
  • The purpose of this paper is to establish emotional expression factors for digital eye contents that can be applied to digital environments. We derive the emotions that can be applied to a smart doll and suggest guidelines for the expressive factors of each emotion. First, we researched the concepts and characteristics of emotional expression shown in eyes in publications, animation, and live video. Second, we identified six emotions (Happy, Angry, Sad, Relaxed, Sexy, Pure) and extracted their emotional expression factors. Third, we analyzed the extracted factors to establish guidelines for the emotional expression of digital eyes. As a result, this study found that the factors that distinguish and represent each emotion fall into four categories: eye shape, gaze, iris size, and effect. These can be used to enhance emotional communication in digital contents such as animations, robots, and smart toys.
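
The four factor categories identified above (eye shape, gaze, iris size, effect) lend themselves to a per-emotion parameter table that a renderer can look up. The concrete values below are hypothetical stand-ins, not the paper's guidelines:

```python
# One entry per emotion, each setting the four factor categories the
# study identifies. Shape/gaze/effect names and iris scales are
# invented for illustration.

EYE_GUIDELINES = {
    "Happy":   {"shape": "curved_up",   "gaze": "center", "iris": 1.1,  "effect": "sparkle"},
    "Angry":   {"shape": "slanted",     "gaze": "fixed",  "iris": 0.8,  "effect": "none"},
    "Sad":     {"shape": "drooped",     "gaze": "down",   "iris": 1.0,  "effect": "tear"},
    "Relaxed": {"shape": "half_closed", "gaze": "center", "iris": 1.0,  "effect": "none"},
    "Sexy":    {"shape": "narrowed",    "gaze": "side",   "iris": 1.05, "effect": "gloss"},
    "Pure":    {"shape": "round",       "gaze": "up",     "iris": 1.2,  "effect": "highlight"},
}

def eye_params(emotion):
    """Return the four expression factors for one of the six emotions."""
    if emotion not in EYE_GUIDELINES:
        raise KeyError(f"unknown emotion: {emotion}")
    return EYE_GUIDELINES[emotion]

params = eye_params("Happy")
```

Keeping the guideline as data rather than code makes it easy to retune per device (smart doll, robot, on-screen avatar) without touching the rendering logic.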

CORRECTION OF SECONDARY LIP DEFORMITIES IN CLEFT PATIENTS (구순열 환자의 이차 구순 성형술)

  • Kim, Jong-Ryoul;Byun, June-Ho
    • Maxillofacial Plastic and Reconstructive Surgery
    • /
    • Vol. 21, No. 4
    • /
    • pp.401-406
    • /
    • 1999
  • Secondary deformities of the lip and nose in individuals with repaired unilateral and bilateral clefts vary in severity depending on the state of the original defect, the care taken in the initial surgical procedure, the pattern of the patient's facial growth, and the effectiveness of interceptive orthodontic treatment. Because each patient has a unique combination of deformities, surgical reconstruction usually requires the modification and combination of several surgical techniques. Residual lip deformities after primary repair may be esthetic or functional and include scars, skin shortage or excess (vertical and transverse), and orbicularis oris muscle malposition or diastasis. The key to accurate repair of secondary cleft lip deformities is a precise diagnosis, which requires observation of the patient in animation and in repose. The quality of the scar is not the only factor determining the overall appearance of the lip; observing the patient in animation is critical to assessing muscular function. Factors that require precise analysis include lip length, the appearance of the Cupid's bow and philtrum, and nasal symmetry. Only after this detailed analysis can a decision be made as to whether a major or minor deformity exists. We report successful cases using various techniques for secondary lip deformities.


Simultaneous Simplification of Multiple Triangle Meshes for Blend Shape (블렌드쉐입을 위한 다수 삼각 메쉬의 동시 단순화 기법)

  • Park, Jung-Ho;Kim, Jongyong;Song, Jonghun;Park, Sanghun;Yoon, Seung-Hyun
    • Journal of the Korea Computer Graphics Society
    • /
    • Vol. 25, No. 3
    • /
    • pp.75-83
    • /
    • 2019
  • In this paper we present a new technique for simultaneously simplifying N triangle meshes that share the same number of vertices and the same connectivity. Applying an existing simplification technique to each of the N meshes independently creates simplified meshes with the same number of vertices but different connectivity, a limitation that makes it difficult to construct a simplified blend-shape model from a high-resolution one. The technique presented in this paper considers the N meshes simultaneously and performs simplification by selecting the edge with the minimal total removal cost. Thus, the N simplified meshes generated by the simplification retain the same number of vertices and the same connectivity. The efficiency and effectiveness of the proposed technique are demonstrated by applying it to multiple triangle meshes.
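
The core idea, choosing each collapse by its cost summed over all N meshes so that every mesh collapses the same edge and the shared connectivity survives, can be sketched as follows. A squared edge-length cost stands in for the paper's actual removal cost (an assumption; a quadric-error metric would be typical):

```python
# N blend-shape meshes share vertex count and connectivity; only vertex
# positions differ. We pick the edge whose summed collapse cost over all
# meshes is minimal, so the same edge is collapsed in every mesh and the
# simplified meshes stay topologically identical.

def edge_cost(meshes, edge):
    """Sum a simple squared-length cost for `edge` over all meshes."""
    a, b = edge
    total = 0.0
    for verts in meshes:
        (ax, ay, az), (bx, by, bz) = verts[a], verts[b]
        total += (ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
    return total

def pick_edge(meshes, edges):
    """Return the shared edge with minimal summed collapse cost."""
    return min(edges, key=lambda e: edge_cost(meshes, e))

# Two tiny blend-shape variants of the same 3-vertex mesh:
mesh_a = [(0, 0, 0), (1, 0, 0), (0, 2, 0)]
mesh_b = [(0, 0, 0), (1, 1, 0), (0, 2, 0)]
edges = [(0, 1), (1, 2), (0, 2)]
best = pick_edge([mesh_a, mesh_b], edges)
```

An edge that is cheap in one blend shape but expensive in another (e.g. across a mouth that is open in one target) accumulates a high summed cost, so the shared collapse order tends to preserve features that matter in any of the N shapes.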

Pose Transformation of a Frontal Face Image by Invertible Meshwarp Algorithm (역전가능 메쉬워프 알고리즘에 의한 정면 얼굴 영상의 포즈 변형)

  • 오승택;전병환
    • Journal of KIISE:Software and Applications
    • /
    • Vol. 30, No. 1-2
    • /
    • pp.153-163
    • /
    • 2003
  • In this paper, we propose a new image-based rendering (IBR) technique for pose transformation of a face using only a frontal face image and its mesh, without a three-dimensional model. To substitute for the 3D geometric model, we first build a standard mesh set of a reference person for several face poses: front, left, right, half-left, and half-right. For a given person, we compose only the frontal mesh of the frontal face image to be transformed; the other meshes are generated automatically from the standard mesh set. The frontal face image is then geometrically transformed to different views using the Invertible Meshwarp Algorithm, which is improved to tolerate the overlap or inversion of neighboring vertices in the mesh. The same warping algorithm is used to generate opening and closing effects for the eyes and mouth. To evaluate the transformation performance, we captured dynamic images of 10 people rotating their heads horizontally and measured the location error of 14 main features between corresponding original and transformed facial images. That is, we calculated the average difference between the distances from the center of both eyes to each feature point in the corresponding original and transformed images. As a result, the average feature location error is about 7.0% of the distance from the center of both eyes to the center of the mouth.
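
The evaluation metric described above, the average difference in eye-center-to-feature distances normalized by the eye-center-to-mouth distance, can be computed directly. The feature coordinates below are hypothetical, and only two features are shown where the study uses 14:

```python
import math

# For each feature, compare the distance from the eye-center to the
# feature in the original vs. the transformed image; report the average
# absolute difference as a percentage of the eye-center-to-mouth distance.

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def location_error(eye_center, mouth_center, original, transformed):
    """original/transformed: lists of corresponding feature points."""
    norm = dist(eye_center, mouth_center)
    diffs = [abs(dist(eye_center, o) - dist(eye_center, t))
             for o, t in zip(original, transformed)]
    return 100.0 * (sum(diffs) / len(diffs)) / norm

# Hypothetical two-feature example:
err = location_error((0, 0), (0, 10),
                     [(3, 4), (0, 8)],    # features in the original image
                     [(3, 4), (0, 9)])    # features after pose transform
```

Normalizing by the eye-to-mouth distance makes the error scale-free, so results are comparable across subjects photographed at different distances; the paper's reported figure of about 7.0% is this percentage.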

A Study on the Formative Characteristics of Character Design : Focusing on Body Proportion (캐릭터 디자인의 조형적 특성에 관한 연구 -신체비례를 중심으로-)

  • Jung, Hye Kyungg
    • Journal of the Korean Society of Floral Art and Design
    • /
    • No. 41
    • /
    • pp.45-59
    • /
    • 2019
  • Characters that can be connected to diverse cultural contents have spread across diverse platforms with the development of digital technology, and the related industry and market are growing rapidly. Recently, the use of character emoticons in smartphone messengers has increased sharply, so characters have become established as a tool for non-verbal communication while also drawing attention as an independent area. With the expansion of the character market, the importance of design that gives consumers interest and familiarity is increasingly emphasized. The body proportion of a character carries implicative and symbolic meanings that can express diverse personalities. This study therefore examined the body proportions of characters with high consumer preference and analyzed the characteristics of the formative elements of character design according to body proportion. The analysis found that exaggerated SD characters with two- or three-head-high proportions and realistic 'Real' characters with seven- or eight-head-high proportions were preferred. For SD characters, high-chroma colors conveying a cute and cheerful image were used; for Real characters, three-dimensionality was expressed through colors with active images and variations of light and shade. Although SD characters have limited motion due to omitted body parts, the facial movements of animated SD characters are exaggerated, while Real characters depict realistic and dynamic motions.