• Title/Summary/Keyword: Lip-Sync animation

Search Results: 12

Production of Lip-sync Animation, 3D Character in Dialogue-Based Image Contents Work System by Utilizing Morphing Technique (Morphing 기법을 활용한 대화구문기반 영상 콘텐츠 저작도구 시스템 내 3D 캐릭터 Lip-sync Animation제작)

  • Jung, Won-Joe;Lee, Dong-Lyeor;Ryu, Seuc-Ho;Kyung, Byung-Pyo;Lee, Wan-Bok
    • Journal of Digital Convergence / v.10 no.7 / pp.253-259 / 2012
  • In this study, lip-sync animation for 3D characters was produced and applied within the production flow of a dialogue syntax-based video content authoring system, using a mouth-shape chart for the characters. By expressing natural mouth shapes through vertex-animation morphing techniques, the characters become more engaging, and the visual information delivered to viewers achieves high intelligibility.
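
The vertex-animation morphing the abstract describes amounts to interpolating vertex positions between key mouth shapes. A minimal sketch follows; the vertex data and function name are hypothetical illustrations, not taken from the paper:

```python
def morph_mouth(shape_a, shape_b, t):
    """Linearly interpolate between two key mouth shapes.

    shape_a, shape_b: lists of (x, y, z) vertex positions;
    t in [0, 1] is the blend weight toward shape_b.
    """
    t = min(max(t, 0.0), 1.0)  # clamp to the valid blend range
    return [tuple((1.0 - t) * a + t * b for a, b in zip(va, vb))
            for va, vb in zip(shape_a, shape_b)]

# Toy three-vertex lip outline (illustrative data only).
closed_mouth = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.5, 0.1, 0.0)]
open_mouth   = [(0.0, -0.2, 0.0), (1.0, -0.2, 0.0), (0.5, 0.4, 0.0)]

# At t=0.5 every vertex sits midway between the two key shapes.
halfway = morph_mouth(closed_mouth, open_mouth, 0.5)
```

In production the same interpolation is usually driven per frame by the phoneme timing, with one key shape per viseme.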

A Study on Korean Lip-Sync for Animation Characters - Based on Lip-Sync Technique in English-Speaking Animations (애니메이션 캐릭터의 한국어 립싱크 연구 : 영어권 애니메이션의 립싱크 기법을 기반으로)

  • Kim, Tak-Hoon
    • Cartoon and Animation Studies / s.13 / pp.97-114 / 2008
  • This study aims to identify mouth shapes suited to Korean consonants and vowels for Korean animation by analyzing the lip-sync process of English-speaking animation in the United States, which is based on pre-recording. Research was conducted to help character animators understand the concept of Korean lip-sync, which is done after recording, and to introduce the minimum basic mouth shapes required for Korean expression that can be applied to various characters. The introduction notes the necessity of Korean lip-sync in domestic animation and presents research methods for deriving Korean lip-sync data from English lip-sync data, taking an American production as an example. The main body demonstrates the characteristics and roles of the 8 basic mouth shapes required for English pronunciation, leaves out mouth shapes required for English but not for Korean, and conversely adds mouth shapes required for Korean but not for English. Based on these results, the study diagrams the mouth shapes of Korean expression through various examples and examines how mouth shapes vary when used for consonants, vowels, and batchim (final consonants). In addition, the case study proposes a method for transferring lines to the exposure sheet and a method for arranging mouth shapes according to lip-sync in practical animation production. However, lines from a Korean movie had to be used as the example, because there is no precedent in Korea for animation production with systematic Korean lip-sync data.


Development of Automatic Lip-sync MAYA Plug-in for 3D Characters (3D 캐릭터에서의 자동 립싱크 MAYA 플러그인 개발)

  • Lee, Sang-Woo;Shin, Sung-Wook;Chung, Sung-Taek
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.18 no.3 / pp.127-134 / 2018
  • In this paper, we developed an Auto Lip-Sync Maya plug-in that extracts Korean phonemes from voice data and Korean text, and produces high-quality 3D lip-sync animation from the separated phonemes. In the developed system, phoneme separation classifies the 8 vowels and 13 consonants used in Korean, referring to the 49 phonemes provided by SAPI, the Microsoft Speech API engine. Although vowels and consonants are pronounced with a variety of mouth shapes, the same viseme can be applied to those that look identical. Based on this, we developed the plug-in in Python so that lip-sync animation can be generated automatically in a single pass.
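
The core idea here, collapsing phonemes that share a mouth shape into one viseme, can be illustrated with a simple lookup table. The grouping and names below are assumptions for demonstration, not the plug-in's actual classification:

```python
# Illustrative phoneme-to-viseme table: phonemes that produce the same
# mouth shape share one viseme key (hypothetical grouping).
PHONEME_TO_VISEME = {
    # bilabials close the lips the same way
    "b": "lips_closed", "p": "lips_closed", "m": "lips_closed",
    # vowels with distinct mouth openings
    "a": "open_wide", "e": "open_mid", "i": "spread",
    "o": "rounded", "u": "rounded_small",
}

def phonemes_to_visemes(phonemes):
    """Map a phoneme sequence to viseme keys, merging consecutive
    duplicates so a mouth shape is held rather than re-triggered."""
    visemes = []
    for p in phonemes:
        v = PHONEME_TO_VISEME.get(p, "rest")  # unknown phonemes fall back to rest
        if not visemes or visemes[-1] != v:
            visemes.append(v)
    return visemes

print(phonemes_to_visemes(["m", "a", "m", "a"]))
# ['lips_closed', 'open_wide', 'lips_closed', 'open_wide']
```

In a Maya plug-in, each viseme key would then drive a blend-shape target on the character rig.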

Speaker Adapted Real-time Dialogue Speech Recognition Considering Korean Vocal Sound System (한국어 음운체계를 고려한 화자적응 실시간 단모음인식에 관한 연구)

  • Hwang, Seon-Min;Yun, Han-Kyung;Song, Bok-Hee
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.6 no.4 / pp.201-207 / 2013
  • Voice recognition techniques have been developed and actively applied to various information devices such as smartphones and car navigation systems, but the basic research underlying speech recognition is grounded in results for English. Because lip-sync production generally requires tedious hand work by animators, it seriously affects the production cost and development period needed to obtain high-quality lip animation. In this research, a real-time automatic lip-sync algorithm for virtual characters in digital content is studied with the Korean vocal sound system in mind. The suggested algorithm contributes to producing natural lip animation at lower cost and with a shorter development period.

A Study on Lip Sync and Facial Expression Development in Low Polygon Character Animation (로우폴리곤 캐릭터 애니메이션에서 립싱크 및 표정 개발 연구)

  • Ji-Won Seo;Hyun-Soo Lee;Min-Ha Kim;Jung-Yi Kim
    • The Journal of the Convergence on Culture Technology / v.9 no.4 / pp.409-414 / 2023
  • We describe how to implement the character expressions and animations that play an important role in conveying emotion and personality in low-polygon character animation. With the development of the video industry, character expressions and mouth-shape lip-sync in animation can achieve natural movement at a level close to real life, but such expert-level techniques are difficult for non-experts to use. We therefore aimed to provide a guide for low-budget, low-polygon character animators and non-experts to create mouth-shape lip-sync more naturally using accessible, highly usable features. A total of 8 mouth shapes were developed for lip-sync animation: 'ㅏ', 'ㅔ', 'ㅣ', 'ㅗ', 'ㅜ', 'ㅡ', 'ㅓ', and a mouth shape expressing a labial consonant. For facial-expression animation, a total of nine animations were produced by adding the highly useful expressions of interest, boredom, and pain to the six basic human emotions classified by Paul Ekman: surprise, fear, disgust, anger, happiness, and sadness. This study is meaningful in that it makes natural animation easy to produce with the features built into the modeling program, without complex technologies or external programs.

3D Character Production for Dialog Syntax-based Educational Contents Authoring System (대화구문기반 교육용 콘텐츠 저작 시스템을 위한 3D 캐릭터 제작)

  • Kim, Nam-Jae;Ryu, Seuc-Ho;Kyung, Byung-Pyo;Lee, Dong-Yeol;Lee, Wan-Bok
    • Journal of the Korea Convergence Society / v.1 no.1 / pp.69-75 / 2010
  • The importance of using visual media in English education has increased. Because characters matter in English-language content, greater effort is needed to show learners English pronunciation with a realistic implementation. In this paper, we review a dialogue syntax-based educational content authoring system. To make the lip-sync character more realistic, a 3D character was constructed to enhance the efficiency of education, using a chart that analyzes the association structure of mouth shapes. We produced an optimized 3D character through concept, modeling, mapping, and animation design. For more effective educational 3D character creation, future research will continue by adding hand and body motion to the character in order to demonstrate effective communication.

Multicontents Integrated Image Animation within Synthesis for High Quality Multimodal Video (고화질 멀티 모달 영상 합성을 통한 다중 콘텐츠 통합 애니메이션 방법)

  • Jae Seung Roh;Jinbeom Kang
    • Journal of Intelligence and Information Systems / v.29 no.4 / pp.257-269 / 2023
  • There is currently a burgeoning demand for image synthesis from photos and videos using deep learning models. Existing video synthesis models solely extract motion information from the provided video to generate animation effects on photos. However, these synthesis models encounter challenges in achieving accurate lip synchronization with the audio and maintaining the image quality of the synthesized output. To tackle these issues, this paper introduces a novel framework based on an image animation approach. Within this framework, upon receiving a photo, a video, and audio input, it produces an output that not only retains the unique characteristics of the individuals in the photo but also synchronizes their movements with the provided video, achieving lip synchronization with the audio. Furthermore, a super-resolution model is employed to enhance the quality and resolution of the synthesized output.

Development of a Lipsync Algorithm Based on Audio-visual Corpus (시청각 코퍼스 기반의 립싱크 알고리듬 개발)

  • 김진영;하영민;이화숙
    • The Journal of the Acoustical Society of Korea / v.20 no.3 / pp.63-69 / 2001
  • A corpus-based lip-sync algorithm for synthesizing natural face animation is proposed in this paper. To obtain the lip parameters, marks were attached to the speaker's face and their positions were extracted with image-processing methods. The spoken utterances were labeled with HTK, and prosodic information (duration, pitch, and intensity) was analyzed. An audio-visual corpus was constructed by combining the speech and image information; the basic unit in our approach is the syllable. Based on this audio-visual corpus, lip information represented by the marks' positions is synthesized: the best syllable units are selected from the corpus, and the visual information of the selected units is concatenated. Obtaining the best units involves two processes: selecting the N-best candidates for each syllable, and selecting the smoothest unit sequence, which is done with a Viterbi decoding algorithm. For these processes, two distance measures between syllable units are proposed: a phonetic-environment distance and a prosody distance. Computer simulation showed that the proposed algorithm performs well; in particular, pitch and intensity information proved to be as important as duration information for lip sync.
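
The two-stage selection described here (N-best candidates per syllable, then a Viterbi search for the smoothest sequence) is a shortest-path dynamic program over candidate units. A generic sketch follows; the simple numeric costs stand in for the paper's phonetic-environment and prosody distances, and all names are illustrative:

```python
def viterbi_select(candidates, target_cost, join_cost):
    """Pick one unit per slot, minimizing the sum of target costs
    (how well a unit fits its syllable slot) plus join costs between
    consecutive chosen units (smoothness of the concatenation).

    candidates: list of lists; candidates[i] holds the N-best units
    for syllable i.
    """
    # best[i][j] = (cumulative cost, backpointer) for candidates[i][j]
    best = [[(target_cost(0, u), None) for u in candidates[0]]]
    for i in range(1, len(candidates)):
        row = []
        for u in candidates[i]:
            prev_costs = [
                best[i - 1][k][0] + join_cost(candidates[i - 1][k], u)
                for k in range(len(candidates[i - 1]))
            ]
            k_best = min(range(len(prev_costs)), key=prev_costs.__getitem__)
            row.append((prev_costs[k_best] + target_cost(i, u), k_best))
        best.append(row)
    # backtrack from the cheapest final state
    j = min(range(len(best[-1])), key=lambda j: best[-1][j][0])
    path = [j]
    for i in range(len(best) - 1, 0, -1):
        j = best[i][j][1]
        path.append(j)
    path.reverse()
    return [candidates[i][j] for i, j in enumerate(path)]

# Toy usage: units are numbers, target cost is distance to the desired
# value, join cost penalizes jumps between consecutive units.
candidates = [[1, 5], [2, 9], [3, 4]]
desired = [1, 2, 3]
chosen = viterbi_select(candidates,
                        lambda i, u: abs(u - desired[i]),
                        lambda a, b: 0.1 * abs(a - b))
# chosen == [1, 2, 3]
```

In the paper's setting, each "unit" carries the lip-mark trajectories of one recorded syllable, and the two cost functions are the phonetic-environment and prosody distances.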


A Study on a 3D Design System for Korean Lip Sync (한국어 립씽크를 위한 3D 디자인 시스템 연구)

  • Shin, Dong-Sun;Chung, Jin-Oh
    • Proceedings of HCI Korea (한국HCI학회 학술대회논문집) / 2006.02b / pp.362-369 / 2006
  • We studied a Korean lip-sync synthesis scheme for 3D graphics and developed a design system that automatically generates natural lip sync corresponding to speech sounds. Facial animation can be broadly divided into emotional expression (facial-expression animation) and dialogue animation centered on changes in lip shape during speech. Expression animation consists of nearly universal elements apart from minor cultural differences, whereas dialogue animation must account for differences between languages. Consequently, applying speech-driven lip-sync synthesis methods proposed for English or Japanese directly to Korean can distort perception through a mismatch between auditory and visual information. To address this problem, we developed a Korean lip-sync synthesis system that generates 3D dialogue animation from text and speech, by converting written text into a Korean pronunciation sequence, time-segmenting the input speech with an HMM algorithm, and defining the 3D movement of facial feature points for each Korean phoneme, and applied it to an actual character design process. Beyond immediately applicable 3D character animation, this work is also preliminary research that can serve as a component technology for dynamic interfaces using avatars: it has a dual character, applicable both to video design using 3D graphics and to HCI. Human communication consists of verbal dialogue and visual facial expression, so applying facial animation yields a more human mode of communication. Ultimately, the system can be widely used in avatar-based interface design and virtual reality, fields expected to shift toward more human, conversational interfaces that emphasize interactivity.
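
The pipeline this abstract describes (text converted to a pronunciation sequence, speech time-segmented per phoneme, each phoneme mapped to feature-point targets) can be sketched as a keyframe generator. The alignment below is assumed input (the paper obtains it with an HMM); all names, values, and the single feature-point channel are illustrative:

```python
# Assumed output of forced alignment: (phoneme, start_sec, end_sec).
# In the paper this alignment comes from an HMM; here it is hard-coded.
alignment = [("a", 0.00, 0.18), ("n", 0.18, 0.30), ("o", 0.30, 0.55)]

# Illustrative phoneme -> mouth-opening target for one feature point
# (e.g. jaw height), standing in for the paper's 3D feature-point moves.
MOUTH_TARGET = {"a": 1.0, "n": 0.2, "o": 0.7}

def alignment_to_keyframes(alignment, targets, rest=0.0):
    """Emit (time, value) keyframes: each phoneme's target is reached
    at the midpoint of its interval, returning to rest at the end."""
    keys = [(0.0, rest)]
    for phoneme, start, end in alignment:
        mid = (start + end) / 2.0
        keys.append((mid, targets.get(phoneme, rest)))
    keys.append((alignment[-1][2], rest))
    return keys

keyframes = alignment_to_keyframes(alignment, MOUTH_TARGET)
```

An animation system would then interpolate between these keyframes per frame, exactly the role the morphing step plays in the systems above.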


Face Animation Editor for the Korean Lip_Sync and Face Expression (한글 입술 움직임과 얼굴 표정 동기화를 위한 얼굴 애니메이션 편집기)

  • 송미영;조형제
    • Proceedings of the Korea Multimedia Society Conference / 2000.11a / pp.451-454 / 2000
  • This paper presents a 3D facial animation editor that synchronizes lip movements with facial expressions: it automatically generates lip movements suited to the Korean pronunciation of an input Hangul word and can also generate a facial expression appropriate to the word. In the editor, facial expressions are created by adjusting weights on the muscles of each facial region, defined with a muscle-based model, while lip movements are driven by text: predefined mouth shapes for each phoneme are displayed in sequence. The editor can thus automatically generate the six basic facial expressions and edit the muscle movements of each facial region to fit an input word. The generated expressions can be stored and managed in a database, and during computer dialogue the lip movements and facial expression appropriate to an input word are synchronized automatically, producing natural 3D facial animation.
