• Title/Summary/Keyword: lip motion


Upper lip tie wrapping into the hard palate and anterior premaxilla causing alveolar hypoplasia

  • Heo, Woong;Ahn, Hee Chang
    • Archives of Craniofacial Surgery, v.19 no.1, pp.48-50, 2018
  • Bony anomalies caused by lip tie have rarely been reported; one prior case described an upper lip tie wrapping into the anterior premaxilla. We present a case of severe upper lip tie with limited lip motion, inward curling of the upper lip, and alveolar hypoplasia. A male patient, born on June 3, 2016, had a deep philtral sulcus, a low vermilion border, and a deep Cupid's bow of the upper lip due to tension from a short, stout, and very tight frenulum. His upper lip motion was severely restricted, particularly lip eversion. There was anterior alveolar hypoplasia with a deep sulcus in the anterior maxilla. Resection of the frenulum cord with Z-plasty was performed at the anterior premaxilla and upper lip sulcus. The frenulum was tightly attached to the gingiva, passing through the gum and into the hard palate; the cord was about 1 cm wide and about 3 cm long. After surgery, the patient regained upper lip contour, including the Cupid's bow and a normal vermilion border. This is a severe case of upper lip tie in a child, showing premaxillary hypoplasia and abnormal lip motion and contour. Although upper lip tie causes only mild limitation of feeding, early detection and treatment are needed to correct bony growth.

Coarticulation Model of Hangul Visual Speech for Lip Animation (입술 애니메이션을 위한 한글 발음의 동시조음 모델)

  • Gong, Gwang-Sik;Kim, Chang-Heon
    • Journal of KIISE:Computer Systems and Theory, v.26 no.9, pp.1031-1041, 1999
  • Existing lip animation methods for Hangul define a few representative mouth shapes for phonemes and interpolate between them. Because the real motion of the lips during articulation is neither a linear nor a simple nonlinear function, interpolation cannot generate phoneme lip motion effectively; and because coarticulation is ignored, the lip motion that varies between phonemes cannot be represented either. In this paper we present a coarticulation model for natural lip animation of Hangul. Using two video cameras, we film a speaker's lips during articulation and extract lip motion control parameters. Each control parameter is defined as a dominance function, following Löfqvist's speech production gesture theory, that approximates the real lip motion of a phoneme and is used when the lips are animated. The dominance functions are combined through blending functions using a demi-syllable-based Hangul composition rule, producing pronunciation with coarticulation applied. The resulting model therefore approximates real lip motion more closely than the existing interpolation-based methods and exhibits motion that accounts for coarticulation.
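
A minimal Python sketch of the dominance-function blending idea this abstract describes: each phoneme contributes a target lip parameter weighted by a time-decaying dominance, and the weighted average gives the coarticulated trajectory. The exponential dominance shape and all numeric targets below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def dominance(t, center, alpha, theta):
    """Dominance of one phoneme's lip target at time t (exponential decay)."""
    return alpha * np.exp(-theta * np.abs(t - center))

def blend_lip_parameter(t, segments):
    """Weighted average of phoneme targets; weights are dominance values."""
    num = sum(dominance(t, c, a, th) * target for (c, a, th, target) in segments)
    den = sum(dominance(t, c, a, th) for (c, a, th, _) in segments)
    return num / den

# Hypothetical sequence: (center time, strength, decay rate, lip-opening target)
segments = [(0.10, 1.0, 20.0, 0.8),   # open vowel
            (0.25, 1.0, 30.0, 0.1),   # bilabial closure
            (0.40, 1.0, 20.0, 0.6)]   # mid vowel

for t in np.linspace(0.0, 0.5, 6):
    print(f"t={t:.2f}s  lip opening ~ {blend_lip_parameter(t, segments):.3f}")
```

Because neighboring dominance functions overlap in time, each phoneme's target pulls on its neighbors' lip shapes, which is exactly the coarticulation effect that plain keyframe interpolation misses.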

Real-Time Speaker Close-Up System Using Lip Motion Information (입술 움직임 정보를 이용한 실시간 화자 클로즈업 시스템 구현)

  • 권혁봉;장언동;윤태승;안재형
    • Journal of Korea Multimedia Society, v.4 no.6, pp.510-517, 2001
  • In this paper, we implement a real-time speaker close-up system that uses lip motion information from input images containing several people. After the speaker is detected in moving pictures captured by one color CCD camera, a second camera closes up on the speaker using the lip motion information. The system detects each person's face and lip area by means of facial color and morphological information, then identifies the speaker from the variation of the lip area. A PTZ (pan/tilt/zoom) camera, controlled through an RS-232C serial port, is used to close up on the detected speaker. Consequently, the system accurately detects the speaker in moving pictures containing more than three people.
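
A minimal sketch of the speaker-selection rule described above, assuming lip regions have already been segmented per person: the speaker is taken to be the person whose lip area varies most over recent frames. The window size and the synthetic data are illustrative assumptions; face detection and PTZ serial control are out of scope here.

```python
import numpy as np

def pick_speaker(lip_areas, window=15):
    """lip_areas: array of shape (num_people, num_frames) holding lip-region
    pixel counts. Returns the index of the person whose lip area has the
    largest variance over the most recent `window` frames."""
    recent = lip_areas[:, -window:]
    return int(np.argmax(recent.var(axis=1)))

# Hypothetical data: person 1 is talking (lip area fluctuates), others are not.
rng = np.random.default_rng(0)
quiet = 200 + rng.normal(0, 2, size=(1, 30))
talking = 200 + 40 * np.abs(np.sin(np.linspace(0, 6, 30))) + rng.normal(0, 2, (1, 30))
areas = np.vstack([quiet, talking, quiet + 1])
print("speaker index:", pick_speaker(areas))  # expected: 1
```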


Support Vector Machine Based Phoneme Segmentation for Lip Synch Application

  • Lee, Kun-Young;Ko, Han-Seok
    • Speech Sciences, v.11 no.2, pp.193-210, 2004
  • In this paper, we develop a real-time lip-synch system that drives a 2-D avatar's lip motion in synch with an incoming speech utterance. To achieve real-time operation, we bound the processing time by invoking merge and split procedures that perform coarse-to-fine phoneme classification. At each stage of classification, we apply a support vector machine (SVM) to reduce the computational load while retaining the desired accuracy. The coarse-to-fine classification is accomplished via two stages of feature extraction: first, each speech frame is acoustically analyzed into 3 classes of lip opening using Mel-frequency cepstral coefficients (MFCC) as features; second, each frame's classification is refined into a detailed lip shape using formant information. We implemented the system with 2-D lip animation, which shows the effectiveness of the proposed two-stage procedure for real-time lip synch. The method of phoneme merging with SVMs achieved about twice the recognition speed of a hidden Markov model (HMM) approach: the typical latency per frame was on the order of 18.22 milliseconds, versus about 30.67 milliseconds for an HMM applied under identical conditions.
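
A minimal scikit-learn sketch of the coarse-to-fine SVM idea described above: a first SVM assigns each frame to one of 3 coarse lip-opening classes, and a per-class second SVM refines it to a detailed lip shape. The random features below are stand-ins for the paper's MFCC and formant features.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_mfcc = rng.normal(size=(300, 13))          # stand-in for MFCC features
X_formant = rng.normal(size=(300, 2))        # stand-in for formant features
y_coarse = rng.integers(0, 3, size=300)      # 3 coarse lip-opening classes
y_fine = y_coarse * 2 + rng.integers(0, 2, size=300)  # 2 lip shapes per class

# Stage 1: coarse classifier on acoustic features.
coarse_svm = SVC(kernel="rbf").fit(X_mfcc, y_coarse)
# Stage 2: one refining classifier per coarse class, on formant features.
fine_svms = {c: SVC(kernel="rbf").fit(X_formant[y_coarse == c],
                                      y_fine[y_coarse == c])
             for c in range(3)}

def classify_frame(mfcc, formant):
    c = int(coarse_svm.predict(mfcc.reshape(1, -1))[0])
    return int(fine_svms[c].predict(formant.reshape(1, -1))[0])

print("fine lip-shape class:", classify_frame(X_mfcc[0], X_formant[0]))
```

Restricting the second stage to the frames of one coarse class is what keeps the per-frame cost low: each refining SVM sees a much smaller, easier problem than a single flat classifier over all lip shapes would.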


Surgical Correction of Whistle Deformity Using Cross-Muscle Flap in Secondary Cleft Lip

  • Choi, Woo Young;Yang, Jeong Yeol;Kim, Gyu Bo;Han, Yun Ju
    • Archives of Plastic Surgery, v.39 no.5, pp.470-476, 2012
  • Background The whistle deformity is one of the common sequelae of secondary cleft lip deformities. Santos reported using a crossed-denuded flap in primary cleft lip repair to prevent vermilion notching. The authors modified this technique to correct the whistle deformity, calling their version the cross-muscle flap. Methods From May 2005 to January 2011, 14 patients with secondary unilateral cleft lip were treated. All suffered from a whistle deformity, which is characterized by deficiency of the central tubercle, notching in the upper lip, and bulging of the lateral segment. The mean age of the patients was 13.8 years and the mean follow-up period was 21.8 weeks. After elevation from the lateral vermilion and the medial tubercle, two muscle flaps were crossed and turned over. The authors measured three vertical heights and compared two height ratios before and after surgery to evaluate the postoperative results. Results None of the patients had any notable complications, and the whistle deformity was corrected in all cases. The vertical height ratios at the midline of the upper lip and at the affected Cupid's bow point were increased (P<0.05). The motion of the upper lip was acceptable. Conclusions The cross-muscle flap is simple and leaves a minimal scar on the lip. We were able to correct the whistle deformity in secondary unilateral cleft lip patients with a single-stage procedure using a cross-muscle flap.

Monosyllable Speech Recognition through Facial Movement Analysis (안면 움직임 분석을 통한 단음절 음성인식)

  • Kang, Dong-Won;Seo, Jeong-Woo;Choi, Jin-Seung;Choi, Jae-Bong;Tack, Gye-Rae
    • The Transactions of The Korean Institute of Electrical Engineers, v.63 no.6, pp.813-819, 2014
  • The purpose of this study was to extract accurate facial movement parameters using a 3-D motion capture system for lip-reading-based speech recognition. Instead of features obtained from conventional camera images, the 3-D motion system was used to obtain quantitative data on actual facial movements and to analyze 11 variables that exhibit particular patterns, such as nose, lip, jaw, and cheek movements, during monosyllable vocalization. Fourteen subjects, all in their 20s, were asked to vocalize 11 types of Korean vowel monosyllables three times each, with 36 reflective markers on their faces. The facial movement data were converted into 11 parameters and represented as patterns for each monosyllable vocalization. The parameter patterns were then used to train and recognize each monosyllable with speech recognition algorithms based on a hidden Markov model (HMM) and the Viterbi algorithm. The recognition accuracy for the 11 monosyllables was 97.2%, which suggests the possibility of recognizing spoken Korean through quantitative facial movement analysis.
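
A minimal sketch of the Viterbi decoding step in the HMM-based recognition described above. The tiny two-state model and discretized observations are illustrative assumptions; in the paper the observations are the 11 facial movement parameters, and recognition would score one trained HMM per monosyllable and pick the highest-likelihood model.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden-state path for an observation sequence (log domain).
    pi: initial probabilities (N,); A: transitions (N, N); B: emissions (N, M)."""
    T, N = len(obs), len(pi)
    delta = np.log(pi) + np.log(B[:, obs[0]])
    psi = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + np.log(A)   # scores[i, j]: from state i to j
        psi[t] = scores.argmax(axis=0)        # best predecessor of each state
        delta = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):             # backtrack through predecessors
        path.append(int(psi[t, path[-1]]))
    return path[::-1], float(delta.max())

pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])        # P(observation | state)
path, loglik = viterbi([0, 0, 1, 1], pi, A, B)
print("state path:", path, " log-likelihood:", loglik)
```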

Lip Reading Method Using CNN for Utterance Period Detection (발화구간 검출을 위해 학습된 CNN 기반 입 모양 인식 방법)

  • Kim, Yong-Ki;Lim, Jong Gwan;Kim, Mi-Hye
    • Journal of Digital Convergence, v.14 no.8, pp.233-243, 2016
  • Due to speech recognition problems in noisy environments, audio-visual speech recognition (AVSR) systems, which combine speech information with visual information, have been proposed since the mid-1990s, and lip reading has played a significant role in AVSR systems. This study aims to enhance the recognition rate of uttered words using only lip shape detection, toward an efficient AVSR system. After preprocessing for lip region detection, convolutional neural network (CNN) techniques are applied for utterance period detection and lip shape feature vector extraction, and hidden Markov models (HMMs) are then used for recognition. The utterance period detection achieves a 91% success rate, higher than general threshold methods. In lip reading recognition, the user-dependent experiment records an 88.5% recognition rate and the user-independent experiment an 80.2% rate, improved results compared to previous studies.
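
A minimal PyTorch sketch of a CNN front end of the kind described above: it maps a cropped grayscale lip frame to a speaking/silent score (for utterance period detection) and a feature vector that an HMM back end could consume. The 32x32 crop and layer sizes are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class LipCNN(nn.Module):
    def __init__(self, feat_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 16 -> 8
        )
        self.feat = nn.Linear(32 * 8 * 8, feat_dim)   # lip-shape feature vector
        self.speak = nn.Linear(feat_dim, 2)           # speaking vs. silent logits

    def forward(self, x):
        h = self.conv(x).flatten(1)
        f = torch.relu(self.feat(h))
        return f, self.speak(f)

frames = torch.randn(8, 1, 32, 32)         # a batch of cropped lip frames
features, speak_logits = LipCNN()(frames)
print(features.shape, speak_logits.shape)  # [8, 64] and [8, 2]
```

Thresholding the speaking/silent logits over time yields the utterance period, and the per-frame feature vectors within that period form the observation sequence for the HMM recognizer.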

Effects of an Articulator Strength Training Program Using IOPI on the Articulation Abilities of Patients with Spastic Dysarthria (IOPI를 활용한 조음기관 훈련 프로그램이 경직형 마비말장애의 조음 능력에 미치는 영향)

  • Lee, Jang-Shin;Lee, Ji-Yun;Kim, Sun-Hee
    • Therapeutic Science for Rehabilitation, v.9 no.1, pp.91-99, 2020
  • Objective: This study investigated the effects of an IOPI articulator strength training program in patients with spastic dysarthria on articulator (tongue and lip) muscle strength, the number of accurate /l, s, ʨ/ (ㄹ, ㅅ, ㅈ) articulations, articulatory counts, and articulation regularity and accuracy in alternate motion rate and sequential motion rate tasks. Methods: Three patients with spastic dysarthria living in Jeju, Korea, were included. A single-subject design was used to study changes in each of these measures. Results: After the articulator strength training program, there were positive changes in articulator muscle strength, the number of accurate articulations, articulatory counts, and articulation regularity and accuracy in both the alternate and sequential motion rate tasks. Conclusion: Our findings suggest that the IOPI articulator strength training program could be very useful, especially for children with cerebral palsy, the most representative group, if conducted across various subtypes of dysarthric patients and linked with articulatory function training using IOPI at home.

3D Facial Synthesis and Animation for Facial Motion Estimation (얼굴의 움직임 추적에 따른 3차원 얼굴 합성 및 애니메이션)

  • Park, Do-Young;Shim, Youn-Sook;Byun, Hye-Ran
    • Journal of KIISE:Software and Applications, v.27 no.6, pp.618-631, 2000
  • In this paper, we suggest a method of 3-D facial synthesis driven by the motion in 2-D facial images. We use an optical-flow-based method for motion estimation: parameterized motion vectors are extracted from the optical flow between adjacent frames in order to estimate the facial features and facial motion in the 2-D image sequence. The parameters of these motion vectors are then combined to estimate facial motion information. We use parameterized vector models tied to the facial features, covering three regions: the eye area, the lip-eyebrow area, and the face area. Combining the 2-D facial motion information with the action units of a 3-D facial model, we synthesize and animate the 3-D facial model.
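
A minimal OpenCV sketch of the optical-flow step described above: dense flow between two consecutive frames, averaged inside a lip-area box to give one 2-D motion parameter. Farneback flow and the synthetic frames stand in for the paper's parameterized motion-vector models; the bounding box is a hypothetical lip region.

```python
import cv2
import numpy as np

# Two synthetic grayscale frames: a bright "lip" blob that shifts 3 px downward.
prev = np.zeros((240, 320), np.uint8)
curr = np.zeros((240, 320), np.uint8)
cv2.ellipse(prev, (160, 150), (30, 12), 0, 0, 360, 255, -1)
cv2.ellipse(curr, (160, 153), (30, 12), 0, 0, 360, 255, -1)

# Dense Farneback optical flow: flow[y, x] = (dx, dy) per pixel.
flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)

y0, y1, x0, x1 = 130, 170, 125, 195      # hypothetical lip bounding box
lip_motion = flow[y0:y1, x0:x1].reshape(-1, 2).mean(axis=0)
print("mean lip motion (dx, dy):", lip_motion)   # dy should come out near +3
```

In a full pipeline, such region-averaged motion parameters for the eye, lip-eyebrow, and face areas would be mapped onto the 3-D model's action units each frame.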


3D Character Production for Dialog Syntax-based Educational Contents Authoring System (대화구문기반 교육용 콘텐츠 저작 시스템을 위한 3D 캐릭터 제작)

  • Kim, Nam-Jae;Ryu, Seuc-Ho;Kyung, Byung-Pyo;Lee, Dong-Yeol;Lee, Wan-Bok
    • Journal of the Korea Convergence Society, v.1 no.1, pp.69-75, 2010
  • The importance of using visual media in English education has been increasing. Because characters play an important role in English-language content, more effort is needed to show learners English pronunciation through a realistic implementation. In this paper, we review a dialog syntax-based educational contents authoring system and, for more realistic lip-sync, construct a 3D character that enhances the efficiency of education. Using a chart that analyzes the association structure of mouth shapes, we produced an optimized 3D character through concept, modeling, mapping, and animation design stages. For more effective educational 3D character creation, future research will add hand motion and body motion to the character in order to demonstrate effective communication.