• Title/Summary/Keyword: lip information

Search Results: 195

Importance of various skin sutures in cheiloplasty of cleft lip

  • Kim, Soung Min
    • Journal of the Korean Association of Oral and Maxillofacial Surgeons
    • /
    • v.45 no.6
    • /
    • pp.374-376
    • /
    • 2019
  • Last week, after receiving the online issue of the Journal of the Korean Association of Oral and Maxillofacial Surgeons, we found a recently published original article by Alawode et al., entitled "A comparative study of immediate wound healing complications following cleft lip repair using either absorbable or non-absorbable skin sutures". Although this clinical article was well written and provided a great deal of information regarding suture materials in cleft lip repair, I would like to add a few comments on the importance of skin sutures during cheiloplasty in primary cleft lip or secondary revision patients, with representative figures.

Implementation of a Multimodal Controller Combining Speech and Lip Information (음성과 영상정보를 결합한 멀티모달 제어기의 구현)

  • Kim, Cheol;Choi, Seung-Ho
    • The Journal of the Acoustical Society of Korea
    • /
    • v.20 no.6
    • /
    • pp.40-45
    • /
    • 2001
  • In this paper, we implemented a multimodal system combining speech and lip information and evaluated its performance. We designed a speech recognizer using speech information and a lip recognizer using image information; both recognizers were based on an HMM recognition engine. As the combining method, we adopted late integration, in which the weighting ratio for speech and lip information is 8:2. Our multimodal recognition system was ported to the DARC system; that is, it was used to control Comdio of DARC. The interface between DARC and our system was implemented with a TCP/IP socket. The experimental results of controlling Comdio showed that lip recognition can serve as an auxiliary means for the speech recognizer by improving the recognition rate. We also expect that the multimodal system can be successfully applied to traffic information systems and CNS (Car Navigation System).

  • PDF
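
    The late-integration (decision fusion) step described above can be sketched as follows. The 8:2 weighting comes from the abstract; the class count and the log-likelihood scores are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    def late_integration(audio_scores, visual_scores, w_audio=0.8, w_visual=0.2):
        """Combine per-class log-likelihoods from independent audio and
        visual (lip) HMM recognizers with a fixed 8:2 weighting."""
        fused = w_audio * np.asarray(audio_scores) + w_visual * np.asarray(visual_scores)
        return int(np.argmax(fused))

    # Hypothetical log-likelihoods for three command words from each recognizer.
    audio = [-120.0, -95.0, -110.0]   # speech HMM favors class 1
    visual = [-40.0, -42.0, -35.0]    # lip HMM favors class 2
    print(late_integration(audio, visual))  # audio weight dominates → class 1
    ```

    Because the fusion happens after each recognizer has scored every class, either modality can be retrained or replaced without touching the other — the practical appeal of late over early integration.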

A Study on Enhancing the Performance of Detecting Lip Feature Points for Facial Expression Recognition Based on AAM (AAM 기반 얼굴 표정 인식을 위한 입술 특징점 검출 성능 향상 연구)

  • Han, Eun-Jung;Kang, Byung-Jun;Park, Kang-Ryoung
    • The KIPS Transactions:PartB
    • /
    • v.16B no.4
    • /
    • pp.299-308
    • /
    • 2009
  • AAM (Active Appearance Model) is an algorithm that extracts face feature points with statistical models of shape and texture information based on PCA (Principal Component Analysis). This method is widely used for face recognition, face modeling, and expression recognition. However, the detection performance of the AAM algorithm is sensitive to the initial value, and detection error increases when an input image differs greatly from the training data. In particular, the algorithm shows high accuracy for closed lips, but detection error increases for opened or deformed lips according to the user's facial expression. To solve these problems, we propose an improved AAM algorithm using lip feature points extracted by a new lip detection algorithm. In this paper, we select a searching region based on the face feature points detected by the AAM algorithm. Lip corner points are then extracted by Canny edge detection and histogram projection within the selected searching region. Next, the lip region is accurately detected by combining color and edge information of the lip in the searching region, which is adjusted based on the positions of the detected lip corners. On this basis, the accuracy and processing speed of lip detection are improved. Experimental results showed that the RMS (Root Mean Square) error of the proposed method was reduced by as much as 4.21 pixels compared to using the AAM algorithm alone.
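
    The histogram-projection step for locating lip corners can be sketched as below. The Canny stage is assumed to have already produced a binary edge map of the searching region; the toy edge map and the corner-row heuristic are illustrative assumptions, not the paper's exact procedure.

    ```python
    import numpy as np

    def lip_corners_from_edges(edge_map):
        """Locate left/right lip corners from a binary edge map of the lip
        searching region via histogram projection: sum edge pixels per column,
        then take the outermost active columns as the corner columns."""
        col_hist = edge_map.sum(axis=0)        # vertical projection histogram
        active = np.flatnonzero(col_hist > 0)  # columns containing edge pixels
        left_x, right_x = active[0], active[-1]
        # Corner row: mean row of the edge pixels in each corner column.
        left_y = int(np.flatnonzero(edge_map[:, left_x]).mean())
        right_y = int(np.flatnonzero(edge_map[:, right_x]).mean())
        return (left_x, left_y), (right_x, right_y)

    # Toy 5x7 edge map of an ellipse-like lip contour (1 = edge pixel).
    E = np.array([
        [0, 0, 1, 1, 1, 0, 0],
        [0, 1, 0, 0, 0, 1, 0],
        [1, 0, 0, 0, 0, 0, 1],
        [0, 1, 0, 0, 0, 1, 0],
        [0, 0, 1, 1, 1, 0, 0]])
    print(lip_corners_from_edges(E))  # → ((0, 2), (6, 2))
    ```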

Comparison of the 3D Digital Photogrammetry and Direct Anthropometry in Unilateral Cleft Lip Patients (일측성 구순열 환자에서 3차원 수치사진측량 스캔과 직접계측 방법의 비교)

  • Seok, Hyo Hyun;Kwon, Geun-Yong;Baek, Seung-Hak;Choi, Tae Hyun;Kim, Sukwha
    • Archives of Craniofacial Surgery
    • /
    • v.14 no.1
    • /
    • pp.11-15
    • /
    • 2013
  • Background: In cleft lip patients, the necessity of a thorough preoperative analysis of facial deformities before reconstruction is unquestioned. The surgical plan for a cleft lip patient is based on information gained from preoperative anthropometric evaluation. A variety of commercially available three-dimensional (3D) surface imaging systems have been introduced to plastic surgery for this purpose. However, few studies have been published on the soft tissue morphology of unrepaired cleft infants as described by these 3D surface imaging systems. Methods: The purpose of this study is to determine the accuracy of facial anthropometric measurements obtained through digital 3D photogrammetry and to compare them with direct anthropometry in unilateral cleft lip patients. We compared three dimensional measurements made on both sides: heminasal width, labial height, and transverse lip length. Results: The preoperative measurements were not significantly different for labial height on either side or for left heminasal width. Statistically significant differences were found for transverse lip length on both sides and for right heminasal width. Although half of the preoperative measurements were significantly different, the overall trends showed that average results were comparable. Conclusion: This is the first study in Korea to simultaneously compare digital 3D photogrammetry with traditional direct anthropometry in unilateral cleft lip patients. We hope this study can contribute to the methodological choices of researchers for proper surgical planning in the field of cleft lip reconstruction.

3D model for korean-japanese sign language image communication (한-일 수화 영상통신을 위한 3차원 모델)

  • ;;Yoshinao Aoki
    • Proceedings of the IEEK Conference
    • /
    • 1998.06a
    • /
    • pp.929-932
    • /
    • 1998
  • In this paper we propose a method of representing emotional expressions and lip shapes for sign language communication using a 3-dimensional model. First, we employ the action units (AU) of the facial action coding system (FACS) to display facial expressions. We then define 11 basic lip shapes and the sounding times of each component in a syllable in order to synthesize lip shapes more precisely for Korean characters. Experimental results show that the proposed method can be used efficiently for sign language image communication between different languages.

  • PDF
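
    Decomposing a Korean syllable into its components, as the per-component timing above requires, can be done with the standard Unicode arithmetic. The mapping from vowel index to one of the 11 basic lip shapes is not given in the abstract, so the table below is a hypothetical placeholder.

    ```python
    def decompose_hangul(ch):
        """Split a precomposed Hangul syllable into (onset, vowel, coda)
        indices using the standard Unicode formula (base U+AC00)."""
        code = ord(ch) - 0xAC00
        if not 0 <= code < 11172:
            raise ValueError("not a precomposed Hangul syllable")
        return code // 588, (code % 588) // 28, code % 28

    # Hypothetical mapping from some of the 21 vowel indices to basic
    # lip-shape IDs; the paper's actual 11-shape table is not reproduced here.
    VOWEL_TO_LIP_SHAPE = {0: 1, 4: 2, 8: 3, 13: 4, 18: 5, 20: 6}  # ㅏ ㅓ ㅗ ㅜ ㅡ ㅣ

    onset, vowel, coda = decompose_hangul('한')
    print(onset, vowel, coda)          # → 18 0 4  (ㅎ, ㅏ, ㄴ)
    print(VOWEL_TO_LIP_SHAPE[vowel])   # → 1
    ```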

Lip-reading System based on Bayesian Classifier (베이지안 분류를 이용한 립 리딩 시스템)

  • Kim, Seong-Woo;Cha, Kyung-Ae;Park, Se-Hyun
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.25 no.4
    • /
    • pp.9-16
    • /
    • 2020
  • Pronunciation recognition systems that use only video information and ignore voice information can be applied to various customized services. In this paper, we develop a system that applies a Bayesian classifier to distinguish Korean vowels via lip shapes in images. We extract feature vectors from the lip shapes of facial images and apply them to the designed machine learning model. Our experiments show that the system's recognition rate is 94% for the pronunciation of 'A', and the system's average recognition rate is approximately 84%, which is higher than that of the CNN tested for comparison. Our results show that our Bayesian classification method with feature values from lip region landmarks is efficient on a small training set. Therefore, it can be used for application development on limited hardware such as mobile devices.
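
    A Gaussian naive Bayes classifier over lip-landmark features, in the spirit of the system above, can be sketched as follows. The two-dimensional features (mouth aspect ratio, lip opening), the synthetic class means, and the vowel labels are illustrative assumptions, not the paper's data.

    ```python
    import numpy as np

    class GaussianNB:
        """Minimal Gaussian naive Bayes for vowel classification from
        lip-landmark feature vectors."""
        def fit(self, X, y):
            self.classes = np.unique(y)
            self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
            self.var = np.array([X[y == c].var(axis=0) + 1e-6 for c in self.classes])
            self.prior = np.array([np.mean(y == c) for c in self.classes])
            return self
        def predict(self, X):
            # log p(c|x) ∝ log prior + sum of per-feature Gaussian log-densities
            ll = -0.5 * (np.log(2 * np.pi * self.var)[None]
                         + (X[:, None] - self.mu) ** 2 / self.var[None]).sum(-1)
            return self.classes[np.argmax(ll + np.log(self.prior), axis=1)]

    # Synthetic 2-D features: [mouth aspect ratio, lip opening]; labels are vowels.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal([1.8, 0.9], 0.1, (50, 2)),   # 'A': wide, open
                   rng.normal([1.2, 0.2], 0.1, (50, 2))])  # 'I': spread, closed
    y = np.array(['A'] * 50 + ['I'] * 50)
    clf = GaussianNB().fit(X, y)
    print(clf.predict(np.array([[1.75, 0.85]])))  # → ['A']
    ```

    The per-class means, variances, and priors are the only parameters, which is why this kind of model remains competitive on small training sets and cheap enough for mobile hardware.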

Development of Automatic Lip-sync MAYA Plug-in for 3D Characters (3D 캐릭터에서의 자동 립싱크 MAYA 플러그인 개발)

  • Lee, Sang-Woo;Shin, Sung-Wook;Chung, Sung-Taek
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.18 no.3
    • /
    • pp.127-134
    • /
    • 2018
  • In this paper, we have developed an automatic lip-sync Maya plug-in that extracts Korean phonemes from voice data and Korean text information and produces high-quality 3D lip-sync animation using the separated phonemes. In the developed system, phoneme separation was classified into the 8 vowels and 13 consonants used in Korean, referring to the 49 phonemes provided by the Microsoft Speech API engine (SAPI). In addition, although vowels and consonants are pronounced with a variety of mouth shapes, the same viseme can be applied to those that look identical on the lips. Based on this, we developed the automatic lip-sync Maya plug-in in Python so that lip-sync animation can be generated automatically in a single pass.
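
    The core of such a plug-in is collapsing phonemes into shared visemes and emitting keyframes. The sketch below shows the idea; the phoneme-to-viseme table and the fixed frame step are hypothetical stand-ins, since the paper's actual 8-vowel/13-consonant table and Maya keyframing calls are not reproduced here.

    ```python
    # Hypothetical phoneme-to-viseme table: phonemes that look alike on the
    # lips collapse to one viseme ID.
    PHONEME_TO_VISEME = {
        'p': 'BMP', 'b': 'BMP', 'm': 'BMP',   # bilabials share closed lips
        'f': 'FV',  'v': 'FV',
        'a': 'AH',  'o': 'OH',  'u': 'OH',    # rounded vowels share one shape
    }

    def to_keyframes(phonemes, frame_step=3):
        """Map a phoneme sequence to (frame, viseme) keyframes, merging
        consecutive phonemes that resolve to the same viseme."""
        keys, last = [], None
        for i, ph in enumerate(phonemes):
            vis = PHONEME_TO_VISEME.get(ph, 'REST')
            if vis != last:
                keys.append((i * frame_step, vis))
                last = vis
        return keys

    print(to_keyframes(['m', 'a', 'p', 'b', 'u']))
    # → [(0, 'BMP'), (3, 'AH'), (6, 'BMP'), (12, 'OH')]
    ```

    Merging consecutive identical visemes ('p' followed by 'b' above) is what keeps the animation from chattering between identical mouth shapes.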

Lip Recognition Using Active Shape Model and Gaussian Mixture Model (Active Shape 모델과 Gaussian Mixture 모델을 이용한 입술 인식)

  • 장경식;이임건
    • Journal of KIISE:Software and Applications
    • /
    • v.30 no.5_6
    • /
    • pp.454-460
    • /
    • 2003
  • In this paper, we propose an efficient method for recognizing human lips. Based on a Point Distribution Model, a lip shape is represented as a set of points. We calculate a lip model and the distribution of shape parameters using Principal Component Analysis and a Gaussian mixture, respectively. The Expectation Maximization algorithm is used to determine the maximum likelihood parameters of the Gaussian mixture. The lip contour model is derived from the gray-value changes at each point and in the regions around it, and is used to search for the lip shape in an image. Experiments performed on many images show very encouraging results.
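
    The EM fit of a Gaussian mixture over shape parameters can be sketched in one dimension as follows. Treating the data as the first PCA shape coefficient, the two-component setup, and the min/max initialization are illustrative assumptions; the paper's mixture is over the full shape-parameter vector.

    ```python
    import numpy as np

    def em_gmm_1d(x, k=2, iters=50):
        """EM for a 1-D Gaussian mixture (e.g., over the first PCA shape
        coefficient of the point-distribution model)."""
        mu = np.array([x.min(), x.max()], dtype=float)[:k]  # spread-out init
        var = np.full(k, x.var())
        pi = np.full(k, 1.0 / k)
        for _ in range(iters):
            # E-step: responsibilities r[n, j] = p(component j | x_n)
            d = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
            r = d * pi
            r /= r.sum(axis=1, keepdims=True)
            # M-step: maximum-likelihood parameter updates
            nk = r.sum(axis=0)
            mu = (r * x[:, None]).sum(axis=0) / nk
            var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
            pi = nk / len(x)
        return mu, var, pi

    # Synthetic shape coefficients drawn from two well-separated modes.
    rng = np.random.default_rng(1)
    x = np.concatenate([rng.normal(-2, 0.5, 200), rng.normal(3, 0.5, 200)])
    mu, var, pi = em_gmm_1d(x)
    print(np.sort(mu).round(1))  # means recovered near -2 and 3
    ```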

Prenatal ultrasonographic diagnosis of cleft lip with or without cleft palate; pitfalls and considerations

  • Kim, Dong Wook;Chung, Seung-Won;Jung, Hwi-Dong;Jung, Young-Soo
    • Maxillofacial Plastic and Reconstructive Surgery
    • /
    • v.37
    • /
    • pp.24.1-24.5
    • /
    • 2015
  • Ultrasonographic examination is widely used to detect abnormal findings in prenatal screening. Cleft lip with or without cleft palate in the fetus can also be screened by ultrasonography. The presence of abnormal findings of the fetal lip or palate can be detected by imaging professionals; however, such findings may not be familiar to oral and maxillofacial surgeons. Oral and maxillofacial surgeons can use ultrasonographic imaging of fetal cleft lip with or without cleft palate to provide parents with information regarding treatment protocols and outcomes. Therefore, surgeons should also be able to identify abnormal details in the images in order to set up proper treatment planning after the birth of the fetus. We report two cases of cleft lip with or without cleft palate in which the official readings of prenatal ultrasonography were inconsistent with the actual facial structures identified after birth. Critical and practical points in fetal ultrasonographic diagnosis are also discussed.

Estimation of speech feature vectors and enhancement of speech recognition performance using lip information (입술정보를 이용한 음성 특징 파라미터 추정 및 음성인식 성능향상)

  • Min So-Hee;Kim Jin-Young;Choi Seung-Ho
    • MALSORI
    • /
    • no.44
    • /
    • pp.83-92
    • /
    • 2002
  • Speech recognition performance is severely degraded in noisy environments. One approach to coping with this problem is audio-visual speech recognition. In this paper, we discuss experimental results of bimodal speech recognition based on speech feature vectors enhanced using lip information. We tried various kinds of speech features, such as linear prediction coefficients, cepstrum, and log area ratio, for transforming lip information into speech parameters. The experimental results show that the cepstrum parameter is the best feature in terms of recognition rate. We also present desirable weighting values for audio and visual information depending on the signal-to-noise ratio.

  • PDF
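
    Transforming lip information into speech parameters, as described above, can be sketched as a least-squares linear map from lip feature vectors to speech feature vectors. The feature dimensions and the noiseless synthetic data are illustrative assumptions; the paper's actual estimator and features (cepstrum, LPC, log area ratio) are not reproduced here.

    ```python
    import numpy as np

    def fit_lip_to_speech_map(V, A):
        """Least-squares linear map W from lip feature vectors V (N x dv,
        plus a bias column) to speech feature vectors A (N x da)."""
        Vb = np.hstack([V, np.ones((len(V), 1))])
        W, *_ = np.linalg.lstsq(Vb, A, rcond=None)
        return W

    def estimate_speech(V, W):
        """Apply the learned map to new lip feature vectors."""
        return np.hstack([V, np.ones((len(V), 1))]) @ W

    # Synthetic training data: speech features depend linearly on lip features.
    rng = np.random.default_rng(0)
    V = rng.normal(size=(200, 3))        # e.g., lip width, height, area
    true_W = rng.normal(size=(3, 4))
    A = V @ true_W + 0.5                 # 4-dim "cepstrum-like" targets
    W = fit_lip_to_speech_map(V, A)
    err = np.abs(estimate_speech(V, W) - A).max()
    print(err < 1e-8)  # → True: the noiseless linear map is recovered exactly
    ```

    In a bimodal recognizer, such estimated speech features would then be fused with the noisy acoustic features using SNR-dependent weights, as the abstract suggests.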