• Title/Summary/Keyword: lip position (입술의 위치)


Lip Detection using Color Distribution and Support Vector Machine for Visual Feature Extraction of Bimodal Speech Recognition System (바이모달 음성인식기의 시각 특징 추출을 위한 색상 분석자 SVM을 이용한 입술 위치 검출)

  • 정지년;양현승
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.4
    • /
    • pp.403-410
    • /
    • 2004
  • Bimodal speech recognition systems have been proposed to enhance the recognition rate of ASR in noisy environments. Visual feature extraction is essential to developing such systems, and extracting visual features requires detecting the exact lip position. This paper proposes a method that detects the lip position using a color similarity model and an SVM. The face/lip color distribution is learned and used to find an initial lip position; the exact lip position is then detected by scanning the neighboring area with the SVM. Experiments show that this method detects the lip position accurately and quickly.
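The two-stage idea above (learn a color distribution, then refine with an SVM) reduces at its core to a classifier that separates lip-colored pixels from skin-colored ones. The sketch below is a minimal, numpy-only linear SVM trained by hinge-loss subgradient descent on hypothetical chromaticity values; the paper's actual color model, features, and kernel are not specified here.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.05, epochs=500, seed=0):
    """Linear SVM via hinge-loss subgradient descent (L2-regularized).
    X: (n, d) color features; y: labels in {-1, +1}."""
    rng = np.random.default_rng(seed)
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            if y[i] * (X[i] @ w + b) < 1:        # margin violated: hinge step
                w = (1 - lr * lam) * w + lr * y[i] * X[i]
                b += lr * y[i]
            else:                                # only regularization shrinks w
                w = (1 - lr * lam) * w
    return w, b

# Hypothetical toy chromaticity features (r, g): lip pixels redder than skin.
lip  = np.array([[0.55, 0.22], [0.60, 0.20], [0.58, 0.21]])
skin = np.array([[0.40, 0.34], [0.38, 0.36], [0.42, 0.33]])
X = np.vstack([lip, skin])
y = np.array([1, 1, 1, -1, -1, -1])
w, b = train_linear_svm(X, y)
```

In the full pipeline, such a classifier would be evaluated over a window scanned around the initial color-based estimate, keeping the highest-scoring position.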

Lip Recognition Using Active Shape Model and Shape-Based Weighted Vector (능동적 형태 모델과 가중치 벡터를 이용한 입술 인식)

  • 장경식
    • Journal of Intelligence and Information Systems
    • /
    • v.8 no.1
    • /
    • pp.75-85
    • /
    • 2002
  • In this paper, we propose an efficient method for recognizing lips. The lip is localized using its shape and the pixel values around the lip contour. The lip shape is represented by a statistically based active shape model that learns typical lip shapes from a training set. Because this model is sensitive to its initial position, we use the boundary between the upper and lower lip as the initial position for the lip search. That boundary is localized using a weighted vector based on the lip's shape. Experiments performed on many images show very encouraging results.

  • PDF
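The statistically based shape model mentioned above is typically built by running PCA over aligned landmark vectors: the mean shape plus a few principal modes spans the space of typical lip shapes. A minimal sketch, assuming toy landmark data (the real training set and the alignment step are omitted):

```python
import numpy as np

def build_shape_model(shapes, n_modes=1):
    """PCA over aligned landmark vectors: returns the mean shape and the top
    n_modes eigenvectors of the shape covariance (the modes of variation)."""
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    cov = centered.T @ centered / (len(shapes) - 1)
    vals, vecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:n_modes]    # keep the largest modes
    return mean, vecs[:, order]

def synthesize(mean, modes, b):
    """A shape instance: x = mean + P @ b, with b the shape parameters."""
    return mean + modes @ b

# Hypothetical aligned lip landmarks (x, y interleaved), varying only in scale.
base = np.array([0.0, 0.0, 1.0, 0.5, 2.0, 0.0, 1.0, -0.5])
shapes = np.stack([0.9 * base, 1.0 * base, 1.1 * base])
mean, modes = build_shape_model(shapes)
b = modes.T @ (shapes[0] - mean)     # project a training shape onto the model
recon = synthesize(mean, modes, b)
```

Constraining `b` to a few standard deviations of each mode is what keeps an ASM search producing only plausible lip shapes.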

Lip Shape Representation and Lip Boundary Detection Using Mixture Model of Shape (형태계수의 Mixture Model을 이용한 입술 형태 표현과 입술 경계선 추출)

  • Jang Kyung Shik;Lee Imgeun
    • Journal of Korea Multimedia Society
    • /
    • v.7 no.11
    • /
    • pp.1531-1539
    • /
    • 2004
  • In this paper, we propose an efficient method for locating human lips. A lip shape model is built based on the Point Distribution Model and Principal Component Analysis, and the lip boundary model is represented by a concatenated gray-level distribution model. The distribution of shape parameters is modeled with a Gaussian mixture. Locating the lip is thereby reduced to minimizing a matching objective function. The Downhill Simplex algorithm performs the minimization, with the Gaussian mixture used to set the initial condition and refine the estimate of the lip shape parameters, which keeps the iteration from converging to local minima. Experiments performed on many images show very encouraging results.

  • PDF
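The minimization step described above, Downhill Simplex over shape parameters with a Gaussian prior keeping estimates plausible, can be sketched with SciPy's Nelder-Mead implementation of the simplex method. The objective below is a stand-in: a quadratic "image match" term plus a Gaussian shape prior with hypothetical values, not the paper's actual matching function.

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in matching objective: a quadratic "image match" pulling the shape
# parameters b toward a target, plus a Gaussian prior (values hypothetical).
target_b = np.array([0.8, -0.3])
prior_var = np.array([1.0, 0.5])

def objective(b):
    match = np.sum((b - target_b) ** 2)      # image-boundary match term (toy)
    prior = np.sum(b ** 2 / prior_var)       # Gaussian shape prior
    return match + 0.1 * prior

# Downhill Simplex (Nelder-Mead) needs no gradients, which suits objectives
# built from sampled image profiles.
res = minimize(objective, x0=np.zeros(2), method="Nelder-Mead")
```

The derivative-free property is the point: the real objective samples gray levels along the model boundary, so no analytic gradient is available.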

Lip Shape Model and Lip Localization using Shape Clustering (형태 군집화를 이용한 입술 형태 모델과 입술 추출)

  • 장경식
    • Journal of Korea Multimedia Society
    • /
    • v.6 no.6
    • /
    • pp.1000-1007
    • /
    • 2003
  • In this paper, we propose an efficient method for locating lips. The lip shape is represented as a set of points based on the Point Distribution Model. The Isodata clustering algorithm is used to find clusters over all training data, and a lip shape model is calculated for each cluster using principal component analysis. A lip boundary model is computed over all training data from the pixel values around the lip boundary, and a cost function based on this boundary model decides whether a recognition result is correct. Because different models are used for different lip shapes, our method can correctly localize lips far from the mean shape. Experiments performed on many images show a correct recognition rate of 92%.

  • PDF
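The cluster-then-model idea above, one shape model per cluster, can be sketched with plain k-means as a simplified stand-in for Isodata (Isodata additionally splits and merges clusters; that is omitted here). The "shapes" below are hypothetical (width, height) vectors:

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Lloyd's k-means as a simplified stand-in for Isodata clustering.
    Deterministic init: evenly spaced samples as seeds (fine for toy data)."""
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].astype(float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = np.argmin(dists, axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Hypothetical (width, height) shape vectors: closed-mouth vs open-mouth groups.
closed = np.array([[1.00, 0.10], [1.10, 0.12], [0.90, 0.08]])
open_  = np.array([[1.00, 0.60], [1.10, 0.65], [0.95, 0.55]])
X = np.vstack([closed, open_])
labels, centers = kmeans(X, 2)
# A separate shape model (e.g. PCA) would now be fit within each cluster.
```

Fitting a separate PCA model inside each cluster is what lets shapes far from the global mean still be represented compactly.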

Level of perception of changed lip protrusion and asymmetry of the lower facial height (하안면부에서 입술의 돌출 정도와 안면 비대칭의 인지도에 관한 연구)

  • Kim, Kyu-Sun;Kim, Young-Jin;Lee, Keun-Hye;Kook, Yoon-Ah;Kim, Young-Ho
    • The korean journal of orthodontics
    • /
    • v.36 no.6
    • /
    • pp.434-441
    • /
    • 2006
  • Objective: While one of the most prevailing motivations for seeking orthodontic treatment is to achieve good facial esthetics, understanding a person's level of perception of the changes that occur on the face after orthodontic treatment is critical to orthodontic diagnosis and treatment planning. Methods: 40 art school students participated in determining their level of perception of changed lip position and facial asymmetry. Computer-graphic frontal face and facial profile photographs with balanced proportions were used to evaluate each participant's perception of changes in facial asymmetry and in lip position. Results: A change of lip position over 2 mm and a change of facial asymmetry over 3 mm were perceived significantly. Conclusion: The results indicate that at least a 2 mm change of lip position is needed to be perceived after orthodontic treatment. The level of perception of change in facial asymmetry was lower than that of change in lip position. Information about facial changes given prior to the evaluation enhanced the level of perception.

Geometric Correction of Lips Using Lip Information (입술정보를 이용한 입술모양의 기하학적 보정)

  • 황동국;박희정;전병민
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.29 no.6C
    • /
    • pp.834-841
    • /
    • 2004
  • Lips in an image can be geometrically transformed according to the location or pose of the camera and the speaker, and this transformation distorts the geometric information of the original lip shape. In this paper, we therefore propose a method that geometrically corrects lips, enhancing global lip information by using partial lip information. The method consists of two steps: feature decision and correction. In the first step, key points and features of the source image are extracted according to its lip model, and those of the target image are created according to its lip model. In the second step, the source and target images are each partitioned into four regions based on the information extracted in the previous step, a mapping relation is decided, and after mapping, the corrected sub-images are united into a result image. As experimental images, we use frames containing pronunciations of short Korean vowels, and lip symmetry is used to evaluate the proposed algorithm. In the experimental results, the correction rate was markedly higher for the lower lip than for the upper lip, and for lips moving a large amount than for lips moving little.
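The mapping step described above, relating key points of the source lip to those of the target model, can be illustrated with the simplest such mapping: a least-squares affine transform estimated from point correspondences. The key points below are hypothetical; the paper's own four-region partition and per-region mapping are not reproduced here.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine map: dst approx. [src, 1] @ M, with M of shape (3, 2)."""
    A = np.hstack([src, np.ones((len(src), 1))])
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M

def warp(points, M):
    """Apply the affine map to (n, 2) points."""
    return np.hstack([points, np.ones((len(points), 1))]) @ M

# Hypothetical key points: lip corners and top midpoint in a tilted source
# frame, and their desired positions in an upright target model.
src = np.array([[0.0, 0.0], [2.0, 0.5], [1.0, 1.2]])
dst = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 1.0]])
M = fit_affine(src, dst)
corrected = warp(src, M)
```

Applying one such transform per region, then stitching the four warped sub-images, yields the kind of piecewise correction the abstract describes.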

Lip Contour Extraction Using Active Shape Model Based on Energy Minimization (에너지 최소화 기반 능동형태 모델을 이용한 입술 윤곽선 추출)

  • Jang, Kyung-Shik
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.10 no.10
    • /
    • pp.1891-1896
    • /
    • 2006
  • In this paper, we propose an improved Active Shape Model for extracting lip contours. Lip deformation is modeled by a statistically deformable Active Shape Model. Because each point in an Active Shape Model is moved independently using local profile information, many errors may occur. To use global information, we define an energy function similar to that of the Active Contour Model, and points are moved to the positions at which the total energy is minimized. Experiments performed on many lip images from the Tulip 1 database show that our method extracts the lip shape more accurately than the traditional ASM.

A Study on Lip Detection based on Eye Localization for Visual Speech Recognition in Mobile Environment (모바일 환경에서의 시각 음성인식을 위한 눈 정위 기반 입술 탐지에 대한 연구)

  • Song, Min-Gyu;Pham, Thanh Trung;Kim, Jin-Young;Hwang, Sung-Taek
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.19 no.4
    • /
    • pp.478-484
    • /
    • 2009
  • Automatic speech recognition (ASR) is an attractive technology in an age that seeks a convenient life. Although many approaches to ASR have been proposed, performance is still poor in noisy environments. In the current state of the art, ASR therefore uses not only audio information but also visual information. In this paper, we present a novel lip detection method for visual speech recognition in a mobile environment. To apply visual information to speech recognition, exact lip regions must be extracted. Because eye detection is easier than lip detection, we first detect the positions of the left and right eyes and use them to roughly locate the lip region. We then apply K-means clustering to divide that region into groups, and the two lip corners and the lip center are detected by choosing the biggest of the clustered groups. Finally, we show the effectiveness of the proposed method through experiments on the Samsung AVSR database.
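The eyes-first strategy above rests on a geometric step: given the two eye positions, a rough lip region follows from face proportions. A minimal sketch, with the proportion constants chosen as plausible assumptions rather than values from the paper:

```python
import numpy as np

def lip_roi_from_eyes(left_eye, right_eye, width_scale=1.2, drop_scale=1.1,
                      height_scale=0.4):
    """Rough lip bounding box (x0, y0, x1, y1) from eye centers.
    The scale constants are plausible face-proportion assumptions, not values
    taken from the paper. Image y grows downward."""
    left_eye = np.asarray(left_eye, dtype=float)
    right_eye = np.asarray(right_eye, dtype=float)
    d = np.linalg.norm(right_eye - left_eye)          # inter-eye distance
    mid = (left_eye + right_eye) / 2                  # point between the eyes
    center = mid + np.array([0.0, drop_scale * d])    # mouth-center estimate
    half_w, half_h = width_scale * d / 2, height_scale * d
    return (center[0] - half_w, center[1] - half_h,
            center[0] + half_w, center[1] + half_h)

roi = lip_roi_from_eyes((40, 50), (80, 50))
```

K-means clustering on the pixel colors inside this box would then separate lip pixels from surrounding skin, as the abstract describes.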

Different Types of Encoding and Processing in Auditory Sensory Memory according to Stimulus Modality (자극양식에 따른 청감각기억에서의 여러가지 부호화방식과 처리방식)

  • Kim, Jeong-Hwan;Lee, Man-Young
    • The Journal of the Acoustical Society of Korea
    • /
    • v.9 no.4
    • /
    • pp.77-85
    • /
    • 1990
  • This study investigated Greene and Crowder's (1984) modified PAS model, according to which, in a short-term memory recall task, the recency and suffix effects in auditory and visual conditions are mediated by the same mechanisms. It also investigated whether auditory information and mouthed information are encoded by the same codes. Through experimental manipulation of the phonological nature of the stimuli, we examined the differential recall effect for consonant-varied and vowel-varied stimuli in the auditory and mouthing conditions, an effect supposed to interact with the recency and suffix effects. The results show that the differential recall effect between consonants and vowels exists only in the auditory condition, not in the mouthing condition. Thus, this result supported Turner.

  • PDF

Real Time Lip Reading System Implementation in Embedded Environment (임베디드 환경에서의 실시간 립리딩 시스템 구현)

  • Kim, Young-Un;Kang, Sun-Kyung;Jung, Sung-Tae
    • The KIPS Transactions:PartB
    • /
    • v.17B no.3
    • /
    • pp.227-232
    • /
    • 2010
  • This paper proposes a real-time lip reading method for the embedded environment. The embedded environment has limited resources compared to the PC environment, so an existing PC-based lip reading system is hard to run in real time on embedded hardware. To solve this problem, this paper proposes lip region detection, lip feature extraction, and spoken-word recognition methods suited to the embedded environment. First, the face region is detected using face color information to find the accurate lip region; the exact lip region is then detected by finding the positions of both eyes in the detected face region and using their geometric relations. To extract features robust to lighting variation in changing surroundings, histogram matching, lip folding, and the RASTA filter were applied, and features extracted using principal component analysis (PCA) were used for recognition. In tests on embedded hardware with an 806 MHz CPU and 128 MB RAM, the processing speed was between 1.15 and 2.35 s depending on the vocalization, and recognition accuracy was 77% (139 of 180 words recognized).
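Of the lighting-normalization steps listed above, histogram matching is the most self-contained: map the pixel-value distribution of an input frame onto that of a reference frame via their CDFs. A minimal numpy sketch (the paper's lip folding and RASTA filtering are not shown):

```python
import numpy as np

def match_histogram(src, ref):
    """Remap src pixel values so their empirical CDF matches ref's CDF."""
    src_vals, src_idx, src_counts = np.unique(
        src.ravel(), return_inverse=True, return_counts=True)
    ref_vals, ref_counts = np.unique(ref.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_counts) / src.size
    ref_cdf = np.cumsum(ref_counts) / ref.size
    mapped = np.interp(src_cdf, ref_cdf, ref_vals)   # invert ref's CDF
    return mapped[src_idx].reshape(src.shape)

# Toy frames: a dark patch matched to a brighter reference patch.
dark = np.array([[0, 1], [2, 3]])
bright = np.array([[10, 11], [12, 13]])
matched = match_histogram(dark, bright)
```

Normalizing each lip frame against a fixed reference this way reduces the lighting variation that PCA features would otherwise have to absorb.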