• Title/Summary/Keyword: lip information


Geometric Correction of Lips Using Lip Information (입술정보를 이용한 입술모양의 기하학적 보정)

  • 황동국;박희정;전병민
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.29 no.6C
    • /
    • pp.834-841
    • /
    • 2004
  • Lips in an image can be geometrically transformed depending on the position and pose of the camera and the speaker, and this transformation distorts the geometric information of the original lip shape. In this paper we propose a method that geometrically corrects such transformed lips, using partial lip information to recover the global lip shape. The method consists of two steps: a feature-deciding step and a correcting step. In the former, key points and features of the source image are extracted according to its lip model, and the corresponding key points of the target image are generated from its lip model. In the latter, both the source and target images are partitioned into four regions based on the information extracted in the previous step, a mapping is decided for each region, and the corrected sub-images are merged into the result image. For the experiments, we use frames containing pronunciations of the short vowels of the Korean language and evaluate the proposed algorithm using lip symmetry. The results show that the correction rate improves more for the lower lip than for the upper lip, and more for lips with large motion than with small motion.
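
The abstract above describes a two-step correction: key points come from a lip model, the image is split into sub-regions, each region is mapped toward the target, and the pieces are reassembled. The snippet below is a minimal sketch of that idea, not the paper's exact partition: it assumes five hypothetical key points (lip corners, top, bottom, center) and warps four triangular sub-regions with OpenCV affine transforms.

```python
# Minimal sketch (illustrative, not the paper's exact partition): warp four
# triangular lip sub-regions from detected key points toward a target model
# and reassemble the corrected image.
import cv2
import numpy as np

def warp_lip_regions(lip_img, src_pts, dst_pts):
    """src_pts / dst_pts: dicts with hypothetical 'left', 'right', 'top',
    'bottom', 'center' (x, y) key points for the source and target lips."""
    h, w = lip_img.shape[:2]
    out = np.zeros_like(lip_img)
    # four sub-regions approximated by triangles around the lip center
    regions = [('left', 'top'), ('top', 'right'), ('right', 'bottom'), ('bottom', 'left')]
    for a, b in regions:
        src_tri = np.float32([src_pts[a], src_pts[b], src_pts['center']])
        dst_tri = np.float32([dst_pts[a], dst_pts[b], dst_pts['center']])
        M = cv2.getAffineTransform(src_tri, dst_tri)        # per-region mapping
        warped = cv2.warpAffine(lip_img, M, (w, h))
        mask = np.zeros((h, w), np.uint8)
        cv2.fillConvexPoly(mask, np.int32(dst_tri), 255)
        out[mask > 0] = warped[mask > 0]                    # unite sub-images
    return out
```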

Extraction of Lip Region using Chromaticity Transformation and Fuzzy Clustering (색도 변환과 퍼지 클러스터링을 이용한 입술영역 추출)

  • Kim, Jeong Yeop
    • Journal of Korea Multimedia Society
    • /
    • v.17 no.7
    • /
    • pp.806-817
    • /
    • 2014
  • Extracting the lip region is essential to lip reading, a field of image processing that derives meaningful information by analyzing lip movement in facial images. Many conventional extraction methods have been proposed. One type locates the lips using the geometric structure of the face; the other discriminates lip from skin regions using color information alone. The former is more complex but can also handle grayscale images; the latter is much simpler but struggles to separate lip and skin regions because their colors are so similar, and its accuracy is relatively low. Conventional analyses of color coordinate systems have mostly been tied to a specific lip-extraction scheme rather than to the coordinate system itself. This paper proposes a method for selecting an effective color coordinate system and a chromaticity transformation that discriminates the lip and skin regions.
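
As a rough illustration of clustering in a chromaticity space, the sketch below normalizes RGB to r-g chromaticity and runs a small NumPy fuzzy c-means; the choice of two clusters and of picking the most red-dominant center as the lip cluster are assumptions, not the paper's selected coordinate system or transform.

```python
# Illustrative sketch: fuzzy c-means on r-g chromaticity (the coordinate
# choice and the "most red-dominant cluster = lips" rule are assumptions).
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, iters=50, eps=1e-9):
    """X: (N, d) features. Returns cluster centers and membership matrix."""
    rng = np.random.default_rng(0)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + eps
        U = 1.0 / d ** (2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

def lip_mask(rgb_img):
    rgb = rgb_img.astype(np.float64) + 1e-6
    chroma = (rgb / rgb.sum(axis=2, keepdims=True))[..., :2]     # r, g chromaticity
    centers, U = fuzzy_cmeans(chroma.reshape(-1, 2), c=2)
    lip_cluster = int(np.argmax(centers[:, 0] - centers[:, 1]))  # red-dominant center
    return (np.argmax(U, axis=1) == lip_cluster).reshape(rgb_img.shape[:2])
```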

Lip Recognition Using Active Shape Model and Shape-Based Weighted Vector (능동적 형태 모델과 가중치 벡터를 이용한 입술 인식)

  • 장경식
    • Journal of Intelligence and Information Systems
    • /
    • v.8 no.1
    • /
    • pp.75-85
    • /
    • 2002
  • In this paper, we propose an efficient method for recognizing lips. The lip is localized using its shape and the pixel values around the lip contour. The lip shape is represented by a statistical active shape model that learns typical lip shapes from a training set. Because this model is sensitive to its initial position, we use the boundary between the upper and lower lip as the initial position for the lip search; this boundary is localized using a weighted vector based on the lip shape. Experiments on many images show very encouraging results.
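
The abstract does not spell out the weighted vector, so the snippet below is only a loose illustration of initializing from the boundary between the upper and lower lip: it scores the rows of a grayscale lip ROI by darkness, weighted toward the ROI middle, since the gap between closed lips is usually the darkest row.

```python
# Loose illustration (not the paper's weighted vector): pick the darkest
# row of a grayscale lip ROI, weighted toward the vertical middle, as the
# initial boundary between upper and lower lip.
import numpy as np

def initial_lip_boundary(gray_roi):
    rows = gray_roi.astype(np.float64).mean(axis=1)            # mean intensity per row
    weight = 1.0 - np.abs(np.linspace(-1.0, 1.0, rows.size))   # favor the ROI middle
    score = (rows.max() - rows) * weight                       # dark rows score high
    return int(np.argmax(score))
```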


Clinical study of lip balm containing propolis extract and lip applying LED device (프로폴리스 추출물이 함유된 립밤과 LED 기기를 적용한 입술 임상 연구)

  • Moon, Ji-Sun
    • Journal of Convergence for Information Technology
    • /
    • v.12 no.5
    • /
    • pp.225-236
    • /
    • 2022
  • The purpose of this study is to provide clinical information and quantitative data through clinical research on a lip balm containing propolis extract applied to the lips together with an LED device. Participants were women aged 19 to 50 living in Seoul and Gyeonggi Province, and the study investigated the effects of the LED device and lip balm on lip elasticity, lip moisture, lip keratin, and 12-hour moisture retention. Measurements were taken before use, immediately after use, after 6 hours, after 12 hours, and after 2 weeks of use, and the safety, efficacy, and user preference of the product were analyzed. The results show that the product helps improve lip elasticity, moisture, and keratin, and sustains lip moisture for 12 hours, both after a single use and after 2 weeks of use.

Lip and Voice Synchronization Using Visual Attention (시각적 어텐션을 활용한 입술과 목소리의 동기화 연구)

  • Dongryun Yoon;Hyeonjoong Cho
    • The Transactions of the Korea Information Processing Society
    • /
    • v.13 no.4
    • /
    • pp.166-173
    • /
    • 2024
  • This study explores lip-sync detection, focusing on the synchronization between lip movements and voices in videos. Typically, lip-sync detection techniques crop the facial area of a given video and feed the lower half of the cropped box to the visual encoder to extract visual features. To place stronger emphasis on the articulatory region of the lips and obtain more accurate lip-sync detection, we propose using a pre-trained visual attention-based encoder. The Visual Transformer Pooling (VTP) module, originally designed for lip reading (predicting the transcript from visual information alone, without audio), is employed as the visual encoder. Our experimental results demonstrate that, despite having fewer learnable parameters, the proposed method outperforms the latest model, VocaList, on the LRS2 dataset, achieving a lip-sync detection accuracy of 94.5% with five context frames. Moreover, our approach outperforms VocaList by approximately 8% in lip-sync detection accuracy even on an unseen dataset, Acappella.
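
As a hedged sketch of sync scoring over five context frames, the function below assumes two hypothetical pretrained encoders (the paper uses a VTP-based visual encoder) that map a short video window and the corresponding audio window to fixed-size embeddings, and compares them with cosine similarity.

```python
# Hedged sketch of sync scoring: visual_encoder and audio_encoder are
# hypothetical pretrained modules mapping their window to (B, D) embeddings.
import torch
import torch.nn.functional as F

def sync_score(video_frames, audio_window, visual_encoder, audio_encoder):
    """video_frames: (B, 5, C, H, W) context frames; audio_window: (B, mels, T).
    Returns a cosine similarity per sample; higher means better lip sync."""
    v = F.normalize(visual_encoder(video_frames), dim=-1)
    a = F.normalize(audio_encoder(audio_window), dim=-1)
    return (v * a).sum(dim=-1)
```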

A Study on Lip Detection based on Eye Localization for Visual Speech Recognition in Mobile Environment (모바일 환경에서의 시각 음성인식을 위한 눈 정위 기반 입술 탐지에 대한 연구)

  • Gyu, Song-Min;Pham, Thanh Trung;Kim, Jin-Young;Taek, Hwang-Sung
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.19 no.4
    • /
    • pp.478-484
    • /
    • 2009
  • Automatic speech recognition (ASR) is an attractive technology in today's trend toward a more convenient life. Although many approaches have been proposed, ASR performance is still poor in noisy environments, so state-of-the-art systems use visual information in addition to audio. In this paper, we present a novel lip detection method for visual speech recognition in a mobile environment. To apply visual information to speech recognition, exact lip regions must be extracted. Because eye detection is easier than lip detection, we first detect the positions of the left and right eyes and then roughly locate the lip region. We then apply K-means clustering to divide that region into groups, and the two lip corners and the lip center are detected by choosing the largest of the clustered groups. Finally, we show the effectiveness of the proposed method through experiments on the Samsung AVSR database.
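
A rough sketch of this eye-first pipeline is given below under stated assumptions: OpenCV Haar cascades stand in for the eye detector, the lip ROI is placed below the eye midpoint at roughly one eye distance (an illustrative ratio), and cv2.kmeans groups the ROI colors; picking the corners and center from the largest group is left out.

```python
# Rough sketch of the eye-first pipeline (Haar cascades, an illustrative
# eye-to-lip distance ratio, and cv2.kmeans color grouping are assumptions).
import cv2
import numpy as np

eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_eye.xml')

def rough_lip_roi(bgr_img):
    gray = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
    eyes = eye_cascade.detectMultiScale(gray, 1.1, 5)
    if len(eyes) < 2:
        return None
    (x1, y1, w1, h1), (x2, y2, w2, h2) = sorted(eyes, key=lambda e: e[0])[:2]
    lx, rx = x1 + w1 // 2, x2 + w2 // 2
    eye_dist = rx - lx
    cx, cy = (lx + rx) // 2, y1 + h1 // 2 + int(1.2 * eye_dist)  # below the eye line
    half = eye_dist // 2
    return bgr_img[cy - half // 2:cy + half // 2, cx - half:cx + half]

def cluster_colors(roi, k=3):
    data = roi.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(data, k, None, criteria, 5, cv2.KMEANS_RANDOM_CENTERS)
    return labels.reshape(roi.shape[:2]), centers
```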

Real-time Lip Region Detection for Lipreading in Mobile Device (모바일 장치에서의 립리딩을 위한 실시간 입술 영역 검출)

  • Kim, Young-Un;Kang, Sun-Kyung;Jung, Sung-Tae
    • Journal of the Korea Society of Computer and Information
    • /
    • v.14 no.4
    • /
    • pp.39-46
    • /
    • 2009
  • Many lip region detection methods have been developed for the PC environment, but the existing methods are difficult to run in real time on resource-limited mobile devices. To solve this problem, this paper proposes a real-time lip region detection method for lipreading on mobile devices. It detects the face region using adaptive face color information and then detects the lip region using the geometric relation between the eyes and lips. The proposed method was implemented on a smartphone with an Intel PXA270 embedded processor and 386 MB of memory. Experimental results show that it runs at 9.5 frames/sec with a correct detection rate of 98.8% on 574 images.
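
The abstract does not give the adaptive color model, so the sketch below only illustrates the general face-color step: thresholding skin in the YCrCb space with commonly quoted, illustrative Cr/Cb bounds; the eye-lip geometric step would then be applied inside the resulting mask.

```python
# Illustrative face-color step only: skin thresholding in YCrCb with
# commonly quoted (not the paper's adaptive) Cr/Cb bounds.
import cv2
import numpy as np

def skin_mask(bgr_img, cr_range=(133, 173), cb_range=(77, 127)):
    ycrcb = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, cr_range[0], cb_range[0]], np.uint8)
    upper = np.array([255, cr_range[1], cb_range[1]], np.uint8)
    mask = cv2.inRange(ycrcb, lower, upper)
    # small opening to remove speckle before the eye-lip geometric step
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
```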

A Lip Detection Algorithm Using Color Clustering (색상 군집화를 이용한 입술탐지 알고리즘)

  • Jeong, Jongmyeon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.19 no.3
    • /
    • pp.37-43
    • /
    • 2014
  • In this paper, we propose a robust lip detection algorithm using color clustering. First, the AdaBoost algorithm is adopted to extract the facial region, which is then converted into the Lab color space. Because the a and b components of the Lab color space are known to express lip color and its complementary color well, we use them as the features for color clustering. Nearest-neighbor clustering is applied to separate the skin region from the facial region, and K-means color clustering is applied to extract the lip-candidate region. Geometric characteristics are then used to extract the final lip region. Experimental results show that the proposed algorithm detects the lip region robustly.
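
The a-b clustering step maps fairly directly to code. The sketch below assumes the face box is already given (e.g., by a face detector), clusters the a and b components of the Lab space with cv2.kmeans, and takes the most reddish center as the lip candidate; the nearest-neighbor skin separation and the geometric post-processing are omitted.

```python
# Sketch of the Lab a-b clustering step; the face box is assumed given, and
# "largest a component = most reddish" stands in for the lip-candidate choice.
import cv2
import numpy as np

def lip_candidate_mask(face_bgr, k=4):
    lab = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2Lab)
    ab = lab[..., 1:3].reshape(-1, 2).astype(np.float32)      # a, b features only
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(ab, k, None, criteria, 5, cv2.KMEANS_RANDOM_CENTERS)
    lip_cluster = int(np.argmax(centers[:, 0]))               # most reddish center
    return (labels.reshape(face_bgr.shape[:2]) == lip_cluster).astype(np.uint8) * 255
```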

Speech Enhancement Using Lip Information and SFM (입술정보 및 SFM을 이용한 음성의 음질향상알고리듬)

  • Baek, Seong-Joon;Kim, Jin-Young
    • Speech Sciences
    • /
    • v.10 no.2
    • /
    • pp.77-84
    • /
    • 2003
  • In this research, we find the beginning of speech and detect the stationary speech region using lip information. By taking a running average of the estimated speech signal in the stationary region, we reduce the musical noise inherent to the conventional MMSE (Minimum Mean Square Error) speech enhancement algorithm. In addition, the SFM (Spectral Flatness Measure) is incorporated to reduce speech signal estimation errors caused by speaking habits and missing lip information. Combined with Wiener filtering, the proposed algorithm shows superior performance to conventional methods in an MOS (Mean Opinion Score) test.
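
The spectral flatness measure itself is standard: the ratio of the geometric mean to the arithmetic mean of the power spectrum, near 1 for noise-like frames and near 0 for strongly voiced ones. A small sketch:

```python
# Standard spectral flatness measure: geometric mean over arithmetic mean of
# the power spectrum (near 1 for noise-like frames, near 0 for voiced speech).
import numpy as np

def spectral_flatness(frame, eps=1e-12):
    power = np.abs(np.fft.rfft(frame)) ** 2 + eps
    geometric_mean = np.exp(np.mean(np.log(power)))
    return geometric_mean / np.mean(power)
```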


Lip Detection using Color Distribution and Support Vector Machine for Visual Feature Extraction of Bimodal Speech Recognition System (바이모달 음성인식기의 시각 특징 추출을 위한 색상 분석자 SVM을 이용한 입술 위치 검출)

  • 정지년;양현승
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.4
    • /
    • pp.403-410
    • /
    • 2004
  • Bimodal speech recognition systems have been proposed to enhance ASR recognition rates in noisy environments. Visual feature extraction is very important for developing these systems, and extracting visual features requires detecting the exact lip position. This paper proposes a method that detects the lip position using a color similarity model and an SVM. The face/lip color distribution is learned and used to find the initial lip position, and the exact lip position is then detected by scanning the neighboring area with the SVM. Experiments show that this method detects the lip position accurately and quickly.
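
A compact sketch of the color-plus-SVM idea is given below with hypothetical inputs: lip_pixels and skin_pixels are (N, 3) color samples assumed to be collected beforehand, an RBF SVM is trained on them, and the neighborhood scan is reduced to classifying the pixels of a search ROI and taking their centroid.

```python
# Compact sketch with hypothetical training data: lip_pixels / skin_pixels
# are (N, 3) color samples collected beforehand; the "scan" is reduced to
# classifying every pixel of a search ROI and taking the lip centroid.
import numpy as np
from sklearn.svm import SVC

def train_lip_classifier(lip_pixels, skin_pixels):
    X = np.vstack([lip_pixels, skin_pixels]).astype(np.float64)
    y = np.hstack([np.ones(len(lip_pixels)), np.zeros(len(skin_pixels))])
    return SVC(kernel='rbf', gamma='scale').fit(X, y)

def refine_lip_position(clf, roi_bgr):
    """Return the (x, y) centroid of pixels classified as lip inside the ROI."""
    h, w = roi_bgr.shape[:2]
    pred = clf.predict(roi_bgr.reshape(-1, 3).astype(np.float64)).reshape(h, w)
    ys, xs = np.nonzero(pred)
    return (int(xs.mean()), int(ys.mean())) if xs.size else None
```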