• Title/Summary/Keyword: Lip Detection (입술탐지)

7 search results

Lip Detection Algorithm Using Color Clustering (색상 군집화를 이용한 입술탐지 알고리즘)

  • Jeong, Jongmyeon;Choi, Jiyun;Seo, Ji Hyuk;Lee, Se Jun
    • Proceedings of the Korean Society of Computer Information Conference / 2012.07a / pp.277-278 / 2012
  • In this paper, we propose a lip detection algorithm using color clustering. First, face detection is performed using the well-known AdaBoost method. The Lab color system is applied to the detected facial region, and the skin region is extracted using color markers based on the characteristics of lip pixels. The lip region is then extracted from the skin region through K-means color clustering. Experiments confirm the lip detection results.
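The Lab conversion at the heart of this pipeline can be sketched in plain Python. This is a generic sRGB-to-Lab conversion (D65 white point), not code from the paper; the AdaBoost face-detection and clustering steps are omitted.

```python
def srgb_to_lab(r, g, b):
    """Convert an 8-bit sRGB pixel to CIE Lab (D65 reference white)."""
    def inv_gamma(c):
        # Undo the sRGB gamma curve to get linear-light values in [0, 1].
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    rl, gl, bl = inv_gamma(r), inv_gamma(g), inv_gamma(b)
    # Linear RGB -> XYZ (sRGB matrix), normalized by the D65 white point.
    x = (0.4124 * rl + 0.3576 * gl + 0.1805 * bl) / 0.95047
    y = (0.2126 * rl + 0.7152 * gl + 0.0722 * bl) / 1.00000
    z = (0.0193 * rl + 0.1192 * gl + 0.9505 * bl) / 1.08883

    def f(t):
        # CIE Lab companding function.
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116

    fx, fy, fz = f(x), f(y), f(z)
    L = 116 * fy - 16          # lightness
    a = 500 * (fx - fy)        # green (-) to red (+): lips score high here
    b_out = 200 * (fy - fz)    # blue (-) to yellow (+)
    return L, a, b_out
```

Reddish lip-like pixels map to a large positive `a`, which is why the a/b plane separates lips from skin better than raw RGB.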

A Lip Detection Algorithm Using Color Clustering (색상 군집화를 이용한 입술탐지 알고리즘)

  • Jeong, Jongmyeon
    • Journal of the Korea Society of Computer and Information / v.19 no.3 / pp.37-43 / 2014
  • In this paper, we propose a robust lip detection algorithm using color clustering. First, we adopt the AdaBoost algorithm to extract the facial region and convert it into the Lab color space. Because the a and b components of the Lab color space are known to express lip color and its complementary color well, we use them as the features for color clustering. Nearest-neighbor clustering is applied to separate the skin region from the facial region, and K-means color clustering is applied to extract the lip-candidate region. Geometric characteristics are then used to extract the final lip region. Experimental results show that the proposed algorithm detects the lip region robustly.
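The K-means step on (a, b) chromaticity features could be sketched as below. The cluster count, the deterministic initialization, and the synthetic sample values are illustrative assumptions, not the paper's configuration.

```python
def kmeans(points, k, iters=20):
    """Lloyd's algorithm on 2-D points; returns (centroids, labels)."""
    # Deterministic init: spread the k initial centroids across the list
    # (assumes k >= 2 and a non-empty point list).
    centroids = [points[i * (len(points) - 1) // (k - 1)] for i in range(k)]
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest centroid by squared Euclidean distance.
        for i, (x, y) in enumerate(points):
            labels[i] = min(
                range(k),
                key=lambda c: (x - centroids[c][0]) ** 2 + (y - centroids[c][1]) ** 2,
            )
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centroids[c] = (
                    sum(px for px, _ in members) / len(members),
                    sum(py for _, py in members) / len(members),
                )
    return centroids, labels

# Synthetic (a, b) chromaticity samples: skin-like near (12, 18),
# lip-like near (35, 22) -- illustrative values only.
skin = [(12 + dx, 18 + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
lip = [(35 + dx, 22 + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
centroids, labels = kmeans(skin + lip, k=2)
```

With two well-separated color clusters like these, the lip cluster is simply the one whose centroid has the larger a value.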

A Study on Lip Detection based on Eye Localization for Visual Speech Recognition in Mobile Environment (모바일 환경에서의 시각 음성인식을 위한 눈 정위 기반 입술 탐지에 대한 연구)

  • Song, Min-Gyu;Pham, Thanh Trung;Kim, Jin-Young;Hwang, Sung-Taek
    • Journal of the Korean Institute of Intelligent Systems / v.19 no.4 / pp.478-484 / 2009
  • Automatic speech recognition (ASR) is an attractive technology in an era that seeks a more convenient life. Although many approaches have been proposed for ASR, performance is still poor in noisy environments, so the state of the art in speech recognition uses not only audio information but also visual information. In this paper, we present a novel lip detection method for visual speech recognition in a mobile environment. To apply visual information to speech recognition, exact lip regions must be extracted. Because eye detection is easier than lip detection, we first detect the positions of the left and right eyes and then roughly locate the lip region. We then apply K-means clustering to divide that region into groups, and detect the two lip corners and the lip center by choosing the biggest of the clustered groups. Finally, we show the effectiveness of the proposed method through experiments on the Samsung AVSR database.
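The coarse eye-to-lip geometry described above might look like the following sketch. The width, drop, and height ratios are illustrative guesses, not the paper's calibrated values.

```python
def lip_roi_from_eyes(left_eye, right_eye,
                      width_ratio=1.2, drop_ratio=1.1, height_ratio=0.5):
    """Return (x0, y0, x1, y1) of a rough lip search box below the eye line."""
    (lx, ly), (rx, ry) = left_eye, right_eye
    # Inter-ocular distance sets the scale of all facial proportions.
    iod = ((rx - lx) ** 2 + (ry - ly) ** 2) ** 0.5
    cx, cy = (lx + rx) / 2, (ly + ry) / 2  # midpoint between the eyes
    # The mouth center sits roughly one inter-ocular distance below the eyes.
    mx, my = cx, cy + drop_ratio * iod
    w, h = width_ratio * iod, height_ratio * iod
    return (mx - w / 2, my - h / 2, mx + w / 2, my + h / 2)

# Hypothetical eye detections in pixel coordinates (y grows downward).
roi = lip_roi_from_eyes((40, 60), (100, 60))
```

The resulting box is only a coarse candidate region; the paper refines it with K-means clustering before locating the lip corners and center.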

Lip and Voice Synchronization Using Visual Attention (시각적 어텐션을 활용한 입술과 목소리의 동기화 연구)

  • Dongryun Yoon;Hyeonjoong Cho
    • The Transactions of the Korea Information Processing Society / v.13 no.4 / pp.166-173 / 2024
  • This study explores lip-sync detection, focusing on the synchronization between lip movements and voices in videos. Typical lip-sync detection techniques crop the facial area of a given video and use the lower half of the cropped box as input to a visual encoder that extracts visual features. To place greater emphasis on the articulatory region of the lips for more accurate lip-sync detection, we propose using a pre-trained visual attention-based encoder. The Visual Transformer Pooling (VTP) module, originally designed for the lip-reading task of predicting a script from visual information alone, without audio, is employed as the visual encoder. Our experimental results demonstrate that, despite having fewer learnable parameters, the proposed method outperforms the latest model, VocaList, on the LRS2 dataset, achieving a lip-sync detection accuracy of 94.5% with five context frames. Moreover, our approach exceeds VocaList by approximately 8% in lip-sync detection accuracy even on an untrained dataset, Acappella.
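The underlying idea of sync scoring, shared by models such as VocaList, can be illustrated with a toy offset search: compare visual and audio feature sequences at several temporal shifts and pick the shift with the highest average cosine similarity. Real systems compare learned embeddings; the vectors here are synthetic.

```python
def cosine(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

def best_offset(visual, audio, max_shift=2):
    """Return the audio shift (in frames) that best aligns the two streams."""
    scores = {}
    for shift in range(-max_shift, max_shift + 1):
        sims = [
            cosine(visual[t], audio[t + shift])
            for t in range(len(visual))
            if 0 <= t + shift < len(audio)
        ]
        scores[shift] = sum(sims) / len(sims)
    return max(scores, key=scores.get)

# Synthetic per-frame features: the audio stream is the visual stream
# delayed by exactly one frame (a hypothetical out-of-sync clip).
visual = [[1, 0], [0, 1], [1, 1], [2, 1], [1, 3], [0, 2]]
audio = [[0.5, 0.5]] + visual
```

A shift of zero would indicate a perfectly synchronized clip; any other best-scoring shift flags an audio-visual offset.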

RoI Detection Method for Improving Lipreading in Speech Recognition Systems (음성인식 시스템의 입 모양 인식개선을 위한 관심영역 추출 방법)

  • Jae-Hyeok Han;Mi-Hye Kim
    • Proceedings of the Korea Information Processing Society Conference / 2023.11a / pp.299-302 / 2023
  • Lipreading is an important part of speech recognition, and various studies have been conducted to improve it. Previous work has focused mainly on observing and recognizing the region around the lips; in this paper, we compare the lipreading performance of a speech recognition system when other regions of interest, such as the chin and cheeks, are considered along with the conventional lip region. An object-detection neural network is used to automatically detect the regions of interest, and various regions of interest are tested with it. Experimental results show that the ROI containing only the lip region achieves the highest performance, with a recognition rate of 97.36%, higher than the previous average recognition rate of 93.92%.
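The ROI comparison described above could be mocked up as follows. The fractional layouts of the candidate regions are illustrative assumptions; the paper detects ROIs with an object-detection network rather than fixed proportions.

```python
def crop_roi(face_box, x_frac, y_frac, w_frac, h_frac):
    """Return a sub-box of face_box given fractional offset and size.

    Boxes are (x, y, w, h) in pixels; fractions are relative to the face box.
    """
    x, y, w, h = face_box
    return (x + x_frac * w, y + y_frac * h, w_frac * w, h_frac * h)

# Hypothetical face detection: (x, y, w, h) in pixels.
face = (100, 50, 200, 240)

# Two candidate ROIs to compare, with made-up proportions:
lip_only = crop_roi(face, 0.25, 0.65, 0.50, 0.25)         # tight lip box
lip_chin_cheeks = crop_roi(face, 0.10, 0.55, 0.80, 0.45)  # extended region
```

Each candidate ROI would then be fed to the same lipreading pipeline so that only the region, not the model, changes between runs.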

A Study on Extraction of Skin Region and Lip Using Skin Color of Eye Zone (눈 주위의 피부색을 이용한 피부영역검출과 입술검출에 관한 연구)

  • Park, Young-Jae;Jang, Seok-Woo;Kim, Gye-Young
    • Journal of the Korea Society of Computer and Information / v.14 no.4 / pp.19-30 / 2009
  • In this paper, we propose a method for detecting a face and its components in an input image, using an eye map and a mouth map to detect the eyes and mouth. First, we find the eye zone; second, we find the color-value distribution of the skin region using the color around the eye zone. The skin region has a characteristic distribution in the YCbCr color space, which we use to separate the skin region from the background. We then compute the color-value distribution of the extracted skin region, extract the surrounding region, and detect the mouth within the extracted skin region using the mouth map. Experimental results show that the proposed method outperforms traditional methods, producing a more accurate mouth region.
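The eye-zone-driven skin segmentation could be sketched like this: learn a (Cb, Cr) range from skin samples near the eyes, then threshold other pixels against it. The conversion is the standard BT.601 formula; the sample pixel values are made up for illustration.

```python
def rgb_to_ycbcr(r, g, b):
    """ITU-R BT.601 full-range RGB -> YCbCr conversion."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def skin_range(samples):
    """Per-channel (min, max) of Cb and Cr over eye-zone skin samples."""
    cbcr = [rgb_to_ycbcr(*p)[1:] for p in samples]
    cbs, crs = zip(*cbcr)
    return (min(cbs), max(cbs)), (min(crs), max(crs))

def is_skin(pixel, cb_range, cr_range):
    """Classify a pixel as skin if its chroma falls in the learned range."""
    _, cb, cr = rgb_to_ycbcr(*pixel)
    return cb_range[0] <= cb <= cb_range[1] and cr_range[0] <= cr <= cr_range[1]

# Hypothetical skin-colored pixels sampled around a detected eye zone.
eye_zone = [(205, 160, 140), (198, 150, 130), (210, 168, 148)]
cb_rng, cr_rng = skin_range(eye_zone)
```

Using chroma (Cb, Cr) while ignoring luma Y makes the threshold tolerant to lighting changes, which is the usual motivation for skin detection in YCbCr.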

Region of Interest Extraction and Bilinear Interpolation Application for Preprocessing of Lipreading Systems (입 모양 인식 시스템 전처리를 위한 관심 영역 추출과 이중 선형 보간법 적용)

  • Jae Hyeok Han;Yong Ki Kim;Mi Hye Kim
    • The Transactions of the Korea Information Processing Society / v.13 no.4 / pp.189-198 / 2024
  • Lipreading is an important part of speech recognition, and several studies have been conducted to improve the performance of lipreading systems. Recent studies have improved recognition performance by modifying the model architecture of the lipreading system; in contrast, we aim to improve recognition performance without any change to the model architecture. To do so, we refer to the cues used in human lipreading and set other regions, such as the chin and cheeks, as regions of interest alongside the lip region, the conventional region of interest of lipreading systems, and compare the recognition rate of each region of interest to identify the best-performing one. In addition, assuming that differences in normalization results caused by the choice of interpolation method during size normalization of the region of interest affect recognition performance, we interpolate the same region of interest using nearest-neighbor, bilinear, and bicubic interpolation and compare the recognition rate of each method. Each region of interest was detected by training an object-detection neural network, and dynamic time warping (DTW) templates were generated by normalizing each region of interest, extracting and combining features, and mapping a dimensionality reduction of the combined features into a low-dimensional space. The recognition rate was evaluated by comparing the distance between the generated DTW templates and the data mapped to the low-dimensional space.
In the comparison of regions of interest, the region of interest containing only the lip region achieved an average recognition rate of 97.36%, which is 3.44% higher than the 93.92% average recognition rate of the previous study. In the comparison of interpolation methods, bilinear interpolation achieved 97.36%, which is 14.65% higher than nearest-neighbor interpolation and 5.55% higher than bicubic interpolation. The code used in this study can be found at https://github.com/haraisi2/Lipreading-Systems.
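The bilinear interpolation that performed best in this study blends the four nearest source pixels, weighted by fractional distance. A minimal pure-Python sketch for a grayscale image stored as nested lists (not the authors' implementation):

```python
def bilinear_resize(img, out_h, out_w):
    """Resize a grayscale image (list of rows) to out_h x out_w."""
    in_h, in_w = len(img), len(img[0])
    out = [[0.0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            # Map each output coordinate back into source coordinates.
            y = i * (in_h - 1) / (out_h - 1) if out_h > 1 else 0.0
            x = j * (in_w - 1) / (out_w - 1) if out_w > 1 else 0.0
            y0, x0 = int(y), int(x)
            y1, x1 = min(y0 + 1, in_h - 1), min(x0 + 1, in_w - 1)
            dy, dx = y - y0, x - x0
            # Blend the four neighbors by their fractional distances.
            out[i][j] = (
                img[y0][x0] * (1 - dy) * (1 - dx)
                + img[y0][x1] * (1 - dy) * dx
                + img[y1][x0] * dy * (1 - dx)
                + img[y1][x1] * dy * dx
            )
    return out

# Upscale a 2x2 ramp to 3x3: the new center pixel is the mean of all four.
src = [[0, 100], [100, 200]]
dst = bilinear_resize(src, 3, 3)
```

Nearest-neighbor interpolation would instead copy one of the four pixels outright, which discards the sub-pixel detail this paper found useful for lipreading.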