• Title/Summary/Keyword: Lip Detection (입술 검출)


Lip Detection from Real-time Image (실시간 영상으로부터 입술 검출에 관한 연구)

  • Kim, Jong-Su; Hahn, Sang-Il; Seo, Bo-Kug; Cha, Hyung-Tai
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.11a / pp.125-128 / 2009
  • In this paper, we propose a method for detecting the lip region in real-time images. The proposed method removes unnecessary noise by restricting the image to the skin-color range and then detects the face using Haar-like features. A lip candidate region is then separated from the detected face region using the geometric structure of the face, and the lip-color range is extracted with the proposed Cb and Cr values. Finally, a more accurate lip region is obtained by applying the Haar-like feature once more within the detected lip-color region. Experiments show that the proposed algorithm achieves a higher detection rate and a wider range of applicability than existing algorithms.
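A minimal OpenCV sketch of the pipeline described above: skin-color masking, Haar-like face detection, a geometric lip candidate region, and a Cb/Cr lip-color threshold. The Cr/Cb ranges and the lower-third face geometry are illustrative placeholders, not the values proposed in the paper.

```python
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_lips(bgr):
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    # 1. Suppress non-skin pixels with a coarse Cr/Cb skin range (illustrative values).
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    masked = cv2.bitwise_and(bgr, bgr, mask=skin)

    # 2. Haar-like face detection on the skin-masked image.
    gray = cv2.cvtColor(masked, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.1, 4)
    for (x, y, w, h) in faces:
        # 3. Lip candidate region from face geometry: lower third of the face box.
        roi = ycrcb[y + 2 * h // 3 : y + h, x : x + w]
        # 4. Lip-color range in Cb/Cr (placeholder thresholds, not the paper's).
        lip_mask = cv2.inRange(roi, (0, 150, 100), (255, 200, 130))
        yield (x, y + 2 * h // 3, w, h - 2 * h // 3), lip_mask
```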


Lip Detection using Color Distribution and Support Vector Machine for Visual Feature Extraction of Bimodal Speech Recognition System (바이모달 음성인식기의 시각 특징 추출을 위한 색상 분석과 SVM을 이용한 입술 위치 검출)

  • 정지년; 양현승
    • Journal of KIISE: Software and Applications / v.31 no.4 / pp.403-410 / 2004
  • Bimodal speech recognition systems have been proposed to improve the recognition rate of ASR in noisy environments. Visual feature extraction is essential to building such systems, and extracting visual features requires detecting the exact lip position. This paper proposes a method that detects the lip position using a color similarity model and an SVM. The face/lip color distribution is learned, and the initial lip position is found from it. The exact lip position is then detected by scanning the neighboring area with the SVM. Experiments show that this method detects the lip position accurately and quickly.
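A rough sketch of the coarse-to-fine idea: a color model supplies an initial lip position, and an SVM trained on lip / non-lip patches rescans the neighborhood to refine it. The patch size, search radius, and the assumption of a grayscale patch classifier are illustrative choices, not taken from the paper.

```python
import numpy as np
from sklearn.svm import SVC

def refine_lip_position(gray_image, init_xy, svm: SVC, patch=24, search=8):
    """Scan a small neighborhood around the color-model estimate with a trained SVM.

    `svm` is assumed to be an SVC already fitted on flattened lip / non-lip
    patches of size `patch` x `patch`; `init_xy` comes from the color model.
    """
    x0, y0 = init_xy
    best, best_score = init_xy, -np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            x, y = x0 + dx, y0 + dy
            p = gray_image[y:y + patch, x:x + patch]
            if p.shape != (patch, patch):          # skip windows that fall off the image
                continue
            score = svm.decision_function(p.reshape(1, -1))[0]  # larger = more lip-like
            if score > best_score:
                best, best_score = (x, y), score
    return best
```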

A Study on Enhancing the Performance of Detecting Lip Feature Points for Facial Expression Recognition Based on AAM (AAM 기반 얼굴 표정 인식을 위한 입술 특징점 검출 성능 향상 연구)

  • Han, Eun-Jung; Kang, Byung-Jun; Park, Kang-Ryoung
    • The KIPS Transactions: Part B / v.16B no.4 / pp.299-308 / 2009
  • AAM (Active Appearance Model) is an algorithm that extracts facial feature points with statistical models of shape and texture information based on PCA (Principal Component Analysis). It is widely used for face recognition, face modeling, and expression recognition. However, the detection performance of the AAM algorithm is sensitive to the initial value, and detection error increases when an input image differs considerably from the training data. In particular, the algorithm is accurate for closed lips, but detection error grows for opened or deformed lips as the user's facial expression changes. To solve these problems, we propose an improved AAM algorithm that uses lip feature points extracted by a new lip detection algorithm. In this paper, a searching region is selected based on the facial feature points detected by the AAM algorithm, and lip corner points are extracted with Canny edge detection and histogram projection in that searching region. The lip region is then accurately detected by combining the color and edge information of the lip in a searching region adjusted around the detected lip corners. On this basis, both the accuracy and the processing speed of lip detection are improved. Experimental results showed that the RMS (Root Mean Square) error of the proposed method was reduced by as much as 4.21 pixels compared with using the AAM algorithm alone.
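An illustrative sketch of the lip-corner step only: Canny edges inside the searching region, then column and row projections of the edge map to locate the corner positions. The Canny thresholds are placeholders, and the selection of the searching region from the AAM points is assumed to have been done already.

```python
import cv2
import numpy as np

def lip_corners(search_region_gray):
    """Return approximate (x, y) positions of the left and right lip corners."""
    edges = cv2.Canny(search_region_gray, 50, 150)
    cols = np.nonzero(edges.sum(axis=0))[0]        # horizontal projection of edge pixels
    rows = np.nonzero(edges.sum(axis=1))[0]        # vertical projection of edge pixels
    if cols.size == 0 or rows.size == 0:
        return None
    left_x, right_x = int(cols[0]), int(cols[-1])  # outermost edge columns ~ lip corners
    mid_y = int(rows.mean())                       # rough vertical position of the mouth line
    return (left_x, mid_y), (right_x, mid_y)
```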

Real-time Lip Region Detection for Lipreading in Mobile Device (모바일 장치에서의 립리딩을 위한 실시간 입술 영역 검출)

  • Kim, Young-Un; Kang, Sun-Kyung; Jung, Sung-Tae
    • Journal of the Korea Society of Computer and Information / v.14 no.4 / pp.39-46 / 2009
  • Many lip region detection methods have been developed for the PC environment, but the existing methods are difficult to run in real time on resource-limited mobile devices. To solve this problem, this paper proposes a real-time lip region detection method for lipreading on a mobile device. It detects the face region by using adaptive face color information, and then detects the lip region by using the geometrical relation between the eyes and the lips. The proposed method was implemented on a smartphone with an Intel PXA 270 embedded processor and 386MB of memory. Experimental results show that the proposed method runs at 9.5 frames/sec, and the correct detection rate was 98.8% for 574 images.
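A small sketch of the geometric step: once both eye centers are known, the lip region is predicted from the inter-ocular distance. The ratios below are illustrative assumptions, not the constants used in the paper.

```python
import numpy as np

def lip_roi_from_eyes(left_eye, right_eye):
    """Predict a lip bounding box (x, y, w, h) from the two eye centers."""
    left_eye = np.asarray(left_eye, dtype=float)
    right_eye = np.asarray(right_eye, dtype=float)
    center = (left_eye + right_eye) / 2.0
    d = np.linalg.norm(right_eye - left_eye)        # inter-ocular distance
    cx, cy = center[0], center[1] + 1.1 * d         # mouth center ~1.1*d below the eye line
    w, h = 1.2 * d, 0.6 * d                         # lip box proportional to d (assumed ratios)
    return int(cx - w / 2), int(cy - h / 2), int(w), int(h)
```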

Real Time Speaker Close-Up and Tracking System Using the Lip Varying Informations (입술 움직임 변화량을 이용한 실시간 화자의 클로즈업 및 트레킹 시스템 구현)

  • 양운모; 장언동; 윤태승; 곽내정; 안재형
    • Proceedings of the Korea Multimedia Society Conference / 2002.05d / pp.547-552 / 2002
  • In this paper, we implement a real-time speaker close-up system that uses lip motion information in input video containing several people. After the speaker is detected in video captured through a color CCD camera, a second camera closes up on the speaker using the lip motion information. The implemented system detects each person's face and lip region using facial color and shape information, and then identifies the speaker from the variation of the lip region. A PTZ (Pan/Tilt/Zoom) camera is used to close up on the detected speaker, and the camera is controlled through an RS-232C serial port. Experimental results show that the speaker is detected accurately in video containing three or more people and that the face of a moving speaker can be tracked.
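A sketch of the speaker-selection rule described above: the lip-region size of each detected face is tracked over a short window, and the face whose lip region varies the most is picked as the speaker. The window length and the use of a simple pixel count as the lip-area measure are assumptions for illustration.

```python
from collections import defaultdict, deque
import numpy as np

WINDOW = 15                                        # frames used to measure lip motion (assumed)
history = defaultdict(lambda: deque(maxlen=WINDOW))

def update_and_pick_speaker(lip_areas):
    """lip_areas: dict mapping face id -> lip-region pixel count in the current frame."""
    for face_id, area in lip_areas.items():
        history[face_id].append(area)
    # The speaker is the face whose lip area fluctuates the most over the window.
    if not history:
        return None
    return max(history, key=lambda face_id: np.std(history[face_id]))
```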


Real Time Lip Reading System Implementation in Embedded Environment (임베디드 환경에서의 실시간 립리딩 시스템 구현)

  • Kim, Young-Un; Kang, Sun-Kyung; Jung, Sung-Tae
    • The KIPS Transactions: Part B / v.17B no.3 / pp.227-232 / 2010
  • This paper proposes a real-time lip reading method for the embedded environment. The embedded environment has limited resources compared with the PC environment, so it is hard to run a lip reading system designed for the PC in real time on an embedded device. To solve this problem, this paper presents lip region detection, lip feature extraction, and spoken-word recognition methods suited to the embedded environment. First, the face region is detected using face color information so that the lip region can be located accurately; the exact lip region is then found from the positions of both eyes in the detected face region and their geometric relations. To obtain features robust to the lighting variations of changing surroundings, histogram matching, lip folding, and a RASTA filter were applied, and features extracted with principal component analysis (PCA) were used for recognition. Tests in an embedded environment with an 806 MHz CPU and 128 MB of RAM showed a processing time between 1.15 and 2.35 seconds per utterance and a recognition rate of 77%, with 139 of 180 words recognized.
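A minimal sketch of the PCA feature step: normalized lip images are projected onto a PCA basis and the leading coefficients serve as the visual feature vector. The lip-image size and the number of components are placeholders, not the values used in the paper.

```python
import numpy as np
from sklearn.decomposition import PCA

LIP_SHAPE = (32, 64)                 # assumed normalized lip-image size (rows, cols)
pca = PCA(n_components=20)           # assumed number of retained components

def fit_pca(lip_images):
    """lip_images: iterable of grayscale lip images already resized to LIP_SHAPE."""
    X = np.stack([img.reshape(-1) for img in lip_images]).astype(np.float64)
    pca.fit(X)

def lip_features(lip_image):
    """Project one normalized lip image onto the learned basis (20-D feature vector)."""
    return pca.transform(lip_image.reshape(1, -1))[0]
```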

Voice Activity Detection Using Ellipse Fitting of the Oral Cavity Region (구강 영역에 대한 타원 근사법을 이용한 음성 구간 검출법)

  • Ryu, Jewoong; Choo, Sung Kwon; Kim, Gibak; Cho, Namik
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2012.07a / pp.271-274 / 2012
  • Voice activity detection, which is widely used in speech signal processing, usually decides whether speech is present by analyzing the acoustic signal. However, acoustic approaches have the drawback that their performance depends on speech or non-speech noise and the surrounding acoustic environment. To perform voice activity detection that is robust to changes in the acoustic environment, methods using visual information have recently been studied. Existing methods either estimate changes in lip shape with lip models or detect voice activity from the change in the number of pixels belonging to the oral cavity region. The former requires complex computation to estimate the lip shape, while the latter is somewhat inaccurate because it uses only the oral-cavity pixel count without estimating the lip shape. In this paper, we propose a method that approximates the shape of the exposed oral cavity region with an ellipse and detects voice activity using the changes in the ellipse's area and height. Comparative experiments confirmed that the proposed method outperforms the method that uses only the change in the number of oral-cavity pixels.
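A sketch of the ellipse-fitting idea: segment the dark oral-cavity pixels inside a mouth ROI, fit an ellipse to the largest contour, and track the variation of its axes over time as the voice-activity cue. The intensity threshold and the segmentation itself are assumptions; the paper's decision rule on the ellipse parameters is not reproduced here.

```python
import cv2
import numpy as np

def oral_cavity_ellipse(mouth_roi_bgr):
    """Fit an ellipse to the oral cavity region and return its (width, height)."""
    gray = cv2.cvtColor(mouth_roi_bgr, cv2.COLOR_BGR2GRAY)
    # The open oral cavity is much darker than the surrounding lips and skin.
    _, cavity = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(cavity, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    c = max(contours, key=cv2.contourArea)
    if len(c) < 5:                                 # cv2.fitEllipse needs at least 5 points
        return None
    (_, _), (width, height), _ = cv2.fitEllipse(c)
    return width, height                           # track these over frames for VAD
```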


Lips Detection by Probability Map Based Genetic Algorithm (확률맵 기반 유전자 알고리즘에 의한 입술영역 검출)

  • Hwang Dong-Guk; Kim Tae-Ick; Park Cheon-Joo; Jun Byung-Min; Park Hee-Jung
    • The Journal of the Korea Contents Association / v.4 no.4 / pp.79-87 / 2004
  • In this paper, we propose a probability-map-based genetic algorithm to detect lips in a portrait image. The conventional genetic algorithm, normally used to obtain a single optimal solution, is modified to obtain multiple optimal solutions for lip detection. Each individual consists of two chromosomes representing the spatial coordinates x and y. The algorithm also introduces a preserving zone in the population, a modified uniform crossover, and selection without duplicate individuals. Using a probability map of the H and S components, the proposed algorithm adapts to the segmentation of objects with similar colors. In experiments, we analyzed the relationships of the primary parameters and found that the algorithm can easily be applied to the detection of other ROIs.
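A toy sketch of the probability-map GA: each individual is an (x, y) position, its fitness is the lip-color probability at that pixel, and new individuals come from uniform crossover of the coordinate chromosomes plus mutation. The elite "preserving zone" and duplicate-free selection of the paper are simplified here, and all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def ga_lip_positions(prob_map, pop=60, elite=10, gens=50, sigma=3.0):
    """Return `elite` candidate lip positions found on a 2-D lip-color probability map."""
    h, w = prob_map.shape
    xs = rng.integers(0, w, pop)
    ys = rng.integers(0, h, pop)
    for _ in range(gens):
        fitness = prob_map[ys, xs]                 # fitness = probability at each position
        order = np.argsort(-fitness)
        xs, ys = xs[order], ys[order]              # best individuals first (preserving zone)
        n = pop - elite
        pa = rng.integers(0, elite, n)             # parents drawn from the elite
        pb = rng.integers(0, elite, n)
        take_x = rng.random(n) < 0.5               # uniform crossover, chromosome-wise
        take_y = rng.random(n) < 0.5
        child_x = np.where(take_x, xs[pa], xs[pb]) + rng.normal(0, sigma, n)
        child_y = np.where(take_y, ys[pa], ys[pb]) + rng.normal(0, sigma, n)
        xs = np.concatenate([xs[:elite], np.clip(child_x, 0, w - 1).astype(int)])
        ys = np.concatenate([ys[:elite], np.clip(child_y, 0, h - 1).astype(int)])
    return np.stack([xs[:elite], ys[:elite]], axis=1)   # multiple candidate positions
```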


Real Time Speaker Close-Up System using The Lip Motion Informations (입술 움직임 정보를 이용한 실시간 화자 클로즈업 시스템 구현)

  • 권혁봉; 장언동; 윤태승; 안재형
    • Journal of Korea Multimedia Society / v.4 no.6 / pp.510-517 / 2001
  • In this paper, we implement a real-time speaker close-up system that uses lip motion information in input video containing several people. After a speaker is detected in the moving pictures captured by one color CCD camera, the other camera closes up on the speaker using the lip motion information. The implemented system detects the face and lip area of each person by means of facial color and morphological information, and then finds the speaker by using the lip area variation. A PTZ (Pan/Tilt/Zoom) camera is used to close up on the detected speaker, and it is controlled through an RS-232C serial port. Consequently, we can accurately detect a speaker in input video including more than three people.
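A heavily hedged sketch of the camera-control side: the paper only states that the PTZ camera is driven over an RS-232C serial port, not which protocol or port settings are used. This example assumes a Pelco-D style command frame and an arbitrary serial port purely for illustration.

```python
import serial  # pyserial

PAN_LEFT, PAN_RIGHT = 0x04, 0x02                   # Pelco-D command2 bits (assumed protocol)

def pelco_d_frame(address, cmd2, pan_speed=0x20, tilt_speed=0x00):
    """Build a Pelco-D frame: sync, address, cmd1, cmd2, data1, data2, checksum."""
    body = [address, 0x00, cmd2, pan_speed, tilt_speed]
    checksum = sum(body) % 256
    return bytes([0xFF] + body + [checksum])

if __name__ == "__main__":
    # Port name and baud rate are assumptions, not values from the paper.
    port = serial.Serial("/dev/ttyS0", 9600, timeout=0.1)
    port.write(pelco_d_frame(address=1, cmd2=PAN_RIGHT))   # pan toward the detected speaker
```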


A Method of Eye and Lip Region Detection using Faster R-CNN in Face Image (초고속 R-CNN을 이용한 얼굴영상에서 눈 및 입술영역 검출방법)

  • Lee, Jeong-Hwan
    • Journal of the Korea Convergence Society / v.9 no.8 / pp.1-8 / 2018
  • In biometric security fields such as face and iris recognition, it is essential to extract facial features such as the eyes and lips. In this paper, we study a method of detecting the eye and lip regions in face images using Faster R-CNN. Faster R-CNN is an object detection method based on deep learning and is well known to outperform conventional feature-based methods. In this paper, feature maps are extracted by applying convolution, rectified linear activation, and max pooling to the facial images in order. The RPN (region proposal network) is trained on the feature maps to generate region proposals. Then, eye and lip detectors are trained using the region proposals and the feature maps. To examine the performance of the proposed method, we experimented with 800 face images of Korean men and women, using 480 images for training and 320 for testing. Computer simulation showed that, after 50 training epochs, the average precision of eye and lip region detection was 97.7% and 91.0%, respectively.
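A sketch of a comparable detector set-up using torchvision's Faster R-CNN with a ResNet-50 FPN backbone as a stand-in for the network described above; the backbone choice, the three classes (background, eye, lip), and the dataset handling are assumptions for illustration, and the new head must be fine-tuned on eye/lip annotations before the outputs are meaningful.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Pretrained Faster R-CNN; replace the box predictor with a 3-class head (bg / eye / lip).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=3)

model.eval()
with torch.no_grad():
    image = torch.rand(3, 480, 640)               # placeholder face image tensor in [0, 1]
    pred = model([image])[0]                      # dict with "boxes", "labels", "scores"
    print(pred["boxes"].shape, pred["labels"], pred["scores"])
```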