• Title/Summary/Keyword: 립리딩 (lip reading)

18 search results

Design & Implementation of Real-Time Lipreading System using PC Camera (PC카메라를 이용한 실시간 립리딩 시스템 설계 및 구현)

  • 이은숙;이지근;이상설;정성태
    • Proceedings of the Korea Multimedia Society Conference / 2003.11a / pp.310-313 / 2003
  • Recently, lip reading has attracted much attention as an application area of multimodal interface technology. The main problems to be solved in a lip-reading system that uses dynamic images are extracting the face and lip regions independently of changing conditions, and recognizing the input lip images in real time rather than offline, so as to increase the usability of lip reading. In this paper, we implemented a lip-reading system that captures images with an easy-to-use PC camera and performs learning and recognition in real time. To extract the face and lip regions of a moving speaker independently of changes in color and lighting, we used the HSI model. By building a hue histogram model over a fixed-size region of the input image and applying it to the hue image, we obtained the probability distribution of the face region, and detected and tracked the face region with the mean-shift algorithm. PCA, an image-based method, was used for feature extraction, and HMM-based pattern recognition allowed learning and recognition of the experimental image data in real time.

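The hue-histogram and mean-shift steps described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the histogram of a sample face patch is back-projected onto the hue image, and a mean-shift iteration moves the search window toward the mode of the probability map. The image sizes, hue values, and window sizes are invented for the example.

```python
# Sketch (assumed details): hue histogram back-projection + mean-shift tracking.
import numpy as np

def hue_histogram(patch, bins=32):
    """Normalized hue histogram of a patch of hue values in [0, 180)."""
    hist, _ = np.histogram(patch, bins=bins, range=(0, 180))
    return hist / max(hist.sum(), 1)

def back_project(hue_img, hist, bins=32):
    """Replace each pixel's hue with its probability under the model."""
    idx = np.clip((hue_img / 180.0 * bins).astype(int), 0, bins - 1)
    return hist[idx]

def mean_shift(prob, window, n_iter=10):
    """Shift an (x, y, w, h) window toward the centroid of probability mass."""
    x, y, w, h = window
    for _ in range(n_iter):
        roi = prob[y:y + h, x:x + w]
        m = roi.sum()
        if m == 0:
            break
        ys, xs = np.mgrid[0:h, 0:w]
        cx = int((roi * xs).sum() / m)       # centroid inside the window
        cy = int((roi * ys).sum() / m)
        nx = min(max(x + cx - w // 2, 0), prob.shape[1] - w)
        ny = min(max(y + cy - h // 2, 0), prob.shape[0] - h)
        if (nx, ny) == (x, y):
            break                            # converged on the mode
        x, y = nx, ny
    return x, y, w, h

# Synthetic example: a skin-hued blob on a uniform background.
hue = np.full((120, 160), 90.0)
hue[70:90, 50:70] = 10.0                     # skin-like hue region
hist = hue_histogram(hue[70:90, 50:70])      # model built from a sample patch
prob = back_project(hue, hist)
win = mean_shift(prob, (45, 60, 30, 30))
print(win)                                   # window settles over the blob
```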
Design of an Efficient VLSI Architecture and Verification using FPGA-implementation for HMM(Hidden Markov Model)-based Robust and Real-time Lip Reading (HMM(Hidden Markov Model) 기반의 견고한 실시간 립리딩을 위한 효율적인 VLSI 구조 설계 및 FPGA 구현을 이용한 검증)

  • Lee Chi-Geun;Kim Myung-Hun;Lee Sang-Seol;Jung Sung-Tae
    • Journal of the Korea Society of Computer and Information / v.11 no.2 s.40 / pp.159-167 / 2006
  • Lipreading has been suggested as one of the methods to improve the performance of speech recognition in noisy environments. However, existing methods have been developed and implemented only in software. This paper suggests a hardware design for real-time lipreading. For real-time processing and feasible implementation, we decompose the lipreading system into three parts: an image acquisition module, a feature vector extraction module, and a recognition module. The image acquisition module captures input images using a CMOS image sensor. The feature vector extraction module extracts a feature vector from the input image using a parallel block matching algorithm, which is coded and simulated as an FPGA circuit. The recognition module uses an HMM-based recognition algorithm, coded and simulated on a DSP chip. The simulation results show that a real-time lipreading system can be implemented in hardware.

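A software sketch of the block matching step that the paper maps onto parallel hardware: for a block of the current frame, search a small window of the previous frame for the offset with minimum sum of absolute differences (SAD). The block size, search range, and toy frames are assumptions for illustration.

```python
# Sketch (assumed parameters): SAD-based block matching between two frames.
import numpy as np

def best_offset(prev, cur, y, x, bs=8, search=4):
    """Return the (dy, dx) into `prev` minimizing SAD for cur's block at (y, x)."""
    block = cur[y:y + bs, x:x + bs].astype(int)
    best, best_dv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + bs > prev.shape[0] or xx + bs > prev.shape[1]:
                continue                     # candidate falls outside the frame
            sad = np.abs(prev[yy:yy + bs, xx:xx + bs].astype(int) - block).sum()
            if best is None or sad < best:
                best, best_dv = sad, (dy, dx)
    return best_dv

# Synthetic frames: an 8x8 bright square moves 2 px right and 1 px down.
prev = np.zeros((32, 32), dtype=np.uint8)
prev[8:16, 8:16] = 200
cur = np.zeros_like(prev)
cur[9:17, 10:18] = 200
print(best_offset(prev, cur, 9, 10))  # → (-1, -2): the block came from 1 px up, 2 px left
```

In the hardware design, the SAD computations for all candidate offsets can run concurrently, which is what makes the real-time constraint feasible.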
A Study on Analysis of Variant Factors of Recognition Performance for Lip-reading at Dynamic Environment (동적 환경에서의 립리딩 인식성능저하 요인분석에 대한 연구)

  • 신도성;김진영;이주헌
    • The Journal of the Acoustical Society of Korea / v.21 no.5 / pp.471-477 / 2002
  • Recently, lip-reading has been actively studied as an auxiliary method for automatic speech recognition (ASR) in noisy environments. However, most research results have been obtained on databases constructed under indoor conditions, so we do not know how robust the developed lip-reading algorithms are to dynamic variation of the image. We have developed a lip-reading system based on an image-transform-based algorithm; it recognizes 22 words and achieves word recognition of up to 53.54%. In this paper we examine how stable the lip-reading system is under environmental variation and what the main factors are behind the drop in word-recognition performance. To study lip-reading robustness we consider spatial variance (translation, rotation, scaling) and illumination variance. Two kinds of test data are used: a simulated lip image database and a real dynamic database captured in a car environment. Our experiments show that spatial variance is one of the factors degrading lip-reading performance, but it is not the most important one: illumination variance reduces recognition rates severely, by as much as 70%. In conclusion, lip-reading algorithms robust against illumination variance should be developed if lip reading is to serve as a complementary method for ASR.

Time domain Filtering of Image for Lip-reading Enhancement (시간영역 이미지 필터링에 의한 립리딩 성능 향상)

  • Lee Jeeeun;Kim Jinyoung;Lee Joohun
    • Proceedings of the Acoustical Society of Korea Conference / autumn / pp.45-48 / 2001
  • Lip reading has been studied as bimodal speech recognition that uses visual information to improve speech recognition performance in noisy environments [1][2]. Lip reading using visual information has already been implemented, but systems to date are not robust to environmental change. In this paper, we apply an image-based lip-reading method that locates the lip region more reliably and thereby improves performance. Because this method must process a large amount of data, a preprocessing stage is required. For preprocessing, we convert the input image to gray level, fold the lip image in half, and apply principal component analysis (PCA). To further improve recognition, we apply a RASTA (Relative Spectral) filter, which has proven effective for noise removal and analysis/synthesis in speech, to remove components that vary too slowly or too rapidly in the time domain, along with other noise. As a result, a high recognition rate of 72.7% was obtained.

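The preprocessing chain described above (gray-level conversion, lip folding, PCA) can be sketched roughly as follows. The image sizes, the number of retained components, and the random data are illustrative assumptions, not the paper's settings.

```python
# Sketch (assumed sizes): grayscale conversion, lip folding, PCA via SVD.
import numpy as np

rng = np.random.default_rng(0)
frames_rgb = rng.random((50, 16, 32, 3))              # 50 lip frames, 16x32, RGB

gray = frames_rgb @ np.array([0.299, 0.587, 0.114])   # luminance conversion
left, right = gray[:, :, :16], gray[:, :, 16:]
folded = (left + right[:, :, ::-1]) / 2.0             # fold the lips in half
X = folded.reshape(50, -1)                            # one row per frame

# PCA on mean-centered data; keep the top 10 principal components.
mean = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
features = (X - mean) @ Vt[:10].T                     # 50 x 10 feature vectors
print(features.shape)                                 # (50, 10)
```

Folding exploits the rough left-right symmetry of the mouth to halve the pixel count before PCA; the RASTA filtering mentioned in the abstract would then be applied along the time axis of these feature trajectories.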
Design & Implementation of Lipreading System using Robust Lip Area Extraction (견고한 입술 영역 추출을 이용한 립리딩 시스템 설계 및 구현)

  • 이은숙;이호근;이지근;김봉완;이상설;이용주;정성태
    • Proceedings of the Korea Multimedia Society Conference / 2003.05b / pp.524-527 / 2003
  • Recently, lip reading has attracted much attention as an application area of multimodal interface technology. The main problem to be solved in a lip-reading system that uses dynamic images is extracting the face and lip regions independently of changing conditions. In this paper, we used the HSI model and block matching to extract the face and lip regions of a moving speaker independently of changes in color and lighting, and PCA, an image-based method, for feature extraction. HMM-based pattern recognition was applied separately to the extracted lip parameters and to the audio data to recognize words, and the two recognition results were merged with weights. Experimental results show that when noise lowers the speech recognition rate, using speech recognition together with lip reading improves the overall recognition result.

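The weighted merging of the audio and visual recognition results can be sketched as a simple late fusion of per-word log-likelihoods, with a weight that shifts toward the lip reader as noise increases. The scores and the weight values here are made up for illustration; the paper does not specify this exact scheme.

```python
# Sketch (assumed scores): weighted late fusion of audio and visual HMM results.
def fuse(audio_ll, visual_ll, audio_weight=0.7):
    """Combine per-word log-likelihoods; return the best-scoring word."""
    fused = {w: audio_weight * audio_ll[w] + (1 - audio_weight) * visual_ll[w]
             for w in audio_ll}
    return max(fused, key=fused.get)

audio_ll = {"open": -10.0, "close": -14.0}   # audio HMM prefers "open"
visual_ll = {"open": -13.0, "close": -9.0}   # lip-reading HMM prefers "close"

# In clean speech the audio recognizer dominates...
print(fuse(audio_ll, visual_ll, audio_weight=0.7))   # → "open"
# ...but in heavy noise the weight shifts toward the lip reader.
print(fuse(audio_ll, visual_ll, audio_weight=0.2))   # → "close"
```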
A Study on Lip-reading Enhancement Using Time-domain Filter (시간영역 필터를 이용한 립리딩 성능향상에 관한 연구)

  • 신도성;김진영;최승호
    • The Journal of the Acoustical Society of Korea / v.22 no.5 / pp.375-382 / 2003
  • Bimodal lip-reading techniques aim to enhance the speech recognition rate in noisy environments. Detecting the correct lip image is most important, but stable performance is hard to achieve in dynamic environments because many factors degrade lip-reading performance: illumination change, the speaker's pronunciation habits, the variety of lip shapes, and rotation or size changes of the lips. In this paper, we propose IIR filtering in the time domain for stable performance; digital filtering in the time domain is well suited to removing noise from the signal and enhancing recognition performance. While lip reading on the whole lip image produces a massive amount of data, principal component analysis in the preprocessing stage reduces the data quantity by extracting features without loss of image information. To observe speech recognition performance using image information alone, we ran recognition experiments on 22 words useful for in-car services, using the hidden Markov model as the recognition algorithm to compare word recognition performance. As a result, while the recognition rate of lip reading using PCA alone is 64%, applying the time-domain filter raises the recognition rate to 72.4%.
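The time-domain IIR idea above can be illustrated with a first-order low-pass filter applied to a feature trajectory: each PCA coefficient is a time series across frames, and smoothing suppresses frame-to-frame jitter. The filter coefficient and the toy trajectory are assumptions, not the paper's actual filter design.

```python
# Sketch (assumed filter): first-order IIR low-pass over a feature trajectory.
import numpy as np

def iir_lowpass(x, alpha=0.6):
    """y[t] = alpha * y[t-1] + (1 - alpha) * x[t], applied along time."""
    y = np.empty_like(x, dtype=float)
    y[0] = x[0]
    for t in range(1, len(x)):
        y[t] = alpha * y[t - 1] + (1 - alpha) * x[t]
    return y

# A slow lip-opening trend contaminated with fast 3-frame jitter.
t = np.arange(60)
trend = np.sin(2 * np.pi * t / 60)
noisy = trend + 0.5 * np.sin(2 * np.pi * t / 3)
smooth = iir_lowpass(noisy)

# The filtered trajectory is closer to the underlying trend.
print(np.abs(noisy - trend).mean(), np.abs(smooth - trend).mean())
```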

Design and Implementation of a Real-Time Lipreading System Using PCA & HMM (PCA와 HMM을 이용한 실시간 립리딩 시스템의 설계 및 구현)

  • Lee Chi-Geun;Lee Eun-Suk;Jung Sung-Tae;Lee Sang-Seol
    • Journal of Korea Multimedia Society / v.7 no.11 / pp.1597-1609 / 2004
  • Many lipreading systems have been proposed to compensate for the drop in the speech recognition rate in noisy environments. Previous lipreading systems work only under specific conditions such as artificial lighting and a predefined background color. In this paper, we propose a real-time lipreading system which allows the speaker to move and relaxes the restrictions on color and lighting conditions. The proposed system extracts the face and lip regions and the essential visual information in real time from an input video sequence captured with a common PC camera, and recognizes uttered words from that visual information in real time. It uses a hue histogram model to extract the face and lip regions, the mean shift algorithm to track the face of a moving speaker, PCA (Principal Component Analysis) to extract the visual information for learning and testing, and HMM (Hidden Markov Model) as the recognition algorithm. The experimental results show that our system achieves a recognition rate of 90% for speaker-dependent lipreading and increases the speech recognition rate by up to 40~85% depending on the noise level when combined with audio speech recognition.

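The HMM recognition stage shared by these systems can be illustrated with a toy Viterbi word scorer: each word has a small left-to-right HMM over quantized visual symbols, and the word whose model scores the observed sequence highest wins. The two word models and the observation sequence are invented for illustration.

```python
# Sketch (assumed models): Viterbi scoring of word HMMs over visual symbols.
import numpy as np

def viterbi_ll(obs, start, trans, emit):
    """Log-probability of the best state path for an observation sequence."""
    delta = np.log(start) + np.log(emit[:, obs[0]])
    for o in obs[1:]:
        delta = np.max(delta[:, None] + np.log(trans), axis=0) + np.log(emit[:, o])
    return delta.max()

# A shared 2-state left-to-right topology over 2 observation symbols.
start = np.array([1.0, 1e-8])
trans = np.array([[0.6, 0.4],
                  [1e-8, 1.0]])
word_models = {
    "yes": np.array([[0.9, 0.1], [0.2, 0.8]]),   # tends to emit 0 then 1
    "no":  np.array([[0.2, 0.8], [0.9, 0.1]]),   # tends to emit 1 then 0
}

obs = [0, 0, 1, 1]   # quantized lip-shape symbols over four frames
scores = {w: viterbi_ll(obs, start, trans, emit) for w, emit in word_models.items()}
print(max(scores, key=scores.get))   # → "yes"
```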
Real Time Lip Reading System Implementation in Embedded Environment (임베디드 환경에서의 실시간 립리딩 시스템 구현)

  • Kim, Young-Un;Kang, Sun-Kyung;Jung, Sung-Tae
    • The KIPS Transactions:PartB / v.17B no.3 / pp.227-232 / 2010
  • This paper proposes a real-time lip reading method for the embedded environment. The embedded environment has limited resources compared to the PC environment, so a lip reading system built for the PC cannot run in real time on embedded hardware. To solve this problem, this paper suggests methods for lip region detection, lip feature extraction, and recognition of uttered words suited to the embedded environment. First, the face region is detected using face color information, and then the exact lip region is found by locating both eyes within the detected face region and using their geometric relation. To extract features robust to lighting variation in changing surroundings, histogram matching, lip folding, and the RASTA filter were applied, and the features extracted by principal component analysis (PCA) were used for recognition. Tests showed a processing speed between 1.15 and 2.35 sec per utterance in an embedded environment with an 806 MHz CPU and 128 MB RAM, and a recognition rate of 77%, with 139 of 180 words recognized.
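The geometric step above can be sketched simply: once both eyes are located, the lip region is estimated from the eye positions alone. The proportions used here (mouth center about one eye-distance below the eye midpoint, box scaled to the eye distance) are plausible assumptions, not the paper's exact ratios.

```python
# Sketch (assumed proportions): lip bounding box from eye positions.
def lip_region(left_eye, right_eye):
    """Return (x, y, w, h) of an estimated lip bounding box."""
    (lx, ly), (rx, ry) = left_eye, right_eye
    d = ((rx - lx) ** 2 + (ry - ly) ** 2) ** 0.5   # inter-eye distance
    cx, cy = (lx + rx) / 2.0, (ly + ry) / 2.0      # eye midpoint
    mouth_cx, mouth_cy = cx, cy + 1.1 * d          # mouth center below the eyes
    w, h = 1.0 * d, 0.5 * d                        # box scaled to face size
    return (int(mouth_cx - w / 2), int(mouth_cy - h / 2), int(w), int(h))

print(lip_region((40, 50), (80, 50)))   # → (40, 84, 40, 20)
```

Because this uses only two points and a handful of arithmetic operations, it costs almost nothing per frame, which is what makes it attractive on resource-limited hardware.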

Real-time Lip Region Detection for Lipreading in Mobile Device (모바일 장치에서의 립리딩을 위한 실시간 입술 영역 검출)

  • Kim, Young-Un;Kang, Sun-Kyung;Jung, Sung-Tae
    • Journal of the Korea Society of Computer and Information / v.14 no.4 / pp.39-46 / 2009
  • Many lip region detection methods have been developed for the PC environment, but the existing methods are difficult to run in real time on resource-limited mobile devices. To solve this problem, this paper proposes a real-time lip region detection method for lipreading on a mobile device. It detects the face region using adaptive face color information, and then detects the lip region using the geometric relation between the eyes and the lips. The proposed method was implemented on a smart phone with an Intel PXA270 embedded processor and 386 MB of memory. Experimental results show that the proposed method runs at 9.5 frames/sec and achieves a correct detection rate of 98.8% on 574 images.

Korean Lip-Reading: Data Construction and Sentence-Level Lip-Reading (한국어 립리딩: 데이터 구축 및 문장수준 립리딩)

  • Sunyoung Cho;Soosung Yoon
    • Journal of the Korea Institute of Military Science and Technology / v.27 no.2 / pp.167-176 / 2024
  • Lip-reading is the task of inferring the speaker's utterance from silent video by learning lip movements. It is very challenging due to the inherent ambiguities of lip movement, such as different characters producing the same lip appearance. Recent advances in deep learning models such as the Transformer and the Temporal Convolutional Network have improved lip-reading performance. However, most previous work deals with English lip-reading, which cannot be directly applied to Korean lip-reading, and there has been no large-scale Korean lip-reading dataset. In this paper, we introduce the first large-scale Korean lip-reading dataset, with more than 120k utterances collected from TV broadcasts covering news, documentaries, and dramas. We also present a preprocessing method which uniformly extracts a facial region of interest, and propose a grapheme-based Transformer model for sentence-level Korean lip-reading. We demonstrate through dataset statistics and experimental results that our dataset and model are appropriate for Korean lip-reading.
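The grapheme-unit modeling mentioned above requires decomposing Hangul syllables into their constituent graphemes (jamo). The standard Unicode arithmetic below is one way to produce such grapheme targets; whether the paper uses exactly this decomposition is an assumption, and the example word is arbitrary.

```python
# Sketch: decompose precomposed Hangul syllables into initial/medial/final jamo
# using the Unicode Hangul syllable block layout (base U+AC00, 588/28 strides).
CHO = list("ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ")          # 19 initial consonants
JUNG = list("ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ")      # 21 medial vowels
JONG = [""] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")  # 27 finals + none

def to_graphemes(text):
    """Decompose each Hangul syllable into its jamo; pass other characters through."""
    out = []
    for ch in text:
        code = ord(ch) - 0xAC00
        if 0 <= code < 11172:                      # precomposed syllable block
            out.append(CHO[code // 588])
            out.append(JUNG[(code % 588) // 28])
            if code % 28:
                out.append(JONG[code % 28])
        else:
            out.append(ch)
    return out

print(to_graphemes("립리딩"))   # → ['ㄹ', 'ㅣ', 'ㅂ', 'ㄹ', 'ㅣ', 'ㄷ', 'ㅣ', 'ㅇ']
```

Predicting graphemes rather than whole syllables keeps the output vocabulary small (a few dozen symbols instead of thousands of syllables), which matters when training data is limited.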