• Title/Summary/Keyword: lip information

Search results: 195

On Pattern Kernel with Multi-Resolution Architecture for a Lip Print Recognition (구순문 인식을 위한 복수 해상도 시스템의 패턴 커널에 관한 연구)

  • 김진옥;황대준;백경석;정진현
    • The Journal of Korean Institute of Communications and Information Sciences, v.26 no.12A, pp.2067-2073, 2001
  • Biometric systems are forms of technology that use unique human physical characteristics to automatically identify a person. They have sensors to pick up a physical characteristic, convert it into a digital pattern, and compare it with stored patterns for individual identification. However, lip-print recognition has been less developed than the recognition of other human physical attributes such as the fingerprint, voice patterns, retinal blood vessel patterns, or the face. Lip-print recognition with a CCD camera has the merit of being easily combined with other recognition systems such as retinal/iris and face recognition. A new method using a multi-resolution architecture is proposed to recognize a lip print from pattern kernels. A set of pattern kernels is a function of local lip-print masks; this function converts the information from a lip print into digital data. Recognition with the multi-resolution system is more reliable than recognition with a single-resolution system: the multi-resolution architecture reduces the false recognition rate from 15% to 4.7%. This paper shows that the lip print can be used effectively as a biometric measurement.

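The pattern-kernel idea above (local masks evaluated at several image resolutions, then compared against enrolled patterns) can be illustrated with a short sketch. This is a minimal illustration assuming OpenCV and NumPy; the orientation-based local masks, block size, and matching threshold are assumptions made for the example, not the paper's actual kernels.

```python
import cv2
import numpy as np

def pattern_kernel_features(lip_gray, levels=3, block=8):
    """Illustrative pattern-kernel features over a Gaussian pyramid.

    Each pyramid level is divided into block x block local masks, and the
    dominant groove orientation of each mask is recorded -- a stand-in
    for the paper's local lip-print pattern kernels.
    """
    features = []
    level = lip_gray.astype(np.float32)
    for _ in range(levels):
        gx = cv2.Sobel(level, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(level, cv2.CV_32F, 0, 1)
        h, w = level.shape
        for y in range(0, h - block + 1, block):
            for x in range(0, w - block + 1, block):
                sx = gx[y:y + block, x:x + block].sum()
                sy = gy[y:y + block, x:x + block].sum()
                features.append(np.arctan2(sy, sx))  # dominant orientation
        level = cv2.pyrDown(level)  # move to the next, coarser resolution
    return np.asarray(features)

def lip_prints_match(query, enrolled, threshold=0.3):
    """Accept when the mean angular difference across all levels is small.

    query/enrolled must come from equally sized images so the feature
    vectors align; the threshold is an illustrative value.
    """
    diff = np.abs(np.angle(np.exp(1j * (query - enrolled))))
    return bool(diff.mean() < threshold)
```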

Robustness of Bimodal Speech Recognition on Degradation of Lip Parameter Estimation Performance (음성인식에서 입술 파라미터 열화에 따른 견인성 연구)

  • Kim Jinyoung;Shin Dosung;Choi Seungho
    • Proceedings of the KSPS conference, 2002.11a, pp.205-208, 2002
  • Bimodal speech recognition based on lip reading has been studied as a representative method of speech recognition in noisy environments. There are three methods of integrating the speech and lip modalities: direct identification, separate identification, and dominant recording. In this paper we evaluate the robustness of lip reading methods under the assumption that the lip parameters are estimated with errors. Lip reading experiments show that the dominant recording approach is more robust than the other methods. In addition, a measure of lip parameter degradation is proposed; this measure can be used to determine the weighting values of the video information.

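A sketch of how such a degradation measure could drive the audio/video stream weights follows. Both the degradation score (deviation from a smoothed trajectory) and the weighting formula are illustrative assumptions for this example; the paper proposes its own measure.

```python
import numpy as np

def degradation_score(lip_params, smoothed_params):
    """Illustrative degradation measure: mean squared deviation of the
    raw lip parameters from their temporally smoothed trajectory
    (a larger value means noisier estimation)."""
    return float(np.mean((lip_params - smoothed_params) ** 2))

def combine_streams(audio_loglik, video_loglik, degradation, alpha=1.0):
    """Late fusion of per-class log-likelihoods, weighting the video
    stream down as its estimated degradation grows."""
    w_video = 0.5 / (1.0 + alpha * degradation)  # at most 0.5, shrinks with noise
    w_audio = 1.0 - w_video
    return w_audio * np.asarray(audio_loglik) + w_video * np.asarray(video_loglik)
```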

Lip Contour Extraction Using Active Shape Model Based on Energy Minimization (에너지 최소화 기반 능동형태 모델을 이용한 입술 윤곽선 추출)

  • Jang, Kyung-Shik
    • Journal of the Korea Institute of Information and Communication Engineering, v.10 no.10, pp.1891-1896, 2006
  • In this paper, we propose an improved Active Shape Model for extracting the lip contour. Lip deformation is modeled by a statistically deformable model based on the Active Shape Model. Because each point is moved independently using local profile information in the standard Active Shape Model, many errors may occur. To exploit global information, we define an energy function similar to that of the Active Contour Model, and the points are moved to the positions at which the total energy is minimized. Experiments on many lip images from the Tulip 1 database show that our method extracts the lip shape more accurately than the traditional ASM.

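The key step (moving contour points jointly so that a global energy decreases, rather than moving each point independently) can be sketched as follows. The internal/external energy terms and the greedy 3x3 search are illustrative assumptions in the spirit of snake-style minimization, not the paper's exact formulation; the contour is assumed to stay inside the image.

```python
import numpy as np

def contour_energy(points, edge_map, alpha=0.5, beta=0.5):
    """Snake-like energy of a closed lip contour.

    points: (N, 2) int array of (x, y) positions.
    internal: keeps neighbouring points evenly spaced (global smoothness).
    external: rewards points that lie on strong edges of edge_map.
    """
    nxt = np.roll(points, -1, axis=0)
    internal = np.sum((nxt - points) ** 2)
    xs, ys = points[:, 0], points[:, 1]
    external = -np.sum(edge_map[ys, xs])
    return alpha * internal + beta * external

def relax_contour(points, edge_map, iters=20):
    """Greedily move each point to the 3x3 neighbour that lowers the
    *total* energy, so the moves are no longer independent."""
    offsets = [np.array([dx, dy]) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
    for _ in range(iters):
        for i in range(len(points)):
            def energy_with(offset):
                trial = points.copy()
                trial[i] += offset
                return contour_energy(trial, edge_map)
            points[i] += min(offsets, key=energy_with)
    return points
```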
Pupil and Lip Detection using Shape and Weighted Vector based on Shape (형태와 가중치 벡터를 이용한 눈동자와 입술 검출)

  • Jang, Kyung-Shik
    • Journal of KIISE: Software and Applications, v.29 no.5, pp.311-318, 2002
  • In this paper, we propose an efficient method for detecting the pupils and lips in a human face. The pupils are detected by a cost function that uses features based on the eye's shape and the relation between pupil and eyebrow. The inner boundary of the lips is detected by weighted vectors based on the lip's shape and on the difference in gray level between the lips and the facial skin. These vectors extract four feature points of the lips: the top of the upper lip, the bottom of the lower lip, and the two corners. Experiments performed on many images show very encouraging results.

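Once a lip region has been segmented, extracting the four feature points named above is a purely geometric step. The sketch below assumes a binary lip mask is already available from the gray-level/weighted-vector stage, which the paper computes differently.

```python
import numpy as np

def four_lip_points(lip_mask):
    """Four lip feature points from a binary lip mask (nonzero = lip).

    Returns (left corner, right corner, top of upper lip,
    bottom of lower lip) as (x, y) tuples.
    """
    ys, xs = np.nonzero(lip_mask)
    left   = (int(xs.min()), int(ys[xs == xs.min()].mean()))
    right  = (int(xs.max()), int(ys[xs == xs.max()].mean()))
    top    = (int(xs[ys == ys.min()].mean()), int(ys.min()))
    bottom = (int(xs[ys == ys.max()].mean()), int(ys.max()))
    return left, right, top, bottom
```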
Lip Shape Synthesis of the Korean Syllable for Human Interface (휴먼인터페이스를 위한 한글음절의 입모양합성)

  • 이용동;최창석;최갑석
    • The Journal of Korean Institute of Communications and Information Sciences, v.19 no.4, pp.614-623, 1994
  • Synthesizing speech and facial images is necessary for a human interface in which man and machine converse as naturally as humans do. The target of this paper is synthesizing the facial images. In the synthesis of facial images, a three-dimensional (3-D) shape model of the face is used for realizing the facial expression variations and the lip shape variations. The various facial expressions and lip shapes harmonized with the syllables are synthesized by deforming the three-dimensional model on the basis of the facial muscular actions. Combinations of the consonants and the vowels make 14,364 syllables. The vowels dominate most lip shapes, but the consonants determine some of them. To determine the lip shapes, this paper investigates all the syllables and classifies the lip shape patterns according to the vowels and the consonants. As a result, the lip shapes are classified into 8 patterns for the vowels and 2 patterns for the consonants. The paper then determines synthesis rules for the classified lip shape patterns. This method permits us to obtain natural facial images with various facial expressions and lip shape patterns.

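The classification step (every syllable mapped to one of 8 vowel-driven lip patterns and 2 consonant-driven patterns) can be sketched with modern Unicode Hangul decomposition. The grouping below is illustrative, not the paper's actual table, and Unicode's 11,172 composed syllables differ from the 14,364 combinations counted in the paper.

```python
def lip_pattern(syllable: str) -> tuple[int, int]:
    """Return illustrative (vowel_pattern, consonant_pattern) indices
    for one composed Hangul syllable.

    Unicode composed syllables are laid out as
    0xAC00 + (choseong * 21 + jungseong) * 28 + jongseong.
    """
    code = ord(syllable) - 0xAC00
    assert 0 <= code < 11172, "not a composed Hangul syllable"
    choseong = code // (21 * 28)          # initial consonant index, 0..18
    jungseong = (code % (21 * 28)) // 28  # medial vowel index, 0..20

    vowel_pattern = jungseong % 8  # placeholder 8-way grouping of the 21 vowels
    # bilabials (ㅁ, ㅂ, ㅃ, ㅍ) close the lips; all other initials do not
    consonant_pattern = 0 if choseong in (6, 7, 8, 17) else 1
    return vowel_pattern, consonant_pattern

print(lip_pattern("마"), lip_pattern("가"))  # (0, 0) and (0, 1)
```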

Real Time Lip Reading System Implementation in Embedded Environment (임베디드 환경에서의 실시간 립리딩 시스템 구현)

  • Kim, Young-Un;Kang, Sun-Kyung;Jung, Sung-Tae
    • The KIPS Transactions: Part B, v.17B no.3, pp.227-232, 2010
  • This paper proposes a real-time lip reading method for the embedded environment. The embedded environment has limited resources compared to the existing PC environment, so it is hard to run a lip reading system built for the PC environment in real time on an embedded device. To solve this problem, this paper suggests methods for lip region detection, lip feature extraction, and spoken-word recognition that suit the embedded environment. First, the face region is detected using skin color information to locate the lip region reliably; the exact lip region is then found from the positions of both eyes in the detected face region and their geometric relations. To obtain features robust against lighting variation in changing surroundings, histogram matching, lip folding, and the RASTA filter were applied, and the features extracted by principal component analysis (PCA) were used for recognition. In an embedded environment with an 806 MHz CPU and 128 MB of RAM, the tests showed a processing time of 1.15 to 2.35 seconds per utterance and a recognition rate of 77% (139 of 180 words).

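Two of the cheap, embedded-friendly steps above can be sketched compactly: deriving a lip region from the eye positions by fixed geometry, and projecting the region onto a pre-trained PCA basis. The proportions and the PCA interface are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

def lip_roi_from_eyes(left_eye, right_eye):
    """Estimate a lip bounding box (x0, y0, x1, y1) from the eye centres,
    using illustrative face-geometry proportions."""
    le, re = np.asarray(left_eye, float), np.asarray(right_eye, float)
    eye_dist = np.linalg.norm(re - le)
    mid = (le + re) / 2.0
    lip_centre = mid + np.array([0.0, 1.1 * eye_dist])  # below the eye line
    half = np.array([0.5 * eye_dist, 0.25 * eye_dist])
    x0, y0 = (lip_centre - half).astype(int)
    x1, y1 = (lip_centre + half).astype(int)
    return x0, y0, x1, y1

def pca_features(lip_gray, mean_vec, basis):
    """Project a flattened lip image onto a pre-trained PCA basis.

    basis: (k, H*W) array of principal components; mean_vec: (H*W,)."""
    v = lip_gray.astype(np.float32).ravel() - mean_vec
    return basis @ v
```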
Real Time Speaker Close-Up System using The Lip Motion Informations (입술 움직임 정보를 이용한 실시간 화자 클로즈업 시스템 구현)

  • 권혁봉;장언동;윤태승;안재형
    • Journal of Korea Multimedia Society, v.4 no.6, pp.510-517, 2001
  • In this paper, we implement a real-time speaker close-up system that uses lip motion information from input images containing several people. After detecting the speaker in the moving pictures from one color CCD camera, the other camera closes up on the speaker using the lip motion information. The implemented system detects the face and lip area of each person by means of facial color and morphological information, and then identifies the speaker from the variation of the lip area. A PTZ (Pan/Tilt/Zoom) camera, controlled through an RS-232C serial port, is used to close up on the detected speaker. Consequently, we can exactly detect the speaker in input moving pictures containing more than three people.

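The speaker-selection rule (the person whose lip area varies the most is talking) and the serial PTZ command path can be sketched as below. The ASCII command format is a placeholder; real PTZ cameras speak vendor protocols such as Pelco-D or VISCA, and pyserial is assumed for the RS-232C link.

```python
import numpy as np
import serial  # pyserial

def pick_speaker(lip_areas):
    """lip_areas: (n_people, n_frames) array of per-frame lip-region
    pixel counts; the speaker shows the largest temporal variation."""
    return int(np.argmax(np.var(lip_areas, axis=1)))

def close_up(port, pan, tilt, zoom):
    """Send a pan/tilt/zoom command over RS-232C (placeholder syntax)."""
    with serial.Serial(port, baudrate=9600, timeout=1) as link:
        link.write(f"PTZ {pan} {tilt} {zoom}\r".encode())
```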

DTV Lip-Sync Test Using Embedded Audio-Video Time Indexed Signals (숨겨진 오디오 비디오 시간 인덱스 신호를 사용한 DTV 립싱크 테스트)

  • 한찬호;송규익
    • Journal of the Institute of Electronics Engineers of Korea SP, v.41 no.3, pp.155-162, 2004
  • This paper concentrates on a lip synchronization (lip sync) test for DTV with respect to the audio and video signals, using a finite digital bitstream. We propose a new lip sync test method that does not affect the current program, based on transient effect area test signals (TATS) and audio-video time-indexed lip sync test signals (TILS). The experimental results show that the time difference between the audio and video signals can be easily measured at any time from a captured oscilloscope waveform.

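The underlying measurement (recover the embedded time-index signal from each path, then read off their relative delay) can be sketched with a cross-correlation. The paper itself reads the offset from an oscilloscope, so this numeric version is only an assumed stand-in for the same idea.

```python
import numpy as np

def av_offset_seconds(audio_idx, video_idx, sample_rate):
    """Estimate the audio-video offset between two recovered time-index
    signals sampled at the same rate (positive = audio lags video)."""
    a = audio_idx - np.mean(audio_idx)
    v = video_idx - np.mean(video_idx)
    corr = np.correlate(a, v, mode="full")
    lag = int(np.argmax(corr)) - (len(v) - 1)
    return lag / sample_rate
```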
Performance Comparison and Verification of Lip Parameter Selection Methods in the Bimodal Speech Recognition System (입술 파라미터 선정에 따른 바이모달 음성인식 성능 비교 및 검증)

  • 박병구;김진영;임재열
    • The Journal of the Acoustical Society of Korea, v.18 no.3, pp.68-72, 1999
  • The choice of parameters from the various kinds of lip information and the robustness of lip parameter extraction play important roles in the performance of a bimodal speech recognition system. In this paper, lip parameters are extracted by an automatic extraction algorithm, and the inner lip parameters are shown to affect the recognition rate more than the outer lip parameters. The robustness of the automatic extraction method is also evaluated against a manual extraction algorithm.

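For concreteness, a sketch of the kind of geometric inner/outer lip parameters being compared is given below; the four width/height values are an illustrative parameterisation, not the paper's exact set.

```python
import numpy as np

def lip_parameters(outer_pts, inner_pts):
    """Width/height parameters from outer and inner lip contours.

    outer_pts, inner_pts: (N, 2) arrays of (x, y) contour points."""
    def width_height(pts):
        return (float(pts[:, 0].max() - pts[:, 0].min()),
                float(pts[:, 1].max() - pts[:, 1].min()))
    ow, oh = width_height(outer_pts)
    iw, ih = width_height(inner_pts)
    return {"outer_width": ow, "outer_height": oh,
            "inner_width": iw, "inner_height": ih}
```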

Lip Feature Extraction using Contrast of YCbCr (YCbCr 농도 대비를 이용한 입술특징 추출)

  • Kim, Woo-Sung;Min, Kyung-Won;Ko, Han-Seok
    • Proceedings of the IEEK Conference, 2006.06a, pp.259-260, 2006
  • Since audio speech recognition is affected by noise in real environments, visual speech recognition is used to support it. For visual speech recognition, this paper suggests lip feature extraction using two types of image segmentation and a reduced ASM. The input images are transformed into the YCbCr space, and the lips are segmented using the contrast of Y/Cb/Cr between the lips and the face. Subsequently, a lip-shape model trained by PCA is placed on the segmented lip region, and the lip features are extracted using the ASM.

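The segmentation idea (lips differ from skin in the chroma channels, with higher Cr and lower Cb) can be sketched with OpenCV; the Cr-Cb contrast map and Otsu threshold below are illustrative choices, not the paper's exact rule.

```python
import cv2

def segment_lips(bgr_face):
    """Rough lip mask from Cr/Cb contrast in a face image (BGR input)."""
    ycrcb = cv2.cvtColor(bgr_face, cv2.COLOR_BGR2YCrCb)
    _, cr, cb = cv2.split(ycrcb)
    lipness = cv2.subtract(cr, cb)              # large where lips likely are
    lipness = cv2.GaussianBlur(lipness, (5, 5), 0)
    _, mask = cv2.threshold(lipness, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask
```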