• Title/Summary/Keyword: Visual speech recognition


Design and Implementation of a Real-Time Lipreading System Using PCA & HMM (PCA와 HMM을 이용한 실시간 립리딩 시스템의 설계 및 구현)

  • Lee, Chi-Geun;Lee, Eun-Suk;Jung, Sung-Tae;Lee, Sang-Seol
    • Journal of Korea Multimedia Society / v.7 no.11 / pp.1597-1609 / 2004
  • Many lipreading systems have been proposed to compensate for the drop in speech recognition accuracy in noisy environments. Previous lipreading systems work only under specific conditions such as artificial lighting and a predefined background color. In this paper, we propose a real-time lipreading system that allows speaker motion and relaxes the restrictions on color and lighting conditions. The proposed system extracts the face and lip regions, together with the essential visual information, from a video sequence captured by a common PC camera, and recognizes uttered words from that information in real time. It uses a hue histogram model to extract the face and lip regions, a mean shift algorithm to track the face of a moving speaker, PCA (Principal Component Analysis) to extract the visual features for training and testing, and an HMM (Hidden Markov Model) as the recognition algorithm (a hedged sketch of this PCA-plus-HMM pipeline follows this entry). Experimental results show that the system achieves a recognition rate of 90% for speaker-dependent lipreading and increases the speech recognition rate up to 40~85% depending on the noise level when combined with audio speech recognition.

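The first entry names its building blocks (hue-based lip extraction, mean shift tracking, PCA features, HMM recognition) without implementation detail. The sketch below is not the authors' code but a minimal illustration of how PCA-reduced lip-region frames could feed per-word Gaussian HMMs; the feature dimensions, model sizes, and the `lip_sequences` input are assumed for illustration.

```python
# Sketch: PCA features + one Gaussian HMM per vocabulary word for isolated-word
# lipreading. Assumes lip-region crops have already been located (e.g. via a hue
# histogram model and mean shift tracking, as the abstract describes).
import numpy as np
from sklearn.decomposition import PCA
from hmmlearn.hmm import GaussianHMM

N_COMPONENTS = 20   # PCA dimensionality (assumed, not from the paper)
N_STATES = 5        # HMM states per word model (assumed)

def train_lipreader(lip_sequences):
    """lip_sequences: dict mapping word -> list of utterances, each utterance an
    array of shape (n_frames, height*width) of flattened lip crops."""
    all_frames = np.vstack([seq for seqs in lip_sequences.values() for seq in seqs])
    pca = PCA(n_components=N_COMPONENTS).fit(all_frames)

    models = {}
    for word, seqs in lip_sequences.items():
        feats = [pca.transform(seq) for seq in seqs]
        X = np.vstack(feats)
        lengths = [len(f) for f in feats]
        hmm = GaussianHMM(n_components=N_STATES, covariance_type="diag", n_iter=50)
        hmm.fit(X, lengths)          # one HMM per vocabulary word
        models[word] = hmm
    return pca, models

def recognize(pca, models, lip_sequence):
    """Pick the word whose HMM assigns the highest log-likelihood to the utterance."""
    feats = pca.transform(lip_sequence)
    return max(models, key=lambda w: models[w].score(feats))
```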

Ordering system for the disabled and the weak using a KIOSK with speech recognition technology (키오스크를 이용한 장애인 및 약자를 위한 음성인식 주문시스템)

  • Lee, Hyo-Jai;Hong, Changho;Cho, Sung Ho;Yoon, Chaiwon;Kim, Dongwan;Choi, Seunghwa
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.05a / pp.544-546 / 2021
  • Recently, the number of unmanned stores has been increasing due to COVID-19. In unmanned stores, payments are mainly made through kiosks, but they are not easy to use for some people with physical disabilities, including wheelchair users. Young children and the elderly also have difficulty using new technologies such as kiosks. To compensate for these problems, this study designs and implements a system that can take an order through a speech recognition function, in addition to the visual interface, when a user interacts with a kiosk (a hedged sketch of such an order flow follows this entry).

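The kiosk entry describes the goal (voice-driven ordering) but not how an utterance is mapped to menu items. Purely as an illustrative assumption, the sketch below matches a transcribed utterance against a menu by keyword; the menu, prices, and matching rule are hypothetical, and the speech-to-text step is assumed to happen upstream.

```python
# Sketch: mapping a recognized utterance to kiosk menu items. `transcript` is
# assumed to be the text output of a speech recognizer; the menu and the simple
# keyword-matching rule are illustrative only.
MENU = {
    "americano": 3000,
    "latte": 3500,
    "sandwich": 5000,
}

def parse_order(transcript: str):
    """Return (item, price) pairs for every menu item mentioned in the utterance."""
    text = transcript.lower()
    return [(item, price) for item, price in MENU.items() if item in text]

if __name__ == "__main__":
    order = parse_order("One americano and a sandwich, please")
    total = sum(price for _, price in order)
    print(order, "total:", total)   # [('americano', 3000), ('sandwich', 5000)] total: 8000
```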

Visual Voice Activity Detection and Adaptive Threshold Estimation for Speech Recognition (음성인식기 성능 향상을 위한 영상기반 음성구간 검출 및 적응적 문턱값 추정)

  • Song, Taeyup;Lee, Kyungsun;Kim, Sung Soo;Lee, Jae-Won;Ko, Hanseok
    • The Journal of the Acoustical Society of Korea / v.34 no.4 / pp.321-327 / 2015
  • In this paper, we propose an algorithm for robust Visual Voice Activity Detection (VVAD) to enhance speech recognition. Conventional VVAD algorithms detect visual speech frames by measuring the motion of the lip region with optical flow or with Chaos-inspired measures. Optical-flow-based VVAD is hard to adopt in driving scenarios because of its computational complexity, while the Chaos-theory-based method, although invariant to illumination changes, is sensitive to the translations caused by the driver's head movements. The proposed Local Variance Histogram (LVH) is robust to pixel intensity changes arising from both illumination change and translation. For improved performance under changing environments, we also adopt a novel threshold estimation based on the total variance change. Experimental results show that the proposed VVAD algorithm remains robust in various driving situations (a rough sketch of the local-variance feature follows this entry).
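
The VVAD entry names the Local Variance Histogram but not its exact computation. The sketch below is only one plausible reading: per-block intensity variances over the lip ROI are histogrammed, and a frame is flagged as visual speech when the histogram changes enough between consecutive frames; block size, bin settings, and the fixed threshold stand in for the paper's adaptive scheme and are assumptions.

```python
# Sketch: a local-variance-histogram style feature for visual voice activity
# detection. Block size, histogram range, and the frame-difference threshold are
# illustrative assumptions; the paper's exact LVH definition may differ.
import numpy as np

def local_variance_histogram(roi, block=8, bins=16, var_max=2000.0):
    """roi: 2-D grayscale lip region. Returns a normalized histogram of
    per-block pixel variances."""
    h, w = roi.shape
    variances = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            variances.append(roi[y:y + block, x:x + block].var())
    hist, _ = np.histogram(variances, bins=bins, range=(0.0, var_max))
    return hist / max(hist.sum(), 1)

def is_speech_frame(prev_roi, cur_roi, threshold=0.15):
    """Flag visual speech when the LVH moves enough between consecutive frames
    (L1 distance; the paper's adaptive threshold is replaced by a fixed value)."""
    diff = np.abs(local_variance_histogram(cur_roi) -
                  local_variance_histogram(prev_roi)).sum()
    return diff > threshold
```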

The Phonological and Orthographic activation in Korean Word Recognition(II) (한국어 단어 재인에서의 음운정보와 철자정보의 활성화(II))

  • Choi Wonil;Nam Kichun
    • Proceedings of the KSPS conference / 2003.10a / pp.33-36 / 2003
  • Two experiments were conducted to support the suggestion of Wonil Choi & Kichun Nam (2003) that the same information processing is used in both input modalities, visual and auditory. A primed lexical decision task was performed with pseudoword prime stimuli. No priming effect occurred in any experimental condition. This result may be interpreted as the facilitative visual information and the inhibitory phonological information cancelling each other out.


Speech Recognition and Lip Shape Feature Extraction for English Vowel Pronunciation of the Hearing-Impaired Based on SVM Technique (SVM 기법에 기초한 청각장애인의 영어모음 발음을 위한 음성 인식 및 입술형태 특징 추출)

  • Lee, Kun-Min;Han, Kyung-Im;Park, Hye-Jung
    • Journal of rehabilitation welfare engineering & assistive technology / v.11 no.3 / pp.247-252 / 2017
  • The purpose of this study is to suggest a visual teaching method for English vowel pronunciation, based on the SVM technique, for the hearing-impaired, who rely mostly on visual aids. By extracting phonetic features with the SVM technique from sounds that are hard to distinguish by ear, the lip shape for each vowel was refined. This refinement of vowel lip shapes is advantageous in that language learners can easily see the movement of the articulators by eye, which helps the hearing-impaired learn, and their teachers teach, English vowels (a hedged SVM classification sketch follows this entry).
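
The SVM entry does not specify the lip-shape feature vector or kernel. As an assumption-laden illustration rather than the paper's setup, the sketch below trains a scikit-learn SVC on generic lip-shape measurements (mouth width, height, aperture ratio) labeled with vowels; the features, kernel choice, and toy data are hypothetical.

```python
# Sketch: SVM classification of English vowels from lip-shape features.
# The feature set (width, height, aperture ratio) and RBF kernel are illustrative
# assumptions; the paper's actual features may differ.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_vowel_svm(features, labels):
    """features: array of shape (n_samples, n_lip_features); labels: vowel symbols."""
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
    model.fit(features, labels)
    return model

# Hypothetical usage with toy measurements (mouth_width, mouth_height, aperture_ratio):
X = np.array([[4.2, 1.1, 0.26], [3.1, 2.4, 0.77], [4.0, 1.0, 0.25], [3.0, 2.5, 0.83]])
y = np.array(["/i/", "/a/", "/i/", "/a/"])
svm = train_vowel_svm(X, y)
print(svm.predict([[3.9, 1.2, 0.31]]))   # expected to lean toward "/i/"
```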

The Neighborhood Effect in Korean Visual Word Recognition (한국어 시각단어재인에서 나타나는 이웃효과)

  • Kwon, You-An;Cho, Hyae-Suk;Kim, Choong-Myung;Nam, Ki-Chun
    • MALSORI / no.60 / pp.29-45 / 2006
  • We investigated whether the first syllable plays an important role in lexical access during Korean visual word recognition. One lexical decision task (LDT) and two form-primed LDT experiments examined the nature of the syllabic neighborhood effect. In Experiment 1, syllabic neighborhood density and syllabic neighborhood frequency were manipulated; lexical decision latencies were influenced only by syllabic neighborhood frequency. Experiment 2 aimed to confirm these results with a form-primed LDT. Lexical decision latency was slower in the form-related condition than in the form-unrelated condition, and the effect of syllabic neighborhood density was significant only in the form-related condition, which means that the first syllable plays an important role in sub-lexical processing. In Experiment 3, we conducted another form-primed LDT manipulating the number of syllabic neighbors among words with a higher-frequency neighbor. The interaction of syllabic neighborhood density and form relation was significant, confirming that, at the lexical level, words with a higher-frequency neighbor are inhibited more by neighbors sharing the first syllable than words without one. These findings suggest that the first syllable is the unit of the neighborhood and that the syllable is the unit of sub-lexical representation in Korean (a toy computation of syllabic neighborhood density follows this entry).

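Syllabic neighborhood density and frequency are the variables manipulated in the entry above. As a small illustrative aside only (the abstract gives neither stimuli nor lexicon), the sketch below counts, for a target word, how many entries in a toy frequency lexicon share its first syllable and whether any of them has a higher frequency; the lexicon and counts are hypothetical.

```python
# Sketch: first-syllable neighborhood measures over a toy frequency lexicon.
# In Korean orthography the first syllable is a single Hangul block, so word[0]
# suffices here; the entries and frequencies are purely illustrative.
LEXICON = {
    "사람": 9500, "사랑": 8700, "사과": 3100, "나무": 4200, "나라": 7800,
}

def syllabic_neighbors(target: str, lexicon: dict):
    """Words (other than the target) sharing the target's first syllable."""
    return [w for w in lexicon if w != target and w[0] == target[0]]

def neighborhood_measures(target: str, lexicon: dict):
    neighbors = syllabic_neighbors(target, lexicon)
    density = len(neighbors)
    has_higher_freq_neighbor = any(lexicon[w] > lexicon[target] for w in neighbors)
    return density, has_higher_freq_neighbor

print(neighborhood_measures("사과", LEXICON))   # (2, True): 사람 and 사랑 are higher-frequency neighbors
```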

Robustness of Lipreading against the Variations of Rotation, Translation and Scaling

  • Min, Duk-Soo;Kim, Jin-Young;Park, Seung-Ho;Kim, Ki-Jung
    • Proceedings of the IEEK Conference / 2000.07a / pp.15-18 / 2000
  • In this study, we improve the performance of a visual speech recognition system that relies on lip movements. The paper focuses on the robustness of the word recognition system to rotation, translation, and scaling (RTS) of the lip images. Different lipreading methods were used to estimate the stability of recognition performance. In particular, we work out a log-polar mapping, related to the Mellin transform and quasi-RTS-invariant approaches in machine vision (a hedged log-polar sketch follows this entry). Word recognition results are reported with an HMM (Hidden Markov Model) recognition system.

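The Mellin/log-polar entry gives no implementation of the mapping it relies on. The sketch below is one plausible rendering in plain NumPy: the lip image is resampled onto a log-radius/angle grid around its center, so that scaling and rotation of the input become translations of the output, which is the quasi-RTS invariance the abstract refers to; grid sizes and the choice of center are assumptions.

```python
# Sketch: log-polar resampling of a lip image. Rotation of the input becomes a
# shift along the angle axis and scaling a shift along the log-radius axis.
# Output grid size and the use of the image center are illustrative assumptions.
import numpy as np

def log_polar(image, n_radii=64, n_angles=64):
    """image: 2-D grayscale array. Returns an (n_radii, n_angles) log-polar map
    sampled around the image center with nearest-neighbour lookup."""
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    max_radius = min(cy, cx)

    # Log-spaced radii from 1 pixel out to the largest inscribed circle.
    radii = np.exp(np.linspace(0.0, np.log(max_radius), n_radii))
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)

    r, a = np.meshgrid(radii, angles, indexing="ij")
    ys = np.clip(np.round(cy + r * np.sin(a)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + r * np.cos(a)).astype(int), 0, w - 1)
    return image[ys, xs]

# Hypothetical usage: per-frame feature vectors for the HMM recognizer could be
# built by flattening or projecting log_polar(lip_frame) for each frame.
```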

Recognition of Korean Vowels using Bayesian Classification with Mouth Shape (베이지안 분류 기반의 입 모양을 이용한 한글 모음 인식 시스템)

  • Kim, Seong-Woo;Cha, Kyung-Ae;Park, Se-Hyun
    • Journal of Korea Multimedia Society / v.22 no.8 / pp.852-859 / 2019
  • With the development of IT technology and smart devices, various applications utilizing image information are being developed. To provide an intuitive interface for pronunciation recognition, there is a growing need for research that recognizes pronunciation from mouth feature values. In this paper, we propose a system that distinguishes Korean vowel pronunciations by detecting feature points of the lip region in images and applying a Bayesian learning model. Because the recognition is based on Bayes' theorem, accuracy can be improved by accumulating input data, in both speaker-independent and speaker-dependent settings, even with a small amount of training data. Experimental results show that Korean vowels can be distinguished effectively by applying probability-based Bayesian classification to visual information such as mouth shape features alone (a hedged Bayesian classification sketch follows this entry).
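
The Bayesian vowel entry specifies classification by Bayes' theorem over lip feature points but not the likelihood model. As one possibility only, the sketch below uses scikit-learn's Gaussian naive Bayes and shows how accumulating input data refines the model via partial_fit; the features, labels, and toy values are assumptions.

```python
# Sketch: Bayes'-theorem classification of Korean vowels from mouth-shape features.
# GaussianNB stands in for the paper's Bayesian model; the feature triplet
# (mouth_width, mouth_height, lip_rounding) and all numbers are toy values.
import numpy as np
from sklearn.naive_bayes import GaussianNB

VOWELS = np.array(["ㅏ", "ㅣ", "ㅜ", "ㅗ", "ㅡ"])
clf = GaussianNB()

# Initial batch of labeled mouth-shape measurements.
X0 = np.array([[3.0, 2.6, 0.2], [3.8, 1.0, 0.1], [2.0, 1.8, 0.9]])
y0 = np.array(["ㅏ", "ㅣ", "ㅜ"])
clf.partial_fit(X0, y0, classes=VOWELS)

# Accumulating further input refines the per-class means and variances, which is
# the "improve accuracy by accumulating data" property the abstract mentions.
X1 = np.array([[3.1, 2.5, 0.25], [3.9, 1.1, 0.12]])
y1 = np.array(["ㅏ", "ㅣ"])
clf.partial_fit(X1, y1)

print(clf.predict([[3.05, 2.55, 0.22]]))   # expected: ['ㅏ']
```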

Emotion Recognition of Low Resource (Sindhi) Language Using Machine Learning

  • Ahmed, Tanveer;Memon, Sajjad Ali;Hussain, Saqib;Tanwani, Amer;Sadat, Ahmed
    • International Journal of Computer Science & Network Security / v.21 no.8 / pp.369-376 / 2021
  • One of the most active areas of research in affective computing and signal processing is emotion recognition. This paper proposes emotion recognition for the low-resource Sindhi language. The uniqueness of this work is that it examines the emotions of a language for which no publicly accessible dataset currently exists. The proposed effort provides a dataset named MAVDESS (Mehran Audio-Visual Database of Emotional Speech in Sindhi) for the academic community of the Sindhi language, which is mainly spoken in Pakistan but for which little generic machine learning data is accessible. Furthermore, the various emotions in the Sindhi recordings of MAVDESS were analyzed and annotated using features such as pitch, volume, and base, together with toolkits such as OpenSmile and Scikit-Learn, and classification schemes such as LR, SVC, DT, and KNN implemented in Python for training (a hedged classifier-comparison sketch follows this entry). The dataset can be accessed via https://doi.org/10.5281/zenodo.5213073.
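
The Sindhi emotion entry lists Scikit-Learn with LR, SVC, DT, and KNN but no training recipe. Under assumed inputs (a matrix of OpenSmile-style acoustic descriptors per utterance and emotion labels), the sketch below cross-validates those four classifiers for comparison; it is illustrative only, not the paper's pipeline.

```python
# Sketch: comparing the classifiers named in the abstract (LR, SVC, DT, KNN) on
# acoustic emotion features. X is assumed to hold OpenSmile-style descriptors per
# utterance and y the corresponding emotion labels; nothing here is from the paper.
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

def compare_classifiers(X, y, folds=5):
    models = {
        "LR": LogisticRegression(max_iter=1000),
        "SVC": SVC(kernel="rbf"),
        "DT": DecisionTreeClassifier(),
        "KNN": KNeighborsClassifier(n_neighbors=5),
    }
    results = {}
    for name, clf in models.items():
        pipe = make_pipeline(StandardScaler(), clf)
        results[name] = cross_val_score(pipe, X, y, cv=folds).mean()
    return results

# Hypothetical usage once features and labels have been extracted:
# print(compare_classifiers(features, labels))
```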

The Neighborhood Effects in Korean Word Recognition Using Computation Model (계산주의적 모델을 이용한 한국어 시각단어 재인에서 나타나는 이웃효과)

  • Park, Ki-Nam;Kwon, You-An;Lim, Heui-Seok;Nam, Ki-Chun
    • Proceedings of the KSPS conference / 2007.05a / pp.295-297 / 2007
  • This study proposes a computational model to investigate the roles of phonological and orthographic information in visual word recognition, as part of language information processing, and the representation types of the mental lexicon. The computational model reproduced the phonological and orthographic neighborhood effects observed in Korean word recognition and provided evidence implying that the mental lexicon is represented phonologically during Korean word recognition.
