• Title/Summary/Keyword: Vocabulary recognition

Search Results: 221

Speech Interactive Agent on Car Navigation System Using Embedded ASR/DSR/TTS

  • Lee, Heung-Kyu;Kwon, Oh-Il;Ko, Han-Seok
    • Speech Sciences, v.11 no.2, pp.181-192, 2004
  • This paper presents an efficient speech interactive agent that renders smooth car navigation and telematics services by employing embedded automatic speech recognition (ASR), distributed speech recognition (DSR), and text-to-speech (TTS) modules, all while enabling safe driving. A speech interactive agent is essentially a conversational tool providing command and control functions to drivers, such as enabling navigation tasks, audio/video manipulation, and e-commerce services through natural voice/response interactions between user and interface. While the benefits of automatic speech recognition and speech synthesis have become well known, the hardware resources involved are often limited and the internal communication protocols are too complex to achieve real-time responses. As a result, performance degradation always exists in the embedded hardware system. To implement a speech interactive agent that accommodates user commands in real time, we propose to optimize the hardware-dependent architectural code for speed-up. In particular, we provide a composite solution through memory reconfiguration and efficient arithmetic-operation conversion, as well as an effective out-of-vocabulary rejection algorithm, all made suitable for system operation under limited resources.

  • PDF

Semantic Ontology Speech Recognition Performance Improvement using ERB Filter (ERB 필터를 이용한 시맨틱 온톨로지 음성 인식 성능 향상)

  • Lee, Jong-Sub
    • Journal of Digital Convergence, v.12 no.10, pp.265-270, 2014
  • Existing speech recognition algorithms have several problems: they cannot distinguish the order of vocabulary, voice detection becomes inaccurate under noise as the recognition environment changes, and retrieval systems return results that mismatch the user's request because keywords carry multiple meanings. In this article, we propose an event-based semantic ontology inference model in which the speech recognition features are extracted using an ERB filter. The proposed model was evaluated using train-station and train noise. Noise removal was performed on signals at SNRs of -10 dB and -5 dB. Distortion measurements confirmed performance improvements of 2.17 dB and 1.31 dB, respectively.
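For context (a standard definition, not taken from the paper above): the equivalent rectangular bandwidth (ERB) scale of Glasberg and Moore is the usual basis for ERB filterbanks, with channel centre frequencies spaced uniformly on the ERB-rate scale. A minimal sketch:

```python
import math

def erb_bandwidth(f_hz):
    # Glasberg & Moore (1990): ERB in Hz at centre frequency f
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

def hz_to_erb_rate(f_hz):
    # ERB-rate scale: number of ERBs below frequency f
    return 21.4 * math.log10(1.0 + 0.00437 * f_hz)

def erb_rate_to_hz(erbs):
    # inverse of hz_to_erb_rate
    return (10.0 ** (erbs / 21.4) - 1.0) / 0.00437

def erb_center_frequencies(f_min, f_max, n_channels):
    # channel centres spaced uniformly on the ERB-rate scale
    lo, hi = hz_to_erb_rate(f_min), hz_to_erb_rate(f_max)
    return [erb_rate_to_hz(lo + i * (hi - lo) / (n_channels - 1))
            for i in range(n_channels)]
```

How the paper combines such a filterbank with its ontology model is not detailed in the abstract; the sketch only covers the filter spacing itself.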

Selective Speech Feature Extraction using Channel Similarity in CHMM Vocabulary Recognition (CHMM 어휘인식에서 채널 유사성을 이용한 선택적 음성 특징 추출)

  • Oh, Sang Yeon
    • Journal of Digital Convergence, v.11 no.10, pp.453-458, 2013
  • HMM speech recognition systems have a few weaknesses, including failure to recognize speech when environmental noise and other voices are mixed in. In this paper, we propose a speech feature extraction method using CHMM that extracts a selected target voice from a mixture of voices and noise. We use channel similarity and correlation to compose the selective speech extraction. The proposed method was validated by showing that the average separation distortion decreased by 0.430 dB, demonstrating that the performance of the selective feature extraction is better than that of other systems.

Status Report on the Korean Speech Recognition Platform (한국어 음성인식 플랫폼 개발현황)

  • Kwon, Oh-Wook;Kwon, Suk-Bong;Jang, Gyu-Cheol;Yun, Sung-rack;Kim, Yong-Rae;Jang, Kwang-Dong;Kim, Hoi-Rin;Yoo, Chang-Dong;Kim, Bong-Wan;Lee, Yong-Ju
    • Proceedings of the KSPS conference, 2005.11a, pp.215-218, 2005
  • This paper reports the current status of development of the Korean speech recognition platform (ECHOS). We implement new modules including ETSI feature extraction, backward search with a trigram, and utterance verification. The ETSI feature extraction module is implemented by converting the public software into an object-oriented program. We show that trigram language modeling in the backward search pass reduces the word error rate from 23.5% to 22% on a large-vocabulary continuous speech recognition task. We confirm the utterance verification module by examining word graphs with confidence scores.

  • PDF

A Study on the Rejection Capability Based on Anti-phone Modeling (반음소 모델링을 이용한 거절기능에 대한 연구)

  • 김우성;구명완
    • The Journal of the Acoustical Society of Korea, v.18 no.3, pp.3-9, 1999
  • This paper presents a study on rejection capability based on anti-phone modeling for a vocabulary-independent speech recognition system. The rejection system detects and rejects out-of-vocabulary words that are not among the candidate words defined when the recognizer is built. Rejection systems fall into two categories by implementation method: the keyword spotting method and the utterance verification method. The keyword spotting method uses an extra filler model as a candidate word alongside the keyword models. The utterance verification method constructs an anti-model for every phoneme and uses them to calculate a confidence score. We implemented an utterance verification algorithm that can be used with a vocabulary-independent speech recognizer. We also compared three kinds of means for calculating the confidence score and found that the geometric mean gave the best result. A sigmoid function is commonly used to normalize the confidence score; we examined the effect of its weight constant and determined the optimal value. We also compared cohort-set sizes, and the results showed that larger sets gave better results. Finally, we determined the optimal confidence-score threshold; using it, the overall recognition rate including rejection errors was about 76%. These results will be applied to a stock information system based on the speech recognizer, currently provided as an experimental service by Korea Telecom.

  • PDF
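The confidence-scoring pipeline described in the abstract above (per-phone anti-model likelihood ratios, a geometric mean across phones, sigmoid normalization, and a rejection threshold) can be sketched as follows. The function names, the weight constant `alpha`, and the threshold value are illustrative assumptions, not the paper's actual settings:

```python
import math

def phone_confidence(ll_phone, ll_anti):
    # log-likelihood ratio of a phone model against its anti-phone model
    return ll_phone - ll_anti

def word_confidence(phone_scores, alpha=1.0):
    # The arithmetic mean of log-ratios is the log of the geometric mean
    # of the likelihood ratios -- the combination the paper found best.
    mean_llr = sum(phone_scores) / len(phone_scores)
    # Sigmoid normalization to (0, 1); alpha is the weight constant
    # whose optimal value the paper determines experimentally.
    return 1.0 / (1.0 + math.exp(-alpha * mean_llr))

def accept(word_score, threshold=0.5):
    # Reject as out-of-vocabulary when below the tuned threshold.
    return word_score >= threshold
```

A hypothesized word would be accepted or rejected by feeding its per-phone log-likelihood ratios through `word_confidence` and comparing against the threshold.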

Korean Emotion Vocabulary: Extraction and Categorization of Feeling Words (한국어 감정표현단어의 추출과 범주화)

  • Sohn, Sun-Ju;Park, Mi-Sook;Park, Ji-Eun;Sohn, Jin-Hun
    • Science of Emotion and Sensibility, v.15 no.1, pp.105-120, 2012
  • This study aimed to develop a Korean emotion vocabulary list that functions as an important tool for understanding human feelings. The focus was on carefully extracting the most widely used feeling words and categorizing them into groups of emotions according to their meaning in real-life use. A total of 12 professionals (including graduate students majoring in Korean) took part in the study. Using the Korean 'word frequency list' developed by Yonsei University and various sorting processes, the study condensed the original 64,666 emotion words into a final list of 504 words. In the next step, 80 social work students evaluated each word and classified it into whichever of the following categories seemed most appropriate: 'happiness', 'sadness', 'fear', 'anger', 'disgust', 'surprise', 'interest', 'boredom', 'pain', 'neutral', and 'other'. Findings showed that, of the 504 feeling words, 426 expressed a single emotion, 72 reflected two emotions (i.e., the same word indicating two distinct emotions), and 6 showed three emotions. Among the 426 single-emotion words, 'sadness' was predominant, followed by 'anger' and 'happiness'. Most of the 72 two-emotion words combined 'anger' and 'disgust', followed by 'sadness' and 'fear', and 'happiness' and 'interest'. The significance of the study lies in the development of an adaptive list of Korean feeling words that can be meticulously combined with other emotion signals, such as facial expressions, to optimize emotion recognition research, particularly in the Human-Computer Interface (HCI) area. The identification of feeling words that connote more than one emotion is also noteworthy.

  • PDF

HMM-based Speech Recognition using FSVQ and Fuzzy Concept (FSVQ와 퍼지 개념을 이용한 HMM에 기초를 둔 음성 인식)

  • 안태옥
    • Journal of the Institute of Electronics Engineers of Korea SP, v.40 no.6, pp.90-97, 2003
  • This paper proposes speech recognition based on HMM (Hidden Markov Model) using FSVQ (First Section Vector Quantization) and a fuzzy concept. We generate a codebook for the first section and then obtain multi-observation sequences from that codebook, ordered by decreasing probabilistic value based on a fuzzy rule. These first-section observation sequences are trained, and at recognition time the word with the highest first-section probability is selected as the recognized word by the same concept. Train station names are used as the target recognition vocabulary, and LPC cepstrum coefficients are used as the feature parameters. Besides the speech recognition experiments with the proposed method, we ran the other methods under the same conditions and data. The experimental results show that the proposed HMM-based method using FSVQ and the fuzzy concept is superior to the others in recognition rate.

Noise Elimination Using Improved MFCC and Gaussian Noise Deviation Estimation

  • Sang-Yeob, Oh
    • Journal of the Korea Society of Computer and Information, v.28 no.1, pp.87-92, 2023
  • As speech recognition systems have continued to develop, the recognition rate for speech has improved rapidly, but such systems still cannot accurately recognize the voice because various voices mix with the noise of the usage environment. To increase the vocabulary recognition rate when processing speech with environmental noise, the noise must be removed. Even with existing AI models such as HMM, CHMM, GMM, and DNN, unexpected noise occurs or quantization noise is inherently added to the digital signal; when this happens, the source signal is altered or corrupted, which lowers the recognition rate. To solve this problem, the MFCC was improved so as to efficiently extract the features of the speech signal for each frame. To remove noise from the speech signal, a noise removal method using a Gaussian model with noise deviation estimation was improved and applied. The performance of the proposed model was evaluated using a cross-correlation coefficient to assess the accuracy of the speech. Evaluating the recognition rate of the proposed method confirmed that the difference in the average value of the correlation coefficient improved by 0.53 dB.
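The cross-correlation coefficient used for evaluation is, at zero lag, the ordinary normalized correlation between a reference signal and the processed signal. The abstract does not spell out the exact measure, so the following is a sketch under that assumption:

```python
import math

def correlation_coefficient(x, y):
    # normalized cross-correlation (zero lag) between two equal-length
    # signals: +1 for identical shape, -1 for inverted, 0 for unrelated
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den
```

Comparing this coefficient between (clean, noisy) and (clean, denoised) pairs gives the kind of before/after difference the abstract reports.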

Multi-layer Speech Processing System for Point-Of-Interest Recognition in the Car Navigation System (차량용 항법장치에서의 관심지 인식을 위한 다단계 음성 처리 시스템)

  • Bhang, Ki-Duck;Kang, Chul-Ho
    • Journal of Korea Multimedia Society, v.12 no.1, pp.16-25, 2009
  • In the car environment, where safety is the first priority, a large-vocabulary isolated-word recognition system for the POI domain is required as the optimal HMI technique. On a telematics terminal with tightly limited processing time and memory capacity, it is impossible to process more than 100,000 words with general speech recognition methods. Therefore, we propose a phoneme recognizer using a phonetic GMM together with a PDM Levenshtein distance in a multi-layer architecture for POI recognition on the telematics terminal. With the proposed methods, we obtained high performance on a telematics terminal with low processing speed and small memory capacity: a maximum recognition rate of 94.8% in an indoor environment and of 92.4% in the car navigation environment.

  • PDF
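The Levenshtein matching stage described above can be sketched as a plain dynamic-programming edit distance over phoneme sequences. The uniform costs and the `sub_cost` hook (where a phonetic-distance-weighted substitution cost, as the paper's PDM variant presumably uses, would plug in) are assumptions for illustration:

```python
def levenshtein(src, ref, sub_cost=lambda a, b: 0 if a == b else 1):
    # dynamic-programming edit distance between two phoneme sequences;
    # sub_cost is a hook where a phonetic-distance-weighted substitution
    # cost could replace the uniform 0/1 cost used here
    m, n = len(src), len(ref)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                       # delete all of src[:i]
    for j in range(n + 1):
        d[0][j] = j                       # insert all of ref[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1,                            # deletion
                          d[i][j - 1] + 1,                            # insertion
                          d[i - 1][j - 1] + sub_cost(src[i - 1], ref[j - 1]))
    return d[m][n]

def best_poi(recognized_phones, poi_lexicon):
    # rank POI entries by edit distance to the recognized phoneme string
    return min(poi_lexicon,
               key=lambda name: levenshtein(recognized_phones, poi_lexicon[name]))
```

In a multi-layer setup like the one described, a fast coarse pass would shortlist candidates before this distance is computed against the shortlist only.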

A Study on Speech Recognition in a Running Automobile (주행중인 자동차 환경에서의 음성인식 연구)

  • 양진우;김순협
    • The Journal of the Acoustical Society of Korea, v.19 no.5, pp.3-8, 2000
  • In this paper, we studied the design and implementation of a robust speech recognition system for the noisy car environment. The reference pattern used in the system is DMS (Dynamic Multi-Section). Two separate acoustic models are proposed, selected automatically according to the car noise environment: one for speech in a car moving below 80 km/h and one for speech above 80 km/h. PLP (Perceptual Linear Predictive) features of order 13 are used as the feature vector, and OSDP (One-Stage Dynamic Programming) is used for decoding. The system also supports editing the phone book for voice dialing. With a vocabulary of 33 words, the system yields a recognition rate of 89.75% for male speakers in speaker-independent (SI) mode in a car running on a cement expressway at over 80 km/h, and 92.29% on a paved expressway at over 80 km/h.

  • PDF