• Title/Summary/Keyword: Continuous Speech Recognition


Vector Quantizer Based Speaker Normalization for Continuous Speech Recognition (연속음성 인식기를 위한 벡터양자화기 기반의 화자정규화)

  • Shin Ok-keun
    • The Journal of the Acoustical Society of Korea / v.23 no.8 / pp.583-589 / 2004
  • Proposed is a vector-quantizer-based speaker normalization method for continuous speech recognition (CSR) systems in which no explicit acoustic information is used. The method, an improvement of a previously reported speaker normalization scheme for a simple digit recognizer, builds a canonical codebook by iterative training, increasing the codebook size after each iteration from a relatively small initial size. Once the codebook is established, each speaker's warp factor is estimated by exhaustively comparing warped versions of the speaker's utterances with the codebook. Two phone sets are used to estimate the warp factors: one consisting of vowels only, and the other composed of all phonemes. A piecewise linear warping function corresponding to the estimated warp factor is applied to warp the power spectrum of the utterance, and the warped feature vectors are then extracted to train and test the speech recognizer. The effectiveness of the method is investigated in recognition experiments using the TIMIT corpus and the HTK speech recognition toolkit. The experimental results show a recognition rate improvement comparable to that of the formant-based warping method.
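
As a rough illustration of the warp-factor search described in this abstract, the sketch below warps each spectrum frame with a set of candidate factors, extracts simple log-spectral features, and keeps the factor with the lowest VQ distortion against a canonical codebook. The candidate grid, the pivot frequency of the piecewise linear warp, and the log-spectral stand-in for real features (e.g. MFCCs computed with HTK) are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def piecewise_linear_warp(frame, alpha, f0=0.8):
    """Warp the frequency axis of one power-spectrum frame by factor alpha.
    Frequencies up to f0 (fraction of Nyquist) are scaled by alpha; the rest
    is mapped linearly so the warped axis still ends at Nyquist."""
    f = np.linspace(0.0, 1.0, len(frame))
    warped_f = np.where(f <= f0,
                        alpha * f,
                        alpha * f0 + (f - f0) * (1.0 - alpha * f0) / (1.0 - f0))
    return np.interp(warped_f, f, frame)        # resample the spectrum

def log_spectral_features(frames):
    """Stand-in for real feature extraction (e.g. MFCCs via HTK)."""
    return np.log(frames + 1e-8)

def vq_distortion(features, codebook):
    """Mean squared distance from each feature vector to its nearest codeword."""
    d = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
    return float(np.mean(d.min(axis=1) ** 2))

def estimate_warp_factor(frames, codebook, candidates=np.arange(0.88, 1.13, 0.02)):
    """Exhaustive search over candidate warp factors, as the abstract describes."""
    costs = []
    for alpha in candidates:
        warped = np.stack([piecewise_linear_warp(fr, alpha) for fr in frames])
        costs.append(vq_distortion(log_spectral_features(warped), codebook))
    return candidates[int(np.argmin(costs))]

# toy usage with random data in place of real spectra and a trained codebook
frames = np.abs(np.random.randn(50, 129))
codebook = log_spectral_features(np.abs(np.random.randn(64, 129)))
print(estimate_warp_factor(frames, codebook))
```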

Speech Data Collection for Korean Speech Recognition (한국어 음성인식을 위한 음성 데이터 수집)

  • Park, Jong-Ryeal;Kwon, Oh-Wook;Kim, Do-Yeong;Choi, In-Jeong;Jeong, Ho-Young;Un, Chong-Kwan
    • The Journal of the Acoustical Society of Korea / v.14 no.4 / pp.74-81 / 1995
  • This paper describes the development of speech databases for the Korean language constructed at the Communications Research Laboratory at KAIST. The procedure and environment used to construct the speech databases are presented in detail, along with their phonetic and linguistic properties. The databases are intended for use in designing and evaluating speech recognition algorithms. They consist of five different sets of speech material: trade-related continuous speech with 3,000 words, variable-length connected digits, 75 phoneme-balanced isolated words, 500 isolated Korean provincial names, and Korean A-set words.

Study on the Recognition of Spoken Korean Continuous Digits Using Phone Network (음성망을 이용한 한국어 연속 숫자음 인식에 관한 연구)

  • Lee, G.S.;Lee, H.J.;Byun, Y.G.;Kim, S.H.
    • Proceedings of the KIEE Conference / 1988.07a / pp.624-627 / 1988
  • This paper describes the implementation of speaker-dependent recognition of Korean spoken continuous digits. The recognition system is divided into two parts: an acoustic-phonetic processor and a lexical decoder. The acoustic-phonetic processor calculates feature vectors from the input speech signal and then performs frame labelling and phone labelling. Frame labelling uses a Bayesian classification method, and phone labelling uses the labelled frames and a posteriori probabilities. The lexical decoder accepts segments (phones) from the acoustic-phonetic processor and decodes their lexical structure through a phone network constructed from the phonetic representations of the ten digits. The experiment was carried out with two sets of 4-digit continuous utterances, each set composed of 35 patterns. An evaluation of the system yielded a pattern accuracy of about 80 percent, resulting from a word accuracy of about 95 percent.
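
To make the lexical-decoding step concrete, here is a toy sketch, not the paper's system: it walks candidate digit pronunciations over a sequence of phone labels produced by a (not shown) acoustic-phonetic processor and returns the lowest-cost digit sequence. The phone inventory and pronunciations are made-up placeholders, and the unit mismatch costs stand in for the a posteriori probabilities used in the paper.

```python
# Toy lexical decoder: match a phone-label sequence against concatenations of
# digit pronunciations with a simple forward dynamic program.

DIGIT_PHONES = {
    "il":  ["i", "l"],          # 1
    "i":   ["i"],               # 2
    "sam": ["s", "a", "m"],     # 3
    "sa":  ["s", "a"],          # 4
}

def decode(segments, lexicon=DIGIT_PHONES):
    """Return the digit-word sequence whose phones best match `segments`."""
    n = len(segments)
    frontier = {0: (0, [])}                     # position -> (cost, words so far)
    for pos in range(n):
        if pos not in frontier:
            continue
        cost, words = frontier[pos]
        for word, phones in lexicon.items():
            span = segments[pos:pos + len(phones)]
            mismatches = sum(a != b for a, b in zip(span, phones))
            mismatches += len(phones) - len(span)     # penalty for running past the end
            end = min(pos + len(phones), n)
            candidate = (cost + mismatches, words + [word])
            if end not in frontier or candidate[0] < frontier[end][0]:
                frontier[end] = candidate
    return frontier.get(n, (float("inf"), []))[1]

print(decode(["s", "a", "m", "i", "l"]))        # -> ['sam', 'il']
```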

Statistical Analysis of Korean Phonological Rules Using Automatic Phonetic Transcription (발음열 자동 변환을 이용한 한국어 음운 변화 규칙의 통계적 분석)

  • Lee Kyong-Nim;Chung Minhwa
    • Proceedings of the KSPS conference / 2002.11a / pp.81-85 / 2002
  • We present a statistical analysis of Korean phonological variations using automatically generated phonetic transcriptions. We have constructed an automatic generation system for Korean pronunciation variants by applying rules that model obligatory and optional phonemic changes and allophonic changes. These rules are derived from knowledge-based morphophonological analysis and the government standard pronunciation rules. The system is optimized for continuous speech recognition by generating phonetic transcriptions for training and by constructing a pronunciation dictionary for recognition. In this paper, we describe Korean phonological variations by analyzing the statistics of phonemic change rule applications over the 60,000 sentences in the Samsung PBS (Phonetically Balanced Sentence) Speech DB. Our results show that the most frequent obligatory phonemic variations are, in order, liaison, tensification, aspiration, and nasalization of obstruents, and that the most frequent optional phonemic variations are, in order, deletion of the initial consonant 'h', insertion of a final consonant with the same place of articulation as the following consonant, and deletion of a final consonant with the same place of articulation as the following consonant. These statistics can be used to improve the performance of speech recognition systems.
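
A minimal sketch of the rule-application counting that underlies such statistics is given below, assuming toy romanized rules; real Korean phonological rules operate on jamo sequences, so only the mechanism (apply rules while tallying how often each fires) corresponds to the system described here.

```python
import re
from collections import Counter

# Toy romanized stand-ins for phonological rules; only the counting mechanism
# is the point, not the linguistic content.
RULES = [
    ("nasalization",  re.compile(r"kn"), "ngn"),
    ("tensification", re.compile(r"kt"), "ktt"),
    ("h_deletion",    re.compile(r"h(?=[aeiou])"), ""),
]

def transcribe(word, counts):
    """Apply each rule left to right, tallying how often it fires."""
    for name, pattern, repl in RULES:
        word, n = pattern.subn(repl, word)
        counts[name] += n
    return word

counts = Counter()
corpus = ["hakno", "paktal", "hana"]            # made-up romanized forms
variants = [transcribe(w, counts) for w in corpus]
print(variants, dict(counts))
```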

Status Report on the Korean Speech Recognition Platform (한국어 음성인식 플랫폼 개발현황)

  • Kwon, Oh-Wook;Kwon, Suk-Bong;Jang, Gyu-Cheol;Yun, Sung-rack;Kim, Yong-Rae;Jang, Kwang-Dong;Kim, Hoi-Rin;Yoo, Chang-Dong;Kim, Bong-Wan;Lee, Yong-Ju
    • Proceedings of the KSPS conference / 2005.11a / pp.215-218 / 2005
  • This paper reports the current status of development of the Korean speech recognition platform (ECHOS). We implement new modules including ETSI feature extraction, backward search with a trigram language model, and utterance verification. The ETSI feature extraction module is implemented by converting the public software into an object-oriented program. We show that trigram language modeling in the backward search pass reduces the word error rate from 23.5% to 22% on a large-vocabulary continuous speech recognition task. We validate the utterance verification module by examining word graphs annotated with confidence scores.
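
As an illustration of what a trigram pass over recognizer hypotheses buys, the sketch below rescores an N-best list with a small backoff-style trigram model; the probability tables, backoff weight, and LM scale are invented and unrelated to ECHOS's actual implementation or data structures.

```python
import math

TRIGRAM = {("<s>", "i", "am"): 0.4, ("i", "am", "here"): 0.5}
BIGRAM  = {("<s>", "i"): 0.3, ("i", "am"): 0.4, ("am", "here"): 0.3, ("am", "hear"): 0.05}
UNIGRAM = {"i": 0.1, "am": 0.05, "here": 0.02, "hear": 0.02}

def lm_logprob(words, backoff=0.4):
    """Log P(words) under a stupid-backoff-style trigram model."""
    words = ["<s>", "<s>"] + words
    total = 0.0
    for i in range(2, len(words)):
        tri, bi, uni = tuple(words[i-2:i+1]), tuple(words[i-1:i+1]), words[i]
        p = (TRIGRAM.get(tri)
             or backoff * BIGRAM.get(bi, 0.0)
             or backoff * backoff * UNIGRAM.get(uni, 1e-6))
        total += math.log(p)
    return total

def rescore(nbest, acoustic_scores, lm_weight=10.0):
    """Combine acoustic log-scores with trigram log-probabilities and re-rank."""
    scored = [(a + lm_weight * lm_logprob(h), h)
              for h, a in zip(nbest, acoustic_scores)]
    return max(scored)[1]

print(rescore([["i", "am", "here"], ["i", "am", "hear"]], [-120.0, -119.5]))
```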

Enhancing Speech Recognition with Whisper-tiny Model: A Scalable Keyword Spotting Approach (Whisper-tiny 모델을 활용한 음성 분류 개선: 확장 가능한 키워드 스팟팅 접근법)

  • Shivani Sanjay Kolekar;Hyeonseok Jin;Kyungbaek Kim
    • Proceedings of the Korea Information Processing Society Conference / 2024.05a / pp.774-776 / 2024
  • The effective implementation of automatic speech recognition (ASR) systems necessitates the deployment of sophisticated keyword spotting models that are both responsive and resource-efficient. Initial local detection of user interactions is crucial because it allows selective transmission of audio data to cloud services, thereby reducing operational costs and mitigating the privacy risks associated with continuous data streaming. In this paper, we address these needs and propose fine-tuning the Whisper-tiny model to recognize keywords from the Google Speech Commands dataset, which includes 65,000 audio clips of keyword commands. By adapting the model's encoder and appending a lightweight classification head, we ensure that it operates within the limited resource constraints of local devices. The proposed model achieves a test accuracy of 92.94%. This architecture demonstrates its efficiency as an on-device model under stringent resource constraints, leading to enhanced accessibility in everyday speech recognition applications.
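
A hedged sketch of the described architecture, the Whisper-tiny encoder with a lightweight classification head, is shown below using the Hugging Face transformers API; the mean pooling, the frozen-encoder option, and the 35-class output size are assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn
from transformers import WhisperModel, WhisperFeatureExtractor

class WhisperTinyKWS(nn.Module):
    def __init__(self, num_keywords=35, freeze_encoder=True):
        super().__init__()
        # reuse only the encoder of Whisper-tiny
        self.encoder = WhisperModel.from_pretrained("openai/whisper-tiny").encoder
        if freeze_encoder:                            # optionally keep the encoder fixed
            for p in self.encoder.parameters():
                p.requires_grad = False
        hidden = self.encoder.config.d_model          # 384 for whisper-tiny
        self.head = nn.Linear(hidden, num_keywords)   # lightweight classification head

    def forward(self, input_features):
        # input_features: (batch, 80, 3000) log-Mel features from the extractor
        states = self.encoder(input_features).last_hidden_state   # (B, T, 384)
        pooled = states.mean(dim=1)                   # average pooling over time
        return self.head(pooled)                      # keyword logits

extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-tiny")
model = WhisperTinyKWS()
waveform = torch.zeros(16000)                         # one second of silence at 16 kHz
feats = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
logits = model(feats.input_features)                  # shape (1, 35)
```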

Postprocessing of Speech Recognition Using the Morphological Analysis Technique (형태소 분석 기법을 이용한 음성 인식 후처리)

  • 박미성;김미진;김계성;김성규;이문희;최재혁;이상조
    • Journal of the Korean Institute of Telematics and Electronics C / v.36C no.4 / pp.65-77 / 1999
  • Two problems must be addressed to graft continuous speech recognition results onto natural language processing techniques. First, the spoken unit is not consistent with the text's spacing unit. Second, phonological alternation phenomena occur within and across morphemes when words are pronounced. In this paper, we implement a postprocessing system for continuous speech recognition that solves these two problems using an eojeol generator and a syllable recoverer, morphologically analyzes the generated results, and then corrects the failed results through a corrector. Our system was tested on two kinds of speech corpora: a primary school textbook and an editorial corpus. The success rate for the former is 93.72%, and that for the latter is 92.26%. The experimental results verify that our system is stable regardless of the type of corpus.
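
The flow of such a postprocessor can be sketched as below with placeholder components: candidate eojeol segmentations are generated from the recognized syllable stream, a dictionary lookup stands in for morphological analysis, and a pass-through corrector stands in for the paper's corrector.

```python
LEXICON = {"나는", "학교에", "간다"}                 # tiny stand-in dictionary

def segmentations(syllables):
    """Yield every way to split the syllable string into eojeol candidates."""
    if not syllables:
        yield []
        return
    for i in range(1, len(syllables) + 1):
        head, rest = syllables[:i], syllables[i:]
        for tail in segmentations(rest):
            yield [head] + tail

def analyze(eojeols):
    """Stand-in morphological analysis: every unit must be in the lexicon."""
    return all(e in LEXICON for e in eojeols)

def correct(syllables):
    return syllables                                 # placeholder corrector: pass through

def postprocess(syllables):
    accepted = [seg for seg in segmentations(syllables) if analyze(seg)]
    if accepted:
        return " ".join(max(accepted, key=len))      # if several analyze, take the finest
    return correct(syllables)

print(postprocess("나는학교에간다"))                  # -> "나는 학교에 간다"
```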

The Effect of Strong Syllables on Lexical Segmentation in English Continuous Speech by Korean Speakers (강음절이 한국어 화자의 영어 연속 음성의 어휘 분절에 미치는 영향)

  • Kim, Sunmi;Nam, Kichun
    • Phonetics and Speech Sciences / v.5 no.2 / pp.43-51 / 2013
  • English native listeners tend to treat strong syllables in a speech stream as the potential initial syllables of new words, since the majority of lexical words in English have word-initial stress. The current study investigates whether Korean (L1)-English (L2) late bilinguals perceive strong syllables in English continuous speech as word onsets, as English native listeners do. In Experiment 1, word-spotting was slower when the word-initial syllable was strong, indicating that Korean listeners do not perceive strong syllables as word onsets. Experiment 2 was conducted to rule out the possibility that the results of Experiment 1 were due to the strong-initial targets themselves being slower to recognize than the weak-initial targets. We employed the gating paradigm in Experiment 2 and measured the Isolation Point (IP, the point at which participants correctly identify a word without subsequently changing their minds) and the Recognition Point (RP, the point at which participants correctly identify the target with 85% or greater confidence) for the targets excised from the non-words in the two conditions of Experiment 1. Both the mean IPs and the mean RPs were significantly earlier for the strong-initial targets, which means that the results of Experiment 1 reflect the difficulty of segmentation when the initial syllable of a word was strong. These results are consistent with Kim & Nam (2011), indicating that strong syllables are not perceived as word onsets by Korean listeners and interfere with lexical segmentation of English running speech.

A Study on Variation and Determination of Gaussian function Using SNR Criteria Function for Robust Speech Recognition (잡음에 강한 음성 인식에서 SNR 기준 함수를 사용한 가우시안 함수 변형 및 결정에 관한 연구)

  • 전선도;강철호
    • The Journal of the Acoustical Society of Korea / v.18 no.7 / pp.112-117 / 1999
  • Spectral subtraction for noise-robust speech recognition often causes loss of the speech signal. In this study, we propose a method in which the Gaussian functions of a semi-continuous HMM (Hidden Markov Model) are varied and determined on the basis of an SNR criterion function, where SNR means the per-frame signal-to-noise ratio between the estimated noise and the subtracted signal. To demonstrate the effectiveness of this method, we show through signal waveforms that the estimation error is related to the magnitude of the estimated noise; for this reason, the Gaussian functions are varied and determined by the SNR. In recognition tests by computer simulation under the noise environment of a car driving at over 80 km/h, the proposed SNR-based Gaussian decision method achieves a higher recognition rate than both the spectrally subtracted and non-subtracted baselines.
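
The per-frame quantity the decision is based on can be sketched as follows: estimate the noise spectrum, subtract it, and compute the frame-level SNR between the subtracted signal and the estimated noise. The window length, spectral floor, and leading-frame noise estimate are illustrative choices, and the subsequent Gaussian variation step is not shown.

```python
import numpy as np

def frame_signal(x, frame_len=256, hop=128):
    """Split a waveform into Hamming-windowed frames."""
    frames = [x[i:i + frame_len] for i in range(0, len(x) - frame_len + 1, hop)]
    return np.stack(frames) * np.hamming(frame_len)

def spectral_subtract(x, noise_frames=10, floor=1e-3):
    frames = frame_signal(x)
    spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2     # per-frame power spectra
    noise = spec[:noise_frames].mean(axis=0)             # noise estimate from leading frames
    clean = np.maximum(spec - noise, floor * spec)       # subtraction with a spectral floor
    # per-frame SNR (dB) between the subtracted signal and the estimated noise
    snr_db = 10.0 * np.log10(clean.sum(axis=1) / noise.sum())
    return clean, snr_db

x = np.random.randn(16000) * 0.01                        # stand-in noisy signal
clean, snr_db = spectral_subtract(x)
print(snr_db[:5])
```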

A Study on Word Juncture Modeling for Continuous Speech Recognition of Korean Language (한국어 연속음성 인식을 위한 단어 결합 모델링에 관한 연구)

  • Choi, In-Jeong;Un, Chong-Kwan
    • The Journal of the Acoustical Society of Korea / v.13 no.5 / pp.24-31 / 1994
  • In this paper, we study continuous speech recognition of Korean using acoustic models of word-juncture coarticulation. To alleviate the performance degradation caused by coarticulation, we use context-dependent units that model inter-word transitions in addition to intra-word transitions. In all cases the initial phone of each word has to be specified for each possible final phone of the previous word, and similarly for the final phone of each word. To improve the robustness of the HMM parameters, the covariance matrices are smoothed. We also use position-dependent units to improve the discriminative power between units. Simulation results show that when the improved models of word-juncture coarticulation are used, the recognition performance is considerably improved compared to the baseline system using only intra-word units.
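
The unit inventory implied by this scheme can be illustrated with a short sketch: for every word pair allowed by the task, the word-initial phone of the second word is conditioned on the final phone of the first, and vice versa (HTK-style "l-p" / "p+r" notation). The lexicon below is a made-up romanized example, not the paper's.

```python
from itertools import product

# Made-up romanized lexicon used only to enumerate inter-word units.
LEXICON = {
    "hana": ["h", "a", "n", "a"],
    "dul":  ["d", "u", "l"],
    "set":  ["s", "e", "t"],
}

def interword_units(lexicon):
    """Left-context word-initial units and right-context word-final units."""
    finals = {w: p[-1] for w, p in lexicon.items()}
    initials = {w: p[0] for w, p in lexicon.items()}
    units = set()
    for w1, w2 in product(lexicon, repeat=2):
        units.add(f"{finals[w1]}-{initials[w2]}")   # initial phone of w2, left context = final of w1
        units.add(f"{finals[w1]}+{initials[w2]}")   # final phone of w1, right context = initial of w2
    return sorted(units)

print(len(interword_units(LEXICON)), interword_units(LEXICON)[:6])
```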
