• Title/Summary/Keyword: speech rates


A Study on Isolated Word Recognition Using One-Stage DMS/DP for the Implementation of a Voice Dialing System

  • Seong-Kwon Lee
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • 1994.06a
    • /
    • pp.1039-1045
    • /
    • 1994
  • Speech recognition systems based on VQ usually suffer from reduced recognition rates because MSVQ can assign dissimilar vectors to the same segment. In this paper, we apply the One-stage DMS/DP algorithm to recognition experiments and examine to what degree it mitigates this problem. Recognition experiments were performed on Korean DDD area names using a DMS model with 20 sections and word-unit templates. The experiments were carried out in both speaker-dependent and speaker-independent modes, yielding recognition rates of 97.7% and 81.7%, respectively.

  • PDF
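
The template-matching idea behind DP-based isolated-word recognition can be illustrated with a plain dynamic time warping (DTW) alignment. This is a generic sketch, not the paper's One-stage DMS/DP algorithm: the scalar features and the `dtw_distance`/`recognize` helpers are hypothetical simplifications of real spectral feature sequences.

```python
def dtw_distance(seq_a, seq_b, dist=lambda x, y: abs(x - y)):
    """Classic dynamic-programming alignment cost between two feature sequences."""
    n, m = len(seq_a), len(seq_b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(seq_a[i - 1], seq_b[j - 1])
            # extend the cheapest of the three admissible warping steps
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def recognize(unknown, templates):
    """Pick the template word whose alignment cost to the input is minimal."""
    return min(templates, key=lambda word: dtw_distance(unknown, templates[word]))
```

In a word-unit template recognizer, each vocabulary entry (here, a DDD area name) would contribute one template, and the unknown utterance is labeled with the closest one.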

Perception of English Consonants in Different Prosodic Positions by Korean Learners of English

  • Jang, Mi
    • Phonetics and Speech Sciences
    • /
    • v.6 no.1
    • /
    • pp.11-19
    • /
    • 2014
  • The focus of this study was to investigate whether there is a position effect on the identification accuracy of L2 consonants by Korean listeners, and to examine how Korean listeners perceive the phonetic properties of initial and final consonants produced by a Korean learner of English and a native English speaker. Most studies of L2 learners' perception of L2 sounds have focused on the segmental level; very few have examined the role of prosodic position. In the present study, an identification test was conducted for the English consonants /p, t, k, f, θ, s, ʃ/ in CVC prosodic structures. The results revealed that Korean listeners identified syllable-initial consonants more accurately than syllable-final consonants. The higher perceptual accuracy for syllable-initial consonants may be attributable to their enhanced phonetic properties. A significant correlation was found between error rates and F2 onset/offset for stops and fricatives, and between perceptual accuracy and RMS burst energy for stops. However, the identification error patterns differed across consonant types and between the two speakers. In final position, Korean listeners had difficulty identifying /p/, /f/, /θ/, and /s/ produced by the Korean speaker, and showed more errors for /p/, /t/, /f/, /θ/, and /s/ spoken by the native English speaker. Compared to the perception of English consonants spoken by the Korean speaker, greater error rates and more diverse error patterns were found for consonants produced by the native English speaker. The present study provides evidence that prosodic position plays a crucial role in the perception of L2 segments.

Isolated Digit and Command Recognition in Car Environment (자동차 환경에서의 단독 숫자음 및 명령어 인식)

  • 양태영;신원호;김지성;안동순;이충용;윤대희;차일환
    • The Journal of the Acoustical Society of Korea
    • /
    • v.18 no.2
    • /
    • pp.11-17
    • /
    • 1999
  • This paper proposes an observation probability smoothing technique for improving the robustness of a discrete hidden Markov model (DHMM) based speech recognizer, and suggests appropriate noise-robust processing for the car environment based on experimental results. Noisy speech is often mislabeled during the vector quantization process. To reduce the effect of such mislabelings, the proposed technique increases the observation probability of similar codewords. For noise-robust processing in the car environment, liftering of the feature-vector distance measure, high-pass filtering, and spectral subtraction are examined. Recognition experiments were performed on 14 isolated words consisting of Korean digits and command words, with the database recorded in a stationary car and in a moving car. The recognition rates of the baseline recognizer were 97.4% in the stationary condition and 59.1% in the moving condition. Using the proposed observation probability smoothing technique together with liftering, high-pass filtering, and spectral subtraction, the recognition rates improved to 98.3% in the stationary condition and 88.6% in the moving condition.

  • PDF
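
The smoothing idea can be read as redistributing a small amount of each codeword's observation probability toward acoustically similar codewords, so that a frame mislabeled as a neighboring codeword is not assigned near-zero probability. The sketch below is one plausible formulation under that reading; the `alpha` mixing weight and the similarity matrix are assumptions for illustration, not taken from the paper.

```python
def smooth_observation_probs(b, similarity, alpha=0.1):
    """Mix each codeword's observation probability with that of similar
    codewords, softening the effect of VQ mislabelings in noisy speech.

    b          -- observation probabilities of one HMM state, summing to 1
    similarity -- similarity[i][j] in [0, 1] between codewords i and j
    alpha      -- fraction of mass taken from the neighbor-weighted average
    """
    k = len(b)
    mixed = []
    for i in range(k):
        neighbor_mass = sum(similarity[i][j] * b[j] for j in range(k))
        mixed.append((1.0 - alpha) * b[i] + alpha * neighbor_mass)
    total = sum(mixed)          # renormalize so the row is a distribution again
    return [p / total for p in mixed]
```

The key effect is that a codeword with zero trained probability, but high similarity to an observed codeword, receives nonzero smoothed probability.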

Syllable Recognition of HMM using Segment Dimension Compression (세그먼트 차원압축을 이용한 HMM의 음절인식)

  • Kim, Joo-Sung;Lee, Yang-Woo;Hur, Kang-In;Ahn, Jum-Young
    • The Journal of the Acoustical Society of Korea
    • /
    • v.15 no.2
    • /
    • pp.40-48
    • /
    • 1996
  • In this paper, 40-dimensional segment vectors with 4-frame and 7-frame widths over each monosyllable interval were compressed into 10-, 14-, and 20-dimensional vectors using the K-L expansion and neural networks, and these were used as speech recognition feature parameters for CHMM. We also compared them with CHMMs augmented with discrete duration time, regression coefficients, and mixture distributions as feature parameters. In recognition tests on 100 monosyllables, the recognition rates of CHMM+ΔMCEP, CHMM+MIX, and CHMM+DD improved by 1.4%, 2.36%, and 2.78%, respectively, over the 85.19% of the baseline CHMM. Rates using vectors compressed by the K-L expansion were lower than those of MCEP+ΔMCEP, but those using K-L+MCEP and K-L+ΔMCEP were almost the same. Neural networks reflect the dynamic variation of speech better than the K-L expansion because they use a sigmoid function for nonlinear transformation. Recognition rates using vectors compressed by neural networks were higher than those using the K-L expansion and the other methods.

  • PDF
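
The K-L expansion used here for dimension compression is, in essence, a principal-component projection: segment vectors are projected onto the top eigenvectors of their covariance matrix. A minimal pure-Python sketch using power iteration with deflation follows (the paper's 40-to-10 compression would correspond to `k=10`; the helper names are ours):

```python
def covariance(X):
    """Sample covariance matrix and mean of a list of row vectors."""
    n, d = len(X), len(X[0])
    mean = [sum(row[j] for row in X) / n for j in range(d)]
    C = [[0.0] * d for _ in range(d)]
    for row in X:
        c = [row[j] - mean[j] for j in range(d)]
        for i in range(d):
            for j in range(d):
                C[i][j] += c[i] * c[j] / n
    return C, mean

def top_eigvec(C, iters=300):
    """Dominant eigenvector of a symmetric matrix via power iteration."""
    d = len(C)
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(C[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

def kl_compress(X, k):
    """Project each vector in X onto its top-k covariance eigenvectors."""
    C, mean = covariance(X)
    d = len(C)
    C = [row[:] for row in C]
    basis = []
    for _ in range(k):
        v = top_eigvec(C)
        lam = sum(v[i] * sum(C[i][j] * v[j] for j in range(d)) for i in range(d))
        basis.append(v)
        for i in range(d):        # deflate: subtract the found component
            for j in range(d):
                C[i][j] -= lam * v[i] * v[j]
    return [[sum((row[j] - mean[j]) * v[j] for j in range(d)) for v in basis]
            for row in X]
```

Because the projection is onto the directions of greatest variance, the compressed vectors preserve as much of the segments' spread as a linear map of that dimension can.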

Nonfluency Characteristics of Children in Multicultural Families (다문화가정 아동의 비유창성 특성)

  • Shin, Myung-Sun
    • The Journal of the Korea Contents Association
    • /
    • v.11 no.3
    • /
    • pp.254-261
    • /
    • 2011
  • The purpose of the present study was to investigate the characteristics of disfluency in 3- to 5-year-old children from multicultural families (MFC). Twenty-four children (12 MFC and 12 Korean monolingual children, KMC, matched for chronological age and language age) participated in this study. The experimental tasks consisted of story retelling tasks (SRT) and picture description tasks (PDT). In all tasks, the total disfluency scores of the MFC were significantly higher than those of the KMC. In all tasks, the frequency of abnormal disfluency of the MFC was significantly higher than that of the KMC, and the speech rates of the MFC were significantly lower than those of the KMC. The disfluency observed in the MFC indicates that language ability influences their disfluency, and that fluency support for MFC is an important part of general language support.

A Study on the Improvements of the Speech Quality by using Distribution Characteristics of LSP parameters in the EVRC(Enhanced Variable Rate Codec) (LSP 파라미터의 분포특성을 이용한 EVRC의 음질개선에 관한 연구)

  • Min, So-Yeon;Na, Deok-Su
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.12 no.12
    • /
    • pp.5843-5848
    • /
    • 2011
  • To improve channel spectrum efficiency and reduce system power consumption, the EVRC compresses and transmits the voice signal only when the user is speaking. In addition, voice frames are divided into three rates, 1, 1/2, and 1/8, and each frame is handled differently; for example, the input is assumed to be a silence region when the 1/8 rate is used. In this paper, the signal is first separated into voiced speech, unvoiced speech, and silence regions using the distribution characteristics of LSP parameters. The paper then proposes encoding voiced speech at rate 1, unvoiced speech at rate 1/2, and silence at rate 1/8. In other words, the traditional transmission method is used when sending at full rate in the EVRC; when sending at half rate, the frame is first classified as voiced or unvoiced, and if it is voiced it is converted to full rate before transmission, while if it is silence the EVRC's basic rate is applied. Experimental results with SNR, ASDM, and transmission bit-rate measurements demonstrate that voice quality is improved by the proposed algorithm.
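
The proposed rate assignment (full rate for voiced frames, half rate for unvoiced, 1/8 rate for silence) reduces to a simple mapping once frames are classified. The sketch below leaves the classifier abstract; the paper derives it from LSP distribution characteristics, which are not reproduced here.

```python
# Encoding rates as fractions of the full EVRC frame rate.
RATES = {"voiced": 1.0, "unvoiced": 0.5, "silence": 0.125}

def choose_rate(frame_class):
    """Map a voiced/unvoiced/silence decision to an EVRC encoding rate."""
    return RATES[frame_class]

def average_rate(frame_classes):
    """Mean transmitted rate over a sequence of classified frames,
    a rough proxy for the transmission bit rate measured in the paper."""
    return sum(choose_rate(c) for c in frame_classes) / len(frame_classes)
```

A conversation dominated by pauses and unvoiced segments therefore transmits at a small fraction of the full rate, which is the bandwidth saving the scheme targets.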

Speech Recognition for the Korean Vowel 'ㅣ' based on Waveform-feature Extraction and Neural-network Learning (파형 특징 추출과 신경망 학습 기반 모음 'ㅣ' 음성 인식)

  • Rho, Wonbin;Lee, Jongwoo;Lee, Jaewon
    • KIISE Transactions on Computing Practices
    • /
    • v.22 no.2
    • /
    • pp.69-76
    • /
    • 2016
  • With the recent increase of interest in IoT across almost all areas of industry, computing technologies have been increasingly applied in human environments such as houses, buildings, cars, and streets; in these IoT environments, speech recognition is widely accepted as a means of HCI. Existing server-based speech recognition techniques are typically fast and show quite high recognition rates; however, an internet connection is necessary, and complicated server computing is required because the voice is recognized in units of words stored in server databases. This paper, continuing previous research on speech recognition algorithms for the Korean phonemic vowels 'ㅏ' and 'ㅓ', presents an implementation of speech recognition algorithms for the Korean phonemic vowel 'ㅣ'. We observed that almost all vocal waveform patterns for 'ㅣ' are unique and differ from the 'ㅏ' and 'ㅓ' waveform patterns. In this paper we propose specific waveform patterns for the Korean vowel 'ㅣ' and the corresponding recognition algorithms. We also present experimental results showing that, by adding neural-network learning to our algorithm, the recognition success rate for the vowel 'ㅣ' can be increased. As a result, we observed that 90% or more of vocal expressions of the vowel 'ㅣ' can be successfully recognized when our algorithms are used.
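
The learning stage can be illustrated, in highly simplified form, by a single logistic unit trained on scalar waveform features. This toy stands in for the paper's neural-network learning; the one-dimensional feature and the training hyperparameters are invented for illustration.

```python
import math

def train_logistic(samples, labels, lr=0.5, epochs=500):
    """Train one logistic neuron by stochastic gradient descent.

    samples -- list of feature vectors (e.g. hand-crafted waveform features)
    labels  -- 0/1 class labels (e.g. 'not ㅣ' vs. 'ㅣ')
    """
    d = len(samples[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))     # sigmoid activation
            g = p - y                          # gradient of log-loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    """Classify a feature vector with the trained neuron."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if z > 0 else 0
```

A real vowel recognizer would feed multi-dimensional waveform-pattern features into a larger network, but the decision principle is the same.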

Frequency of grammar items for Korean substitution of /u/ for /o/ in the word-final position (어말 위치 /ㅗ/의 /ㅜ/ 대체 현상에 대한 문법 항목별 출현빈도 연구)

  • Yoon, Eunkyung
    • Phonetics and Speech Sciences
    • /
    • v.12 no.1
    • /
    • pp.33-42
    • /
    • 2020
  • This study identified the substitution of /u/ for /o/ (e.g., pyəllo [pyəllu]) in Korean as a function of grammar item, based on a speech corpus. Korean /o/ and /u/ share the vowel feature [+rounded] but are distinguished by tongue height. However, researchers have reported that a merger of Korean /o/ and /u/ is in progress, making them indistinguishable. In this study, the frequency with which the underlying form /o/ is phonetically realized as [u] was calculated for each grammar item in The Korean Corpus of Spontaneous Speech (Seoul Corpus 2015), a large corpus of 40 speakers from Seoul or Gyeonggi-do. Linking endings, particles, and adverbs ending in /o/ in word-final position were substituted with [u] in approximately 50% of the tokens, whereas nominal items were substituted at a frequency of less than 5%. Among high-frequency items, the highest substitution rates were for the special particle "-do[du]" (59.6%) and the linking ending "-go[gu]" (43.5%). Observing Korean pronunciation in real life provides deep insight into its theoretical implications for speech recognition.
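
Computing a per-item substitution frequency from a labeled corpus reduces to grouped counting. The sketch below assumes hypothetical `(grammar_item, underlying, surface)` token triples rather than the actual Seoul Corpus annotation format.

```python
from collections import defaultdict

def substitution_rates(tokens):
    """Rate at which underlying /o/ surfaces as [u], per grammar item.

    tokens -- iterable of (grammar_item, underlying_vowel, surface_vowel)
    """
    totals = defaultdict(int)
    substituted = defaultdict(int)
    for item, underlying, surface in tokens:
        if underlying == "o":              # only word-final /o/ tokens count
            totals[item] += 1
            if surface == "u":
                substituted[item] += 1
    return {item: substituted[item] / totals[item] for item in totals}
```

Run over the full corpus, such a tally yields figures like the 59.6% for "-do[du]" and 43.5% for "-go[gu]" reported above.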

Analysis and Prediction of Prosodic Phrase Boundary (운율구 경계현상 분석 및 텍스트에서의 운율구 추출)

  • Kim, Sang-Hun;Seong, Cheol-Jae;Lee, Jung-Chul
    • The Journal of the Acoustical Society of Korea
    • /
    • v.16 no.1
    • /
    • pp.24-32
    • /
    • 1997
  • This study aims, on the one hand, to describe the relationship between syntactic structure and prosodic phrasing and, on the other, to establish a suitable phrasing pattern for producing more natural synthetic speech. All word boundaries in the prosodic database were statistically analyzed and assigned a proper boundary type. The resulting 10 types of prosodic boundaries were classified into 3 types according to the strength of the break: zero, minor, and major. We found that durational information was a main cue for determining the major prosodic boundary. Using bigram and trigram syntactic information, we predicted the major and minor classification of boundary types. With the bigram model, we obtained correct major-break prediction rates of 4.60% and 38.2% and insertion error rates of 22.8% and 8.4% on the Test-I and Test-II text databases, respectively. With the trigram model, we obtained correct major-break prediction rates of 58.3% and 42.8% and insertion error rates of 30.8% and 11.8% on Test-I and Test-II, respectively.

  • PDF
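
Predicting a major break from local syntactic context amounts to thresholding an estimated break probability conditioned on the adjacent tags. The bigram table, tag names, and threshold below are illustrative placeholders, not the paper's trained model.

```python
def predict_major_breaks(tags, break_prob, threshold=0.5):
    """Return indices i where a major prosodic break is predicted
    between word i and word i+1.

    tags       -- per-word syntactic tags of a sentence
    break_prob -- maps a (left_tag, right_tag) bigram to the estimated
                  probability of a major break between the two words
    """
    return [i for i in range(len(tags) - 1)
            if break_prob.get((tags[i], tags[i + 1]), 0.0) >= threshold]
```

A trigram version would condition on one more tag of context, trading sparser training counts for the higher prediction rates reported above.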

Emotion recognition in speech using hidden Markov model (은닉 마르코프 모델을 이용한 음성에서의 감정인식)

  • 김성일;정현열
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.3 no.3
    • /
    • pp.21-26
    • /
    • 2002
  • This paper presents a new approach to identifying human emotional states such as anger, happiness, neutral, sadness, and surprise, accomplished using discrete-duration continuous hidden Markov models (DDCHMM). For this, emotional feature parameters are first defined from the input speech signals. In this study, we used prosodic parameters, namely pitch, energy, and their derivatives, which were then used to train the HMMs for recognition. Speaker-adapted emotion models based on maximum a posteriori (MAP) estimation were also considered for speaker adaptation. The simulation results showed that vocal emotion recognition rates gradually increased as the number of adaptation samples increased.

  • PDF
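
Classification with per-emotion HMMs comes down to scoring the observation sequence under each model and picking the most likely. Below is a minimal scaled forward algorithm for discrete observations; the one-state toy models in the test are invented, and the paper's continuous DDCHMMs would replace the discrete emission table with Gaussian densities plus explicit duration modeling.

```python
import math

def forward_loglik(pi, A, B, obs):
    """Log-likelihood of a discrete observation sequence under an HMM.

    pi  -- initial state probabilities
    A   -- state transition matrix, A[i][j] = P(j | i)
    B   -- emission table, B[i][o] = P(symbol o | state i)
    obs -- sequence of observation symbol indices
    """
    n = len(pi)
    loglik = 0.0
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    for t in range(len(obs)):
        if t > 0:
            alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][obs[t]]
                     for j in range(n)]
        scale = sum(alpha)            # rescale to avoid underflow
        loglik += math.log(scale)
        alpha = [a / scale for a in alpha]
    return loglik

def classify_emotion(models, obs):
    """Pick the emotion whose HMM assigns the highest log-likelihood.

    models -- dict mapping emotion name to an (pi, A, B) triple
    """
    return max(models, key=lambda e: forward_loglik(*models[e], obs))
```

In the recognizer described above, one such model is trained per emotion from the prosodic feature streams, and MAP adaptation would update each model's parameters toward a new speaker.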