• Title/Summary/Keyword: Speech level


A Study of Speech Control Tags Based on Semantic Information of a Text (텍스트의 의미 정보에 기반을 둔 음성컨트롤 태그에 관한 연구)

  • Chang, Moon-Soo;Chung, Kyeong-Chae;Kang, Sun-Mee
    • Speech Sciences
    • /
    • v.13 no.4
    • /
    • pp.187-200
    • /
    • 2006
  • Speech synthesis technology is widely used, and its applications are broadening to automatic response services, learning systems for people with disabilities, and other areas. However, the sound quality of current synthesizers has not yet reached a level that satisfies users. Existing synthesizers generate rhythm only from interval information such as spaces and commas, or from a few punctuation marks such as question marks and exclamation marks, so it is hard to reproduce natural human rhythm even with a large speech database. One remedy is to select rhythm after higher-level language processing. This paper proposes a method for generating tags that control rhythm by analyzing sentence meaning together with speech-situation information. We use Systemic Functional Grammar (SFG) [4], which analyzes the meaning of a sentence with speech-situation information, considering the preceding sentence, the conversational situation, and the relationships among the participants. From the results of the SFG meaning analysis and speech waveform analysis, we generate Semantic Speech Control Tags (SSCT); a schematic sketch of such a tag follows this entry.

  • PDF
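
Purely as an illustrative aid to the abstract above, here is one way a semantic speech control tag could be represented in code. Everything here is an assumption for illustration (field names, values, the Python representation); the paper's actual SSCT format is not reproduced.

```python
# Hypothetical sketch of a semantic speech control tag (SSCT): prosody
# directives attached to a clause after semantic (SFG-style) analysis.
# All field names and values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SpeechControlTag:
    text: str           # clause the tag applies to
    speech_act: str     # e.g., "question" or "statement" from the meaning analysis
    pitch_shift: float  # relative pitch movement over the clause
    rate: float         # relative speaking rate
    pause_ms: int       # pause to insert after the clause

tag = SpeechControlTag(text="Is it raining?", speech_act="question",
                       pitch_shift=0.2, rate=1.0, pause_ms=300)
print(tag)
```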

A New Stylization Method using Least-Square Error Minimization on Segmental Pitch Contour (최소 자승오차 방식을 이용한 세그먼트 피치패턴의 정형화)

  • 이정철
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • 1994.06c
    • /
    • pp.107-110
    • /
    • 1994
  • In this paper, we describe the features of the fundamental frequency (F0) contour of Korean read speech and propose a new stylization method to characterize the F0 pattern of segments. Our algorithm consists of three stylization processes: the segment level, the syllable level, and the word level. For stylization of the F0 contour at the segment level, we apply least-squares error minimization to determine the F0 values at the initial, medial, and final positions of a segment (see the sketch after this entry). At the syllable level, we determine the stylized F0 pattern of a syllable using the mean F0 value of each word; from the style information for each word, syllable, and segment, we reconstruct the F0 contour of sentences. Simulation results show that the error is less than 10% of the actual F0 contour for each sentence. In a perception test, there was little difference between speech synthesized with the original F0 contour and speech synthesized with the stylized F0 contour.

  • PDF
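
The abstract above determines F0 values at the initial, medial, and final positions of a segment by least-squares error minimization. The sketch below is a minimal Python/NumPy rendering of that idea under an assumed piecewise-linear model with knots at the three positions; it is not the paper's implementation.

```python
# Minimal sketch: least-squares stylization of one segment's F0 contour.
# The segment is reduced to three F0 values (initial, medial, final) by
# fitting a piecewise-linear contour to the observed samples.
import numpy as np

def stylize_segment(f0: np.ndarray) -> tuple[float, float, float]:
    """Fit initial/medial/final F0 values by least-squares error minimization."""
    t = np.linspace(0.0, 1.0, len(f0))        # normalized time in the segment
    # Piecewise-linear "hat" basis functions with knots at t = 0, 0.5, 1.
    b_init = np.clip(1.0 - 2.0 * t, 0.0, 1.0)
    b_med = 1.0 - np.abs(2.0 * t - 1.0)
    b_fin = np.clip(2.0 * t - 1.0, 0.0, 1.0)
    A = np.stack([b_init, b_med, b_fin], axis=1)
    (f_init, f_med, f_fin), *_ = np.linalg.lstsq(A, f0, rcond=None)
    return f_init, f_med, f_fin

# Example: a noisy rise-fall contour collapses to three anchor values.
f0 = 120 + 30 * np.sin(np.linspace(0, np.pi, 50)) + np.random.randn(50)
print(stylize_segment(f0))
```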

GENERATION OF MULTI-SYLLABLE NONSENSE WORDS FOR THE ASSESSMENT OF KOREAN TEXT-TO-SPEECH SYSTEM (한국어 문장음성합성 시스템의 평가를 위한 다음절 무의미단어의 생성 및 평가에 관한 연구)

  • 조철우
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • 1994.06c
    • /
    • pp.338-341
    • /
    • 1994
  • In this paper, we propose a method to generate a multi-syllable nonsense wordset for synthetic speech assessment and apply the wordset to evaluate a commercial text-to-speech system. Experimental results are presented, and they verify that the generated nonsense wordset can be used to assess the intelligibility of the synthesizer at the phoneme level or at the level of phonemic environment. The results confirm that such a multi-syllable nonsense wordset is useful for the assessment of synthesized speech. (A generation sketch follows this entry.)

  • PDF
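
As a rough illustration of the kind of procedure the abstract describes, the sketch below samples multi-syllable CV nonsense words from an assumed inventory. The romanized onset and vowel sets are placeholders, not the paper's actual design.

```python
# Hypothetical sketch: generating multi-syllable nonsense words for
# intelligibility testing. The phoneme sets are illustrative assumptions.
import itertools
import random

ONSETS = ["k", "n", "t", "m", "p", "s", "c", "h"]  # assumed onsets
VOWELS = ["a", "e", "i", "o", "u"]                 # assumed vowels

def nonsense_words(n_words: int, n_syllables: int, seed: int = 0) -> list[str]:
    """Sample distinct nonsense words built from CV syllables."""
    rng = random.Random(seed)
    syllables = ["".join(p) for p in itertools.product(ONSETS, VOWELS)]
    words: set[str] = set()
    while len(words) < n_words:
        words.add("".join(rng.choice(syllables) for _ in range(n_syllables)))
    return sorted(words)

print(nonsense_words(n_words=10, n_syllables=3))
```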

Development of Ambulatory Speech Audiometric System (휴대용 어음청력검사 시스템 구현)

  • Shin, Seung-Won;Kim, Kyeong-Seop;Lee, Sang-Min;Im, Won-Jin;Lee, Jeong-Whan;Kim, Dong-Jun
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.58 no.3
    • /
    • pp.645-654
    • /
    • 2009
  • In this study, we present an efficient ambulatory speech audiometric system for detecting hearing problems as early as possible without a visit to an audiometric testing facility such as a hospital or clinic. To estimate a person's speech-sound hearing threshold in his or her own environment, a personal digital assistant (PDA) device generates the speech sounds and implements an audiometric graphical user interface (GUI). Furthermore, a supra-aural earphone is used to measure the subject's speech hearing threshold, with the transducer's gain compensated by a speech sound calibration system. (A sketch of one possible threshold search follows this entry.)
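
The abstract above estimates a speech hearing threshold on a portable device. Below is a hedged sketch of one common adaptive procedure (a simple up-down staircase) such a system might run; present_word is an assumed interface standing in for playing a calibrated token and scoring the response, and the paper's actual protocol may differ.

```python
# Hypothetical sketch: adaptive staircase estimation of a speech hearing
# threshold. present_word(level_db) plays a calibrated speech token at the
# given level and returns True if the subject repeats it correctly.
import random

def estimate_threshold(present_word, start_db: float = 40.0,
                       step_db: float = 5.0, reversals_needed: int = 6) -> float:
    """Level falls after a correct response and rises after a miss; the
    threshold is estimated as the mean level at direction reversals."""
    level, direction = start_db, -1
    reversals: list[float] = []
    while len(reversals) < reversals_needed:
        new_direction = -1 if present_word(level) else +1
        if new_direction != direction:      # direction flipped: a reversal
            reversals.append(level)
            direction = new_direction
        level += new_direction * step_db
    return sum(reversals) / len(reversals)

# Toy listener whose true threshold is about 22 dB, for a quick sanity check.
rng = random.Random(0)
listener = lambda level: level >= 22 or rng.random() < 0.1
print(estimate_threshold(listener))
```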

Convergent Analysis on the Speech Sound of Typically Developing Children Aged 3 to 5 : Focused on Word Level and Connected Speech Level (3-5세 일반아동의 말소리에 대한 융합적 분석: 단어와 자발화를 중심으로)

  • Kim, Yun-Joo;Park, Hyun-Ju
    • Journal of the Korea Convergence Society
    • /
    • v.9 no.6
    • /
    • pp.125-132
    • /
    • 2018
  • This study investigated the speech sound production characteristics and evaluation aspects of preschool children through a word test and a connected speech test. The authors administered the Assessment of Phonology and Articulation for Children (APAC) to 72 typically developing children (24 each of three-, four-, and five-year-olds) and analyzed differences in the percentage of correct consonants (PCC, computed as in the sketch after this entry) and intelligibility by age and sex, the correlation between PCC and intelligibility, and speech sound error patterns. PCC and intelligibility increased with age, but there was no difference by sex. The correlation was statistically significant in the 5-year-old group. Speech sound error patterns differed between the two tests. This study showed that children's speech sound production varied with the language unit; therefore, both types of test should be performed to properly assess speech sound production ability. This suggests that the current standard of identifying impairment by word-level PCC alone requires review and further study.
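
PCC, the study's core measure, is the share of target consonants produced correctly, times 100. A minimal sketch, assuming the (target, produced) segment pairs come from a transcription and alignment step not shown:

```python
# Minimal sketch of the percentage-of-correct-consonants (PCC) measure.
# The aligned (target, produced) pairs and the consonant set are assumed
# illustrative inputs, not the study's materials.
def pcc(aligned: list[tuple[str, str]], consonants: set[str]) -> float:
    """100 * correctly produced target consonants / target consonants."""
    targets = [(t, p) for t, p in aligned if t in consonants]
    correct = sum(1 for t, p in targets if t == p)
    return 100.0 * correct / len(targets)

CONSONANTS = {"k", "n", "t", "m", "p", "s"}          # illustrative subset
pairs = [("k", "k"), ("a", "a"), ("t", "c"), ("m", "m")]
print(pcc(pairs, CONSONANTS))                        # 2 of 3 correct -> 66.67
```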

How Different are Learner Speech and Loanword Phonology?

  • Kim, Jong-Mi
    • Phonetics and Speech Sciences
    • /
    • v.1 no.3
    • /
    • pp.3-18
    • /
    • 2009
  • Do loanword properties emerge in the acquisition of a foreign language, and if so, how? Classic studies of adult language learning assumed loanword properties ranging from near-ceiling to near-chance levels of appearance depending on speech proficiency. The present research argues that such variation reflects different phonological types rather than speech proficiency. To investigate the difference between learner speech and loanword phonology, this research analyzes speech data from 92 Korean speakers at five proficiency levels who read 19 pairs of English words and sentences containing loanwords. The method is primarily acoustic: each phonological effect found in the loanwords (e.g., the vowel inserted at the end of the word stamp) is tested for its appearance in learner speech, in comparison with native speech from 11 English speakers and 11 Korean speakers. The data cover segment deletion, insertion, substitution, and alternation in both learner and native speech (error categories of the kind tallied in the sketch after this entry). The results indicate that learner speech does not exhibit the loanword properties in many cases but depends on the type of phonological cause. Target pronunciation is acquired relatively easily in cases of segment deletion, insertion, substitution, and alternation, except when the loanword property involves successful command of the target phonology, such as the de-aspiration of [p] in apple. Such cases of difficult learning contrast sharply with cases of easy learning in the development of learner speech, particularly beyond the intermediate proficiency level. Overall, learner speech departs from loanword phonology and develops toward native speech values, depending on the phonological contrasts between the native and foreign languages.

  • PDF
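
The abstract above tallies segment deletion, insertion, substitution, and alternation. As an illustration only, the sketch below classifies differences between a target phone sequence and a produced one using a standard sequence alignment; the simplified romanized transcriptions are assumptions, not the paper's data.

```python
# Hypothetical sketch: counting deletion/insertion/substitution errors by
# aligning a produced phone sequence against a target sequence.
from collections import Counter
from difflib import SequenceMatcher

def error_counts(target: list[str], produced: list[str]) -> Counter:
    """Classify aligned differences as deletions, insertions, substitutions."""
    counts: Counter = Counter()
    for op, i1, i2, j1, j2 in SequenceMatcher(a=target, b=produced).get_opcodes():
        if op == "delete":
            counts["deletion"] += i2 - i1
        elif op == "insert":
            counts["insertion"] += j2 - j1
        elif op == "replace":
            counts["substitution"] += max(i2 - i1, j2 - j1)
    return counts

# "stamp" versus a Korean-like adaptation with a vowel appended at the end.
print(error_counts(list("stamp"), list("stampu")))   # -> 1 insertion
```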

Speech synthesis engine for TTS (TTS 적용을 위한 음성합성엔진)

  • 이희만;김지영
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.23 no.6
    • /
    • pp.1443-1453
    • /
    • 1998
  • This paper presents a speech synthesis engine that converts character strings held in computer memory into synthesized speech, enhancing intelligibility and naturalness through waveform processing. The engine, which uses demisyllable speech segments, receives command streams for pitch modification, duration, and energy control. This command-based design separates the high-level processing of text normalization, letter-to-sound conversion, and lexical analysis from the low-level processing of signal filtering and pitch manipulation. The TTS (text-to-speech) system built on the engine has three independent object modules, the Text-Normalizer, the Commander, and the Speech Synthesis Engine, each of which can easily be replaced by another compatible module (see the architectural sketch after this entry). Separating high-level from low-level processing yields expandability and portability thanks to its mix-and-match nature.

  • PDF
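
The abstract above separates a Text-Normalizer, a Commander, and a low-level Speech Synthesis Engine communicating through pitch/duration/energy commands. The skeleton below sketches that separation; all class names, method names, and bodies are placeholder assumptions, not the paper's code.

```python
# Schematic sketch of the three-module TTS architecture described above.
# Each module is independent, so any one can be swapped for a compatible one.
from dataclasses import dataclass

@dataclass
class Command:
    unit: str            # demisyllable label
    pitch_hz: float      # pitch modification target
    duration_ms: float   # duration control
    energy: float        # relative energy

class TextNormalizer:
    def normalize(self, text: str) -> str:
        return text.strip().lower()      # stands in for real text normalization

class Commander:
    def to_commands(self, text: str) -> list[Command]:
        # Letter-to-sound conversion and lexical analysis would happen here.
        return [Command(ch, 120.0, 80.0, 1.0) for ch in text if ch.isalpha()]

class SpeechSynthesisEngine:
    def render(self, commands: list[Command]) -> bytes:
        # Signal filtering and pitch processing of demisyllables go here.
        return b"".join(c.unit.encode() for c in commands)   # placeholder

commands = Commander().to_commands(TextNormalizer().normalize("Hello TTS"))
audio = SpeechSynthesisEngine().render(commands)
```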

Comparison Research of Non-Target Sentence Rejection on Phoneme-Based Recognition Networks (음소기반 인식 네트워크에서의 비인식 대상 문장 거부 기능의 비교 연구)

  • Kim, Hyung-Tai;Ha, Jin-Young
    • MALSORI
    • /
    • no.59
    • /
    • pp.27-51
    • /
    • 2006
  • Speech recognition systems need a rejection function as well as a decoding function to improve reliability. Much research has addressed out-of-vocabulary word rejection, but little attention has been paid to non-target sentence rejection. Pronunciation-training applications of speech recognition have recently increased the need for non-target sentence rejection to provide more accurate and robust results. In this paper, we propose a filler model method and a word/phoneme detection ratio method for implementing a non-target sentence rejection system. We evaluated the filler model method with word-level, phoneme-level, and sentence-level filler models, and ran similar experiments with word-level and phoneme-level detection ratio methods. For evaluation, the minimized average of the false acceptance rate (FAR) and false rejection rate (FRR) is used to compare the effectiveness of each method as the number of words per sentence varies (the threshold search is sketched after this entry). The experimental results show that the word-level methods outperform the others, and the word-level filler model performs slightly better than the word detection ratio method.

  • PDF
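
The evaluation above picks the operating point minimizing the average of FAR and FRR over a word/phoneme detection-ratio threshold. A minimal sketch of that sweep, with made-up detection ratios in place of recognizer output:

```python
# Minimal sketch: sweep a detection-ratio threshold and keep the one that
# minimizes the average of the false acceptance rate (FAR) and false
# rejection rate (FRR). Sentences scoring below the threshold are rejected.
import numpy as np

def best_threshold(target_ratios, nontarget_ratios):
    """Return (threshold, minimum average of FAR and FRR)."""
    candidates = np.unique(np.concatenate([target_ratios, nontarget_ratios]))
    best_th, best_avg = None, 1.0
    for th in candidates:
        frr = np.mean(target_ratios < th)        # targets wrongly rejected
        far = np.mean(nontarget_ratios >= th)    # non-targets wrongly accepted
        if (far + frr) / 2.0 < best_avg:
            best_th, best_avg = th, (far + frr) / 2.0
    return best_th, best_avg

targets = np.array([0.9, 0.8, 0.95, 0.7])        # ratios for target sentences
nontargets = np.array([0.4, 0.55, 0.3, 0.65])    # ratios for non-target sentences
print(best_threshold(targets, nontargets))       # -> (0.7, 0.0) on this toy data
```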

Effects of Phonetic Complexity and Articulatory Severity on Percentage of Correct Consonant and Speech Intelligibility in Adults with Dysarthria (조음복잡성 및 조음중증도에 따른 마비말장애인의 자음정확도와 말명료도)

  • Song, HanNae;Lee, Youngmee;Sim, HyunSub;Sung, JeeEun
    • Phonetics and Speech Sciences
    • /
    • v.5 no.1
    • /
    • pp.39-46
    • /
    • 2013
  • This study examined the effects of phonetic complexity and articulatory severity on the percentage of correct consonants (PCC) and speech intelligibility in adults with dysarthria. Speech samples of thirty-two words from the APAC (Assessment of Phonology and Articulation for Children) were collected from 38 dysarthric speakers at one of two levels of articulatory severity (mild or mild-moderate). PCC and speech intelligibility scores were calculated at each of four levels of phonetic complexity. A two-way mixed ANOVA revealed that (1) the mild group showed significantly higher PCC and speech intelligibility scores than the mild-moderate group, (2) PCC at phonetic complexity level 4 was significantly lower than at the other levels, and (3) an interaction between articulatory severity and phonetic complexity was observed only for PCC. Pearson correlation analysis (of the kind sketched after this entry) showed that the strength of the correlation between PCC and speech intelligibility varied with the level of articulatory severity and phonetic complexity. The clinical implications of these findings are discussed.
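
The correlation step above is a Pearson correlation between PCC and intelligibility computed separately per articulatory-severity group. A minimal sketch with illustrative numbers, not the study's data:

```python
# Minimal sketch: per-group Pearson correlation between PCC and
# intelligibility scores. The values below are illustrative only.
from scipy.stats import pearsonr

groups = {
    "mild":          {"pcc": [92, 88, 95, 90], "intel": [85, 80, 93, 84]},
    "mild-moderate": {"pcc": [70, 65, 74, 60], "intel": [55, 48, 60, 42]},
}
for name, g in groups.items():
    r, p = pearsonr(g["pcc"], g["intel"])
    print(f"{name}: r = {r:.2f}, p = {p:.3f}")
```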

Speech emotion recognition based on genetic algorithm-decision tree fusion of deep and acoustic features

  • Sun, Linhui;Li, Qiu;Fu, Sheng;Li, Pingan
    • ETRI Journal
    • /
    • v.44 no.3
    • /
    • pp.462-475
    • /
    • 2022
  • Although researchers have proposed numerous techniques for speech emotion recognition, performance remains unsatisfactory in many application scenarios. In this study, we propose a speech emotion recognition model based on a genetic algorithm (GA)-decision tree (DT) fusion of deep and acoustic features. To express speech emotional information more comprehensively, frame-level deep and acoustic features are first extracted from the speech signal. Next, five kinds of statistics over these features are computed to obtain utterance-level features. The Fisher feature selection criterion is employed to select high-performing features and remove redundant information. In the feature fusion stage, the GA adaptively searches for the best feature fusion weight (a toy version of this search is sketched below). Finally, using the fused features, the proposed speech emotion recognition model based on a DT support vector machine is realized. Experimental results on the Berlin speech emotion database and the Chinese emotional speech database indicate that the proposed model outperforms an average-weight fusion method.
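
The GA step above adaptively searches for the best feature fusion weight. The toy sketch below shows the general shape of such a search; the stand-in fitness function replaces what would really be training and evaluating the DT-SVM model on fused features.

```python
# Toy sketch of a genetic algorithm searching for a scalar fusion weight
# w in [0, 1] that blends deep and acoustic feature scores. fitness(w)
# stands in for the downstream model's validation accuracy.
import random

def ga_search_weight(fitness, pop_size: int = 20,
                     generations: int = 30, seed: int = 0) -> float:
    rng = random.Random(seed)
    pop = [rng.random() for _ in range(pop_size)]           # initial weights
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                      # selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            w = (a + b) / 2.0                               # blend crossover
            w = min(1.0, max(0.0, w + rng.gauss(0, 0.05)))  # mutation
            children.append(w)
        pop = parents + children
    return max(pop, key=fitness)

# Toy fitness peaking at w = 0.65 in place of real model accuracy.
print(ga_search_weight(lambda w: -(w - 0.65) ** 2))
```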