• Title/Summary/Keyword: Synthetic vowels

Search results: 11

Perception Ability of Synthetic Vowels in Cochlear Implanted Children (모음의 포먼트 변형에 따른 인공와우 이식 아동의 청각적 인지변화)

  • Huh, Myung-Jin
    • MALSORI / no.64 / pp.1-14 / 2007
  • The purpose of this study was to examine differences in auditory perception caused by formant changes in profoundly hearing-impaired children with cochlear implants. The subjects were 10 children tested after 15 months of experience with the implant; their mean chronological age was 8.4 years (SD = 2.9 years). Auditory perception ability was assessed using acoustic-synthetic vowels. Each acoustic-synthetic vowel combined F1, F2, and F3 into a single vowel; 42 synthetic sounds were produced using a Speech GUI (Graphic User Interface) program. The data were analyzed with clustering analysis and on-line analytical processing of perception scores for the acoustic-synthetic vowels. The results showed that the auditory perception scores of the cochlear-implanted children were higher for F2-synthetic vowels than for F1-synthetic vowels, and that the children perceived vowel differences in terms of the distance ratio between F1 and F2 within a specific vowel.

  • PDF
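The study above builds vowels by combining F1, F2, and F3. The paper's Speech GUI program is not available, but the general technique it names, driving a cascade of second-order formant resonators with a glottal impulse train, can be sketched as follows. All frequencies, bandwidths, and function names are illustrative assumptions, not values from the study.

```python
import math

def resonator(signal, f, bw, fs):
    """Second-order IIR resonator: one formant at centre frequency f (Hz)
    with bandwidth bw (Hz), sample rate fs."""
    r = math.exp(-math.pi * bw / fs)
    a1 = 2.0 * r * math.cos(2.0 * math.pi * f / fs)
    a2 = -r * r
    y = [0.0, 0.0]
    for x in signal:
        y.append(x + a1 * y[-1] + a2 * y[-2])
    return y[2:]

def synth_vowel(f1, f2, f3, f0=120, fs=16000, dur=0.3):
    """Drive a cascade of three formant resonators with an impulse train."""
    n = int(fs * dur)
    period = int(fs / f0)
    out = [1.0 if i % period == 0 else 0.0 for i in range(n)]   # glottal impulse train
    for f, bw in ((f1, 60.0), (f2, 90.0), (f3, 120.0)):         # illustrative bandwidths
        out = resonator(out, f, bw, fs)
    peak = max(abs(s) for s in out) or 1.0
    return [s / peak for s in out]                               # normalize to [-1, 1]

# An /a/-like vowel with illustrative formants F1=700, F2=1200, F3=2600 Hz
samples = synth_vowel(700.0, 1200.0, 2600.0)
```

Shifting only `f2` while holding `f1` fixed produces the kind of F2-manipulated stimulus continuum the study describes.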

A Link between Perceived and Produced Vowel Spaces of Korean Learners of English (한국인 영어학습자의 지각 모음공간과 발화 모음공간의 연계)

  • Yang, Byunggon
    • Phonetics and Speech Sciences / v.6 no.3 / pp.81-89 / 2014
  • Korean learners of English tend to have difficulty perceiving and producing English vowels. The purpose of this study is to examine the link between the perceived and produced vowel spaces of Korean learners of English. Sixteen Korean male and female participants perceived two sets of English synthetic vowels on a computer monitor and rated their naturalness. The same participants produced English vowels in a carrier sentence with high and low pitch variation in a clear speaking mode. The author compared the perceived and produced vowel spaces in terms of the pitch and gender variables. Results showed that the perceived vowel spaces were not significantly different for either variable: the Korean learners perceived the vowels similarly and did not differentiate the tense-lax vowel pairs or the low vowels. Secondly, the produced vowel spaces of the male and female groups showed a 25% difference, which may stem from physiological differences in vocal tract length. Thirdly, the comparison of the perceived and produced vowel spaces revealed that although the vowel space patterns of the Korean male and female learners appeared similar, which may suggest a relative link between perception and production, statistical differences existed for some vowels because of the acoustical properties of the synthetic vowels, which may suggest an independent link. The author concluded that any comparison between the perceived and produced vowel spaces of nonnative speakers should be made cautiously. Further studies would be desirable to examine how Koreans perceive different sets of synthetic vowels.

Effects of F1/F2 Manipulation on the Perception of Korean Vowels /o/ and /u/ (F1/F2의 변화가 한국어 /오/, /우/ 모음의 지각판별에 미치는 영향)

  • Yun, Jihyeon; Seong, Cheoljae
    • Phonetics and Speech Sciences / v.5 no.3 / pp.39-46 / 2013
  • This study examined the perception of two Korean vowels using F1/F2-manipulated synthetic vowels. Previous studies indicated that the acoustic spaces of Korean /o/ and /u/ overlap in terms of the first two formants. A continuum of eleven synthetic vowels was used as stimuli. The experiment consisted of three tasks: an /o/ identification task (yes-no), an /u/ identification task (yes-no), and a forced-choice identification task (/o/-/u/). ROC (Receiver Operating Characteristic) analysis and logistic regression were performed to calculate the boundary criterion between the two vowels along the stimulus continuum and to predict the perceptual judgment from F1 and F2. The results indicated that the region between stimulus no. 5 (F1 = 342 Hz, F2 = 691 Hz) and no. 6 (F1 = 336 Hz, F2 = 700 Hz) was the perceptual boundary between /o/ and /u/, while stimulus no. 0 (F1 = 405 Hz, F2 = 666 Hz) and no. 10 (F1 = 321 Hz, F2 = 743 Hz) were at opposite ends of the continuum. The influence of F2 on the perception of the vowel categories was predominant over that of F1.
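The study locates the category boundary with logistic regression along an 11-step continuum. As a rough illustration of that idea only, not the authors' analysis, the sketch below fits a logistic curve to identification proportions by gradient ascent and returns the 50% crossover point; the proportions and step indices are invented for the example.

```python
import math

def boundary_50(steps, p_yes, lr=0.5, epochs=20000):
    """Fit P(yes) = 1 / (1 + exp(-(b0 + b1*(x - mean)))) to identification
    proportions by gradient ascent and return the 50% crossover point."""
    m = sum(steps) / len(steps)
    xc = [x - m for x in steps]           # centre the continuum for stable fitting
    b0 = b1 = 0.0
    for _ in range(epochs):
        g0 = g1 = 0.0
        for xi, yi in zip(xc, p_yes):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
            g0 += yi - p                   # gradient of the soft-label log-likelihood
            g1 += (yi - p) * xi
        b0 += lr * g0 / len(xc)
        b1 += lr * g1 / len(xc)
    return m - b0 / b1                     # undo the centring

# Invented /u/-identification proportions along an 11-step continuum
steps = list(range(11))
p_u = [0.02, 0.05, 0.08, 0.15, 0.30, 0.45, 0.60, 0.80, 0.90, 0.95, 0.98]
boundary = boundary_50(steps, p_u)         # crossover near the middle of the continuum
```

With real data, each stimulus step would map back to its (F1, F2) pair, so the crossover step identifies a boundary region in formant space.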

Spectral Characteristics and Formant Bandwidths of English Vowels by American Males with Different Speaking Styles (발화방식에 따른 미국인 남성 영어모음의 스펙트럼 특성과 포먼트 대역)

  • Yang, Byunggon
    • Phonetics and Speech Sciences / v.6 no.4 / pp.91-99 / 2014
  • Speaking styles tend to influence the spectral characteristics of produced speech, yet few studies examine these characteristics because processing large amounts of spectral data is complicated. The purpose of this study was to examine the spectral characteristics and formant bandwidths of English vowels produced by nine American males in different speaking styles: clear versus conversational, and high- versus low-pitched voice. Praat was used to collect pitch-corrected long-term averaged spectra and the bandwidths of the first two formants of eleven vowels in each speaking style. Results showed that the spectral characteristics of the vowels varied systematically with speaking style: clear speech showed higher spectral energy than conversational speech, and the high-pitched voice showed higher energy than the low-pitched voice. In addition, the front and back vowel groups showed different spectral characteristics. Secondly, neither B1 nor B2 differed significantly across the speaking styles. B1 was generally lower than B2, reflecting the source spectrum and the radiation effect. However, there was a statistically significant difference in B2 between the front and back vowel groups. The author concluded that spectral characteristics reflect speaking styles systematically, while bandwidths measured at a few formant frequency points do not properly reveal style differences. Further studies would be desirable to examine how people evaluate different sets of synthetic vowels with modified spectral characteristics or bandwidths.

Effect of Glottal Wave Shape on the Vowel Phoneme Synthesis (성문파형이 모음음소합성에 미치는 영향)

  • 안점영; 김명기
    • The Journal of Korean Institute of Communications and Information Sciences / v.10 no.4 / pp.159-167 / 1985
  • By deriving glottal waves directly from the Korean vowels /a, e, i, o, u/ recorded by a male speaker, it was demonstrated that the glottal wave differs depending on the vowel. After resynthesizing the vowels with five simulated glottal waves, the effects of glottal wave shape on speech synthesis were compared in terms of waveform. Changes could be seen in the waveforms of the synthetic vowels as the shape, opening time, and closing time were varied; it was therefore confirmed that, in speech synthesis, the glottal wave shape is an important factor in improving speech quality.

  • PDF
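The abstract reports that the shape, opening time, and closing time of the glottal wave affect synthetic vowel quality. A common textbook way to parameterize exactly those properties is a Rosenberg-type glottal pulse; the sketch below is that generic model, not the simulated waves used in the paper, and the fraction values are illustrative.

```python
import math

def rosenberg_pulse(n_samples, open_frac=0.6, close_frac=0.3):
    """One period of a Rosenberg-type glottal pulse.  open_frac and close_frac
    set the opening and closing times as fractions of the period; the
    remainder of the period is the closed phase."""
    n_open = int(n_samples * open_frac)
    n_close = int(n_samples * close_frac)
    pulse = []
    for i in range(n_samples):
        if i < n_open:                         # opening phase: rising half-cosine
            pulse.append(0.5 * (1.0 - math.cos(math.pi * i / n_open)))
        elif i < n_open + n_close:             # closing phase: falling quarter-cosine
            pulse.append(math.cos(math.pi * (i - n_open) / (2.0 * n_close)))
        else:                                  # closed phase
            pulse.append(0.0)
    return pulse

# Varying open_frac/close_frac changes the pulse shape, and hence the
# spectrum of any vowel synthesized from this source.
pulse = rosenberg_pulse(160, open_frac=0.6, close_frac=0.3)
```

Feeding repeated pulses like this through a formant filter, instead of a plain impulse train, is one way to study the waveform changes the paper describes.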

A Study on the Text-to-Speech Conversion Using the Formant Synthesis Method (포만트 합성방식을 이용한 문자-음성 변환에 관한 연구)

  • Choi, Jin-San; Kim, Yin-Nyun; See, Jeong-Wook; Bae, Geun-Sune
    • Speech Sciences / v.2 / pp.9-23 / 1997
  • Through iterative analysis and synthesis experiments on Korean monosyllables, a Korean text-to-speech system was implemented using a phoneme-based formant synthesis method. Since the formants of initial and final consonants in this system varied considerably depending on the medial vowel, the database for each phoneme was made up of formants conditioned on the medial vowel, together with duration information for the transition region. These techniques were needed to improve the intelligibility of the synthetic speech. The paper also investigates methods of concatenating the synthesis units to improve the quality of the synthetic speech.

  • PDF

Synchronization of Synthetic Facial Image Sequences and Synthetic Speech for Virtual Reality (가상현실을 위한 합성얼굴 동영상과 합성음성의 동기구현)

  • 최장석; 이기영
    • Journal of the Korean Institute of Telematics and Electronics S / v.35S no.7 / pp.95-102 / 1998
  • This paper proposes a method of synchronizing synthetic facial image sequences with synthetic speech. LP-PSOLA synthesizes the speech for each demi-syllable; 3,040 demi-syllables are provided for unrestricted synthesis of Korean speech. For synthesis of the facial image sequences, the paper defines 11 fundamental patterns for the lip shapes of the Korean consonants and vowels. These fundamental lip shapes allow all Korean sentences to be pronounced. The image synthesis method assigns the fundamental lip shapes to key frames according to the initial, medial, and final sound of each syllable in the Korean input text, and interpolates naturally changing lip shapes in the in-between frames. The number of in-between frames is estimated from the duration of each syllable of the synthetic speech, and this estimation accomplishes the synchronization of the facial image sequences and the speech. Speech synthesis requires disk storage for the 3,040 demi-syllables, whereas synthesis of the facial image sequences requires storage for only one image, because all frames are synthesized from the neutral face. The method realizes a synchronized system that can read Korean sentences aloud with synthetic speech and synthetic facial image sequences.

  • PDF
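The synchronization scheme above estimates the number of in-between frames from each syllable's duration in the synthetic speech and interpolates lip shapes between key frames. A minimal sketch of that estimate and interpolation, with invented function names and a hypothetical 30 fps frame rate, might look like:

```python
def inbetween_frames(syllable_dur_s, fps=30):
    """Number of frames to interpolate between two key lip shapes,
    estimated from the syllable's duration in the synthetic speech."""
    return max(0, round(syllable_dur_s * fps) - 1)

def lerp_shape(a, b, t):
    """Linearly interpolate between two lip-shape parameter vectors (0 <= t <= 1)."""
    return [x + (y - x) * t for x, y in zip(a, b)]

# A 0.2 s syllable at 30 fps leaves 5 in-between frames between its key frames.
n = inbetween_frames(0.2)
frames = [lerp_shape([0.0, 0.1], [1.0, 0.5], (k + 1) / (n + 1)) for k in range(n)]
```

Because the frame count is derived from the speech duration itself, the mouth animation and the audio stay aligned per syllable, which is the core of the paper's synchronization idea.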

Factors Affecting Changes in English from a Synthetic Language to an Analytic One

  • Hyun, Wan-Song
    • English Language & Literature Teaching / v.13 no.2 / pp.47-61 / 2007
  • The purpose of this paper is to survey the major elements that have changed English from a synthetic language to an analytic one. To that end, the paper examines the differences between synthetic and analytic languages. In synthetic languages, the relations of words in a sentence are determined synthetically by means of inflections, while in analytic languages, the functions of words in a sentence are determined analytically by means of word order and function words. Old English, with its full inflectional system, shows the synthetic character. In the course of time, however, Old English inflections came to be lost through phonetic change and the operation of analogy, which made English dependent on word order and function words to signal the relations of words in a sentence. The major phonetic changes that shifted English are the change of final /m/ to /n/, the leveling of unstressed vowels, the loss of final /n/, and the decay of schwa in final syllables. These changes led to the reduction of English inflections as well as the loss of grammatical gender. The operation of analogy, the tendency of language to follow certain patterns and to adapt a less common form to a more familiar one, has also played an important role in changing English.

  • PDF

Voice quality transform using jitter synthesis (Jitter 합성에 의한 음질변환에 관한 연구)

  • Jo, Cheolwoo
    • Phonetics and Speech Sciences / v.10 no.4 / pp.121-125 / 2018
  • This paper describes procedures for changing and measuring voice quality in terms of jitter. A jitter synthesis method was applied to the TD-PSOLA analysis system of the Praat software. The jitter component is synthesized based on a Gaussian random noise model, and the TD-PSOLA re-synthesis process is used to synthesize the modified voice with artificial jitter. Various vocal jitter parameters are used to measure the change in quality caused by systematic artificial jitter change. Synthetic vowels, natural vowels, and short sentences are used to check the change in voice quality through the synthesizer model. The results show that the suggested method is useful for voice quality control in a limited way and can be used to alter the jitter component of a voice.
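The paper synthesizes jitter from a Gaussian random noise model and then measures the resulting change. A minimal sketch of that idea, perturbing pitch-period lengths with Gaussian noise and measuring local jitter on the result, is shown below; it is not the Praat/TD-PSOLA implementation, and all parameter values are illustrative.

```python
import random

def jittered_periods(f0, fs, n_periods, jitter_pct, seed=0):
    """Pitch-period lengths (in samples) perturbed by Gaussian noise whose
    standard deviation is jitter_pct percent of the mean period."""
    rng = random.Random(seed)
    t0 = fs / f0                                   # mean period in samples
    sigma = t0 * jitter_pct / 100.0
    return [max(1, round(t0 + rng.gauss(0.0, sigma))) for _ in range(n_periods)]

def local_jitter(periods):
    """Local jitter: mean absolute difference between consecutive periods,
    divided by the mean period."""
    diffs = [abs(a - b) for a, b in zip(periods, periods[1:])]
    return (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

periods = jittered_periods(f0=100, fs=16000, n_periods=200, jitter_pct=2.0)
measured = local_jitter(periods)
```

In a PSOLA-style synthesizer, these perturbed period lengths would control where successive pitch marks are placed during overlap-add re-synthesis.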

Speech Synthesis Based on CVC Speech Segments Extracted from Continuous Speech (연속 음성으로부터 추출한 CVC 음성세그먼트 기반의 음성합성)

  • 김재홍; 조관선; 이철희
    • The Journal of the Acoustical Society of Korea / v.18 no.7 / pp.10-16 / 1999
  • In this paper, we propose a concatenation-based speech synthesizer using CVC (consonant-vowel-consonant) speech segments extracted from a continuous speech corpus that was not specifically designed for synthesis. Natural synthetic speech can be generated by properly modelling coarticulation effects between phonemes and by using natural prosodic variations. In general, the CVC synthesis unit shows less acoustic degradation of speech quality, since concatenation points are located in the consonant region and the unit can properly model the coarticulation of vowels affected by surrounding consonants. We analyze the characteristics and the number of required synthesis units of four types of speech synthesis methods that use CVC units, compare the speech quality of the four types, and propose a new synthesis method based on the most promising type in terms of speech quality and implementability. We then implement the method using the speech corpus and synthesize various examples. The CVC speech segments that are not in the speech corpus are substituted by demonstrate speech segments. Experiments demonstrate that CVC speech segments extracted from a continuous speech corpus of about 100 Mbytes can produce high-quality synthetic speech.

  • PDF