• Title/Summary/Keyword: synthetic vowel

Search results: 10

Perception Ability of Synthetic Vowels in Cochlear Implanted Children (모음의 포먼트 변형에 따른 인공와우 이식 아동의 청각적 인지변화)

  • Huh, Myung-Jin
    • MALSORI / no.64 / pp.1-14 / 2007
  • The purpose of this study was to examine differences in auditory perception caused by formant changes in profoundly hearing-impaired children with cochlear implants. The subjects were ten children with at least 15 months of implant experience; their mean chronological age was 8.4 years (SD 2.9). Auditory perception ability was assessed using acoustic-synthetic vowels. Each acoustic-synthetic vowel combined F1, F2, and F3 into a single vowel, and 42 synthetic sounds were produced using a Speech GUI (Graphic User Interface) program. The data were analyzed with clustering analysis and on-line analytical processing to assess perception of the acoustic-synthetic vowels. The results showed that the auditory perception scores of the cochlear-implanted children were higher for F2 synthetic vowels than for F1 synthetic vowels, and that the children perceived vowel differences in terms of the distance ratios between F1 and F2 in specific vowels.
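Combining F1, F2, and F3 into a synthetic vowel, as described in this abstract, is classically done by passing a pulse train through cascaded second-order resonators. A minimal sketch in Python — the formant and bandwidth values are illustrative assumptions, not the study's stimuli, and the Speech GUI program itself is not reproduced here:

```python
import math

def resonator_coeffs(f, bw, fs):
    # Second-order digital resonator (Klatt-style):
    #   y[n] = A*x[n] + B*y[n-1] + C*y[n-2]
    t = 1.0 / fs
    c = -math.exp(-2.0 * math.pi * bw * t)
    b = 2.0 * math.exp(-math.pi * bw * t) * math.cos(2.0 * math.pi * f * t)
    a = 1.0 - b - c          # normalizes the gain at DC to 1
    return a, b, c

def synth_vowel(formants, bandwidths, f0=120, fs=16000, dur=0.2):
    """Cascade-formant synthesis: an impulse train at f0 is passed
    through one resonator per formant."""
    n = int(fs * dur)
    period = int(fs / f0)
    signal = [1.0 if i % period == 0 else 0.0 for i in range(n)]
    for f, bw in zip(formants, bandwidths):
        a, b, c = resonator_coeffs(f, bw, fs)
        y1 = y2 = 0.0
        out = []
        for x in signal:
            y = a * x + b * y1 + c * y2
            out.append(y)
            y2, y1 = y1, y
        signal = out
    return signal

# Illustrative /a/-like formants and bandwidths in Hz (assumed values):
wave = synth_vowel([800, 1200, 2500], [80, 90, 120])
```

Because each resonator filters the output of the previous one, the resulting spectrum shows a peak at each formant frequency, which is what lets a study like this manipulate F1 and F2 independently.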

A Link between Perceived and Produced Vowel Spaces of Korean Learners of English (한국인 영어학습자의 지각 모음공간과 발화 모음공간의 연계)

  • Yang, Byunggon
    • Phonetics and Speech Sciences / v.6 no.3 / pp.81-89 / 2014
  • Korean English learners tend to have difficulty perceiving and producing English vowels. The purpose of this study is to examine the link between the perceived and produced vowel spaces of Korean learners of English. Sixteen Korean male and female participants listened to two sets of English synthetic vowels presented on a computer monitor and rated their naturalness. The same participants produced English vowels in a carrier sentence with high and low pitch variation in a clear speaking mode. The author compared the perceived and produced vowel spaces in terms of the pitch and gender variables. Results showed that the perceived vowel spaces were not significantly different for either variable: Korean learners perceived the vowels similarly, differentiating neither the tense-lax vowel pairs nor the low vowels. Secondly, the produced vowel spaces of the male and female groups showed a 25% difference, which may come from physiological differences in vocal tract length. Thirdly, the comparison of the perceived and produced vowel spaces revealed that although the vowel space patterns of the Korean male and female learners appeared similar, which may suggest a relative link between perception and production, statistical differences existed in some vowels because of the acoustical properties of the synthetic vowels, which may suggest an independent link. The author concluded that any comparison between the perceived and produced vowel spaces of nonnative speakers should be made cautiously. Further studies examining how Koreans perceive different sets of synthetic vowels would be desirable.
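A vowel-space size difference like the male-female gap reported in this abstract is commonly quantified as the area of the polygon spanned by corner vowels in the F1-F2 plane. A minimal sketch — the formant values below are rough textbook-style illustrations, not measurements from this study:

```python
def vowel_space_area(points):
    """Polygon area (shoelace formula) of (F1, F2) corner vowels listed
    in order around the perimeter."""
    s = 0.0
    for i in range(len(points)):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % len(points)]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# Illustrative corner-vowel (F1, F2) values in Hz for /i/, /ae/, /a/, /u/
# (assumed values for demonstration only):
male   = [(270, 2290), (660, 1720), (730, 1090), (300, 870)]
female = [(310, 2790), (860, 2050), (850, 1220), (370, 950)]
ratio = vowel_space_area(female) / vowel_space_area(male)
```

A gender difference in vowel-space size, such as the 25% figure reported above, would appear directly in this area ratio.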

Effects of F1/F2 Manipulation on the Perception of Korean Vowels /o/ and /u/ (F1/F2의 변화가 한국어 /오/, /우/ 모음의 지각판별에 미치는 영향)

  • Yun, Jihyeon; Seong, Cheoljae
    • Phonetics and Speech Sciences / v.5 no.3 / pp.39-46 / 2013
  • This study examined the perception of two Korean vowels using F1/F2-manipulated synthetic vowels. Previous studies indicated an overlap between the acoustic spaces of Korean /o/ and /u/ in terms of the first two formants. A continuum of eleven synthetic vowels was used as stimuli. The experiment consisted of three tasks: an /o/ identification task (yes-no), an /u/ identification task (yes-no), and a forced-choice identification task (/o/-/u/). ROC (Receiver Operating Characteristic) analysis and logistic regression were performed to calculate the boundary criterion between the two vowels along the stimulus continuum and to predict the perceptual judgment from F1 and F2. The results indicated that the region between stimulus no. 5 (F1 = 342 Hz, F2 = 691 Hz) and no. 6 (F1 = 336 Hz, F2 = 700 Hz) was the perceptual boundary between /o/ and /u/, with stimulus no. 0 (F1 = 405 Hz, F2 = 666 Hz) and no. 10 (F1 = 321 Hz, F2 = 743 Hz) at opposite ends of the continuum. The influence of F2 on the perception of the vowel categories was predominant over that of F1.
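The logistic-regression step for locating a category boundary along a stimulus continuum can be sketched as follows. The responses below are hypothetical, and a real analysis would use a statistics package; this is only a gradient-descent illustration of the idea that the 50% crossover of the fitted curve is the perceptual boundary:

```python
import math

def fit_logistic(x, y, lr=0.1, steps=20000):
    """1-D logistic regression by gradient descent:
    P(respond /u/) = 1 / (1 + exp(-(b0 + b1*x)))."""
    b0 = b1 = 0.0
    n = len(x)
    for _ in range(steps):
        g0 = g1 = 0.0
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
            g0 += p - yi
            g1 += (p - yi) * xi
        b0 -= lr * g0 / n
        b1 -= lr * g1 / n
    return b0, b1

# Hypothetical yes/no responses along an 11-step /o/-/u/ continuum
# (0 = heard /o/, 1 = heard /u/); not the study's actual data.
stimuli   = list(range(11))
responses = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
b0, b1 = fit_logistic(stimuli, responses)
boundary = -b0 / b1   # stimulus index where P(/u/) = 0.5
```

With responses that flip between two adjacent stimuli, the fitted crossover falls between them, mirroring the boundary region the study reports between stimulus no. 5 and no. 6.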

A Study on the Korean Text-to-Speech Using Demisyllable Units (반음절단위를 이용한 한국어 음성합성에 관한 연구)

  • Yun, Gi-Sun; Park, Sung-Han
    • Journal of the Korean Institute of Telematics and Electronics / v.27 no.10 / pp.138-145 / 1990
  • This paper presents a rule-based speech synthesis method that improves the naturalness of synthetic speech while using a small database of demisyllable units. A 12-pole Linear Predictive Coding (LPC) method is used to analyze the demisyllable speech signals. A syllable and vowel concatenation rule is developed to improve the naturalness and intelligibility of the synthetic speech. In addition, a phonological structure transformation rule using a neural network and prosody rules are applied to the synthetic speech.
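The 12-pole LPC analysis mentioned above solves for prediction coefficients from the frame's autocorrelation, usually via the Levinson-Durbin recursion. A minimal sketch under that standard formulation (order and signal here are illustrative, not tied to the paper's data):

```python
def autocorr(x, order):
    """Autocorrelation r[0..order] of a signal frame."""
    return [sum(x[i] * x[i + k] for i in range(len(x) - k))
            for k in range(order + 1)]

def levinson_durbin(r, order):
    """Solve the LPC normal equations: find a[1..p] minimizing the
    error of the predictor x[n] ~ sum_k a[k] * x[n-k].
    Returns (coefficients, residual energy)."""
    a = [0.0] * (order + 1)
    e = r[0]
    for i in range(1, order + 1):
        k = (r[i] - sum(a[j] * r[i - j] for j in range(1, i))) / e
        new_a = a[:]
        new_a[i] = k
        for j in range(1, i):
            new_a[j] = a[j] - k * a[i - j]
        a = new_a
        e *= 1.0 - k * k
    return a[1:], e
```

A 12-pole analysis like the paper's would amount to `levinson_durbin(autocorr(frame, 12), 12)` on each demisyllable frame.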

Author Notations Based on the Structure of the Author Table for Korean Libraries (구조론에 입각한 한국 저자기호표 연구 -한글의 구조상의 특색, 기입의 형식, 배열, 표기법 문제 등과 관련한 고찰-)

  • Lee Jai-chul
    • Journal of the Korean Society for Library and Information Science / v.1 / pp.1-58 / 1970
  • As to the structure of author tables for Korean libraries that use the Korean alphabet Hangul for filing, no system is more relevant to author notations than the analytico-synthetic system. A Korean character is a syllable divisible into 'consonant + vowel' or 'consonant + vowel + consonant,' with the first element a consonant and the second a vowel. When these elements are synthesized with figure representation, they make an enumerative two-figure table. Individualizing and assigning can therefore be done without listing many entries in the table or looking up notations in ready-made enumerative author tables. We still lack general agreement on the form of entry, the reading of Japanese and Chinese names, the transliteration of foreign words, and the filing system; moreover, the notation system adopted should be flexible and hospitable enough to meet anticipated changes. The writer introduces an author notation system that can make 150,000 divisions by combining figures, making it possible to endure changes through readjustment. It is considered effective, convenient, and efficacious for individualization.
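The analytico-synthetic principle described above — deriving a notation from the consonant and vowel elements of a syllable — can be illustrated with the standard Unicode arithmetic for precomposed Hangul syllables. The digit assignment below is a hypothetical illustration of the synthesis idea only, not the table proposed in the paper:

```python
# Hangul syllable arithmetic from the Unicode standard:
#   code = 0xAC00 + (initial*21 + vowel)*28 + final
def decompose(syllable):
    """Split a precomposed Hangul syllable into indices for its initial
    consonant, vowel, and final consonant (0 = no final)."""
    code = ord(syllable) - 0xAC00
    if not 0 <= code <= 11171:
        raise ValueError("not a precomposed Hangul syllable")
    return code // 588, (code % 588) // 28, code % 28

def author_notation(name):
    """Hypothetical analytico-synthetic notation: one two-figure code
    built from the initial consonant and vowel of the surname's first
    syllable. The digit assignment is illustrative, not the paper's
    table."""
    initial, vowel, _ = decompose(name[0])
    return "%02d%02d" % (initial, vowel)
```

Because the code is computed from the syllable's elements rather than looked up, no enumerated list of entries is needed, which is the advantage the abstract claims for the analytico-synthetic approach.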

Spectral Characteristics and Formant Bandwidths of English Vowels by American Males with Different Speaking Styles (발화방식에 따른 미국인 남성 영어모음의 스펙트럼 특성과 포먼트 대역)

  • Yang, Byunggon
    • Phonetics and Speech Sciences / v.6 no.4 / pp.91-99 / 2014
  • Speaking styles tend to influence the spectral characteristics of produced speech, yet few studies have examined these characteristics because processing the large amount of spectral data is complicated. The purpose of this study was to examine the spectral characteristics and formant bandwidths of English vowels produced by nine American males in different speaking styles: clear or conversational style, and high- or low-pitched voice. Praat was used to collect pitch-corrected long-term averaged spectra and the bandwidths of the first two formants of eleven vowels in each speaking style. Results showed that the spectral characteristics of the vowels varied systematically with speaking style: clear speech showed higher spectral energy than conversational speech, as did the high-pitched voice over the low-pitched voice. In addition, the front and back vowel groups showed different spectral characteristics. Secondly, there was no statistically significant difference between B1 and B2 across the speaking styles; B1 was generally lower than B2, reflecting the source spectrum and radiation effect. However, there was a statistically significant difference in B2 between the front and back vowel groups. The author concluded that spectral characteristics reflect speaking styles systematically, while bandwidths measured at a few formant frequency points do not reveal style differences properly. Further studies examining how listeners evaluate synthetic vowels with modified spectral characteristics or bandwidths would be desirable.

Effect of Glottal Wave Shape on the Vowel Phoneme Synthesis (성문파형이 모음음소합성에 미치는 영향)

  • 안점영; 김명기
    • The Journal of Korean Institute of Communications and Information Sciences / v.10 no.4 / pp.159-167 / 1985
  • Glottal waves derived directly from the Korean vowels /a, e, i, o, u/, recorded by a male speaker, were shown to differ depending on the vowel. After resynthesizing the vowels with five simulated glottal waves, the effects of glottal wave shape on speech synthesis were compared in terms of waveform. Changes could be seen in the waveforms of the synthetic vowels as the shape, opening time, and closing time were varied; it was therefore confirmed that in speech synthesis the glottal wave shape is an important factor in improving speech quality.
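Simulated glottal waves with adjustable opening and closing times, as used above, can be sketched with a Rosenberg-type pulse model. The phase fractions below are illustrative assumptions, not the five shapes tested in the paper:

```python
import math

def rosenberg_pulse(n_samples, open_frac=0.4, close_frac=0.16):
    """One period of a Rosenberg-type glottal pulse. open_frac and
    close_frac set the opening and closing phases as fractions of the
    period; varying them changes the pulse shape and hence the
    synthetic voice quality."""
    n1 = int(open_frac * n_samples)    # opening phase length
    n2 = int(close_frac * n_samples)   # closing phase length
    pulse = []
    for n in range(n_samples):
        if n < n1:
            # rising half-cosine from 0 up to the peak
            pulse.append(0.5 * (1.0 - math.cos(math.pi * n / n1)))
        elif n < n1 + n2:
            # faster quarter-cosine fall back toward closure
            pulse.append(math.cos(math.pi * (n - n1) / (2.0 * n2)))
        else:
            # closed phase
            pulse.append(0.0)
    return pulse
```

Exciting a vowel synthesizer with a train of such pulses instead of raw impulses, and varying the opening and closing fractions, reproduces the kind of shape manipulation whose effect the paper examines.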

The Error Pattern Analysis of the HMM-Based Automatic Phoneme Segmentation (HMM기반 자동음소분할기의 음소분할 오류 유형 분석)

  • Kim Min-Je; Lee Jung-Chul; Kim Jong-Jin
    • The Journal of the Acoustical Society of Korea / v.25 no.5 / pp.213-221 / 2006
  • Phone segmentation of the speech waveform is especially important for concatenative text-to-speech synthesis, which builds its synthetic units from segmented corpora, because the quality of the synthesized speech depends critically on the accuracy of the segmentation. Initially, phone segmentation was performed manually, but this requires enormous effort and causes long delays. HMM-based approaches adopted from automatic speech recognition are the most widely used for automatic segmentation in speech synthesis, providing a consistent and accurate phone labeling scheme. Although the HMM-based approach has been successful, it may locate a phone boundary at a different position than expected. In this paper, we categorized adjacent phoneme pairs, analyzed the mismatches between hand-labeled transcriptions and HMM-based labels, and describe the dominant error patterns that must be improved for speech synthesis. For the experiment, the hand-labeled standard Korean speech DB from ETRI was used as the reference DB. A time difference larger than 20 ms between a hand-labeled phoneme boundary and an auto-aligned boundary was treated as an automatic segmentation error. Our results for the female speaker revealed that plosive-vowel, affricate-vowel, and vowel-liquid pairs showed high accuracies of 99%, 99.5%, and 99%, respectively, while stop-nasal, stop-liquid, and nasal-liquid pairs showed very low accuracies of 45%, 50%, and 55%. The results for the male speaker showed a similar tendency.
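The 20 ms error criterion described above amounts to counting, per phoneme-pair category, how many auto-aligned boundaries fall within tolerance of the hand labels. A minimal sketch with hypothetical label data (the times and category names below are made up for illustration):

```python
def boundary_accuracy(hand, auto, tol=0.020):
    """Fraction of automatically aligned boundaries lying within `tol`
    seconds of the hand-labeled boundary, grouped by phone-pair
    category. `hand` holds (category, time) pairs; `auto` holds the
    corresponding auto-aligned times."""
    stats = {}
    for (pair, h_time), a_time in zip(hand, auto):
        ok, total = stats.get(pair, (0, 0))
        stats[pair] = (ok + (abs(h_time - a_time) <= tol), total + 1)
    return {pair: ok / total for pair, (ok, total) in stats.items()}

# Hypothetical labels: (phone-pair category, boundary time in seconds)
hand = [("plosive-vowel", 0.120), ("stop-nasal", 0.340), ("plosive-vowel", 0.510)]
auto = [0.125, 0.395, 0.505]
acc = boundary_accuracy(hand, auto)
```

Tabulating such per-category fractions over a whole corpus yields exactly the kind of accuracy breakdown (plosive-vowel high, stop-nasal low) that the paper reports.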

Speech Synthesis Based on CVC Speech Segments Extracted from Continuous Speech (연속 음성으로부터 추출한 CVC 음성세그먼트 기반의 음성합성)

  • 김재홍; 조관선; 이철희
    • The Journal of the Acoustical Society of Korea / v.18 no.7 / pp.10-16 / 1999
  • In this paper, we propose a concatenation-based speech synthesizer using CVC (consonant-vowel-consonant) speech segments extracted from an undesigned continuous speech corpus. Natural synthetic speech can be generated by properly modelling coarticulation effects between phonemes and by using natural prosodic variations. In general, the CVC synthesis unit shows less acoustic degradation of speech quality, since concatenation points are located in the consonant region, and it can properly model the coarticulation of vowels affected by surrounding consonants. We analyze the characteristics and the number of required synthesis units of four types of speech synthesis methods that use CVC synthesis units. Furthermore, we compare the speech quality of the four types and propose a new synthesis method based on the most promising type in terms of speech quality and implementability. We then implement the method using the speech corpus and synthesize various examples; CVC speech segments that are not in the speech corpus are replaced with substitute segments. Experiments demonstrate that CVC speech segments extracted from an approximately 100-Mbyte continuous speech corpus can produce high-quality synthetic speech.

A comparison of CPP analysis among breathiness ranks (기식 등급에 따른 CPP (Cepstral Peak Prominence) 분석 비교)

  • Kang, Youngae; Koo, Bonseok; Jo, Cheolwoo
    • Phonetics and Speech Sciences / v.7 no.1 / pp.21-26 / 2015
  • The aim of this study is to synthesize pathological breathy voice and, through cepstral analysis, to build a cepstral peak prominence (CPP) table by breathiness rank that supplements the reliability of perceptual auditory judgment. The KlattGrid synthesizer included in Praat was used. The synthesis parameters were of two kinds, constants and variables: the constant parameters were pitch, amplitude, flutter, open phase, oral formants, and bandwidths; the variable parameters were breathiness (BR), aspiration amplitude (AH), and spectral tilt (TL). Five hundred sixty samples of a synthetic breathy male vowel /a/ were created, and three raters ranked their breathiness. Of these, 217 samples proved inadequate on the basis of perceptual judgment and cepstral analysis, leaving 343 samples. The CPP values and other related parameters from the cepstral analysis were classified under four breathiness ranks (B0-B3). The mean and standard deviation of CPP were 16.10 ± 1.15 dB (B0), 13.68 ± 1.34 dB (B1), 10.97 ± 1.41 dB (B2), and 3.03 ± 4.07 dB (B3). The CPP value decreases toward the severe breathiness group because such voices contain much noise and few harmonics.
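The CPP measure used above is the height of the cepstral peak in the expected pitch range above a regression line fit to the cepstrum. A minimal pure-Python sketch — a slow textbook DFT is used for clarity; real analyses use Praat or an FFT library and differ in windowing, smoothing, and trend-line details:

```python
import cmath, math

def cpp(signal, fs, f0_range=(60, 300)):
    """Cepstral peak prominence in dB: the height of the cepstral peak
    within the expected pitch range above a straight line regressed
    over the cepstrum in that range."""
    n = len(signal)
    # log-magnitude spectrum via a direct DFT (O(n^2), fine for a demo)
    spec = [sum(s * cmath.exp(-2j * cmath.pi * k * i / n)
                for i, s in enumerate(signal)) for k in range(n)]
    logmag = [20.0 * math.log10(abs(c) + 1e-12) for c in spec]
    # cepstrum = inverse DFT of the log-magnitude spectrum
    cep = [abs(sum(m * cmath.exp(2j * cmath.pi * q * k / n)
                   for k, m in enumerate(logmag))) / n for q in range(n)]
    # search quefrencies corresponding to the plausible pitch range
    lo, hi = int(fs / f0_range[1]), int(fs / f0_range[0])
    peak_q = max(range(lo, hi), key=lambda q: cep[q])
    # linear regression of the cepstrum over the searched range
    xs = list(range(lo, hi))
    ys = [cep[q] for q in xs]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return cep[peak_q] - (my + slope * (peak_q - mx))
```

A strongly periodic (non-breathy) signal yields a tall, isolated cepstral peak and thus a high CPP; added aspiration noise flattens the peak and lowers CPP, which matches the rank trend from B0 to B3 in the table above.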