• Title/Summary/Keyword: Phoneme

Search Results: 458

A study of reciting the formal poetries of Korea and French in digital era - Shijo(Korean verse) vs Sonnet (French) (콘텐츠를 위한 한ㆍ불 정형시가 낭송법의 비교 고찰)

  • 이산호
    • Sijohaknonchong / v.19 no.1 / pp.85-106 / 2003
  • Recently, the sonnet and the shijo, which represent French and Korean formal poetry respectively, tend to be read with the eyes only, as we have grown more accustomed to written literature. But even after almost three millennia of written literature and the increased use of digitalized poems, poetry retains its appeal to the ear as well as to the eye. Reading a poem only with the eyes may be misguided, because a poem is designed to be read aloud and understood by ear, and its aesthetic effect is diminished otherwise. It is essential to find the right way to recite a poem in this dramatically changed society, especially now that many shijos are being converted into digital forms to adapt to the new wave of our society. The sonnet and the shijo both emphasize the harmony of sounds and rhythms within a fixed structure, and each has its own prosody. The emotions of the speaker in a poem are expressed through words, and when these are pronounced, each phoneme carries its own phonemic characteristics. When The Broken Bell (Baudelaire) and Chopoong ga (Jong Seo Kim) are compared in terms of prosody and phonetics, the speaker's emotions prove to be closely related to the phonetic structure of each word. In The Broken Bell, the phonetic value of the rhymes, the repeated phonemes, the concentration of front and back vowels, and the rhythms of one-syllable words shape the overall image of the poem, which sets the productivity of bells against the sterility of the soul. Chopoong ga likewise conveys the determined and strong will of the speaker through frequent glottalized sounds, the distribution and concentration of certain vowels, and frequent use of plosives. As these examples show, phones, beats, and rhythms are not mere transmitters of meaning but possess expressive values of their own, and they should be the first things considered when reciting a poem.

  • PDF

A Study on the Continuous Speech Recognition for the Automatic Creation of International Phonetics (국제 음소의 자동 생성을 활용한 연속음성인식에 관한 연구)

  • Kim, Suk-Dong;Hong, Seong-Soo;Shin, Chwa-Cheul;Woo, In-Sung;Kang, Heung-Soon
    • Journal of Korea Game Society / v.7 no.2 / pp.83-90 / 2007
  • One result of the trend toward globalization is an increased number of projects that focus on natural language processing. Automatic speech recognition (ASR) technologies, for example, hold great promise for facilitating global communication and collaboration. Unfortunately, to date, most research projects focus on a single widely spoken language, so the cost of adapting a particular ASR tool to other languages is often prohibitive. This work takes a more general approach. We propose an International Phoneticizing Engine (IPE) that interprets input files supplied in our Phonetic Language Identity (PLI) format to build a dictionary. IPE is language independent and rule based. It operates by decomposing the dictionary-creation process into a set of well-defined steps. These steps reduce rule conflicts, allow rules to be created by people without linguistics training, and optimize run-time efficiency. Dictionaries created by the IPE can be used with the speech recognition system. IPE defines an easy-to-use, systematic approach that obtained a recognition rate of 92.55% for Korean speech and 89.93% for English. (An illustrative sketch of a rule-based dictionary-building step follows this entry.)

  • PDF
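
The abstract above describes a rule-based engine that turns language descriptions into pronunciation dictionaries. The PLI file format and the IPE rule set are not given in this listing, so the following is only a minimal, hypothetical sketch of the general idea: ordered grapheme-to-phoneme rewrite rules applied greedily to build a word-to-phoneme dictionary. All rules and symbols below are invented for illustration.

```python
# Hedged sketch (not the authors' IPE/PLI implementation): a toy rule-based
# grapheme-to-phoneme step of the kind a phoneticizing engine might apply
# when building a pronunciation dictionary for a recognizer.

# Ordered rewrite rules: (grapheme pattern, phoneme symbol). Longer patterns
# come first so they win over their single-letter prefixes.
RULES = [
    ("sh", "SH"), ("ch", "CH"),
    ("a", "AA"), ("e", "EH"), ("i", "IY"), ("o", "OW"), ("u", "UW"),
    ("s", "S"), ("t", "T"), ("k", "K"), ("n", "N"),
]

def to_phonemes(word: str) -> list[str]:
    """Greedily rewrite a word into phoneme symbols using the ordered rules."""
    phones, i = [], 0
    while i < len(word):
        for pattern, phone in RULES:
            if word.startswith(pattern, i):
                phones.append(phone)
                i += len(pattern)
                break
        else:
            i += 1  # no rule matched: skip the character
    return phones

def build_dictionary(words: list[str]) -> dict[str, list[str]]:
    """Create a word -> phoneme-sequence dictionary."""
    return {w: to_phonemes(w) for w in words}

if __name__ == "__main__":
    print(build_dictionary(["shine", "chat", "kit"]))
```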

Visualization of Korean Speech Based on the Distance of Acoustic Features (음성특징의 거리에 기반한 한국어 발음의 시각화)

  • Pok, Gou-Chol
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.13 no.3 / pp.197-205 / 2020
  • Korean has the characteristic that the pronunciation of phoneme units such as vowels and consonants is fixed and the pronunciation associated with the written form does not change, so foreign learners can approach the language relatively easily. When one pronounces words, phrases, or sentences, however, the pronunciation varies widely and in complex ways at syllable boundaries, and the correspondence between spelling and pronunciation no longer holds. Consequently, it is very difficult for foreign learners to learn standard Korean pronunciation. Despite these difficulties, systematic analysis of pronunciation errors in Korean words is believed to be possible because, unlike in other languages such as English, the relationship between Korean spelling and pronunciation can be described by a set of firm rules without exceptions. In this paper, we propose a visualization framework that shows the differences between standard pronunciations and erroneous ones as quantitative measures on the computer screen. Previous research shows only color representations and 3D graphics of speech properties, or an animated view of the changing shapes of the lips and mouth cavity; moreover, the features used in such analyses are only point data, such as the average over a speech range. In this study, we propose a method that can directly use time-series data instead of summarized or distorted data. This was realized with a deep-learning-based technique that combines a self-organizing map, a variational autoencoder, and a Markov model, and we achieved a substantial performance improvement compared with the method using point-based data.
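
The paper's actual pipeline combines a self-organizing map, a variational autoencoder, and a Markov model; that is not reproduced here. As a stand-in illustration of the underlying idea of comparing a standard and an erroneous pronunciation directly on time-series features, the sketch below aligns two feature sequences with dynamic time warping (DTW) and reports the alignment cost, which could then be drawn on screen as a quantitative difference.

```python
# Hedged sketch, not the paper's method: DTW over acoustic feature sequences
# (e.g., MFCC frames) as a simple way to quantify how far a learner's
# pronunciation drifts from a reference.
import numpy as np

def dtw_alignment(ref: np.ndarray, test: np.ndarray):
    """Align two (frames x features) sequences; return total cost and path."""
    n, m = len(ref), len(test)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(ref[i - 1] - test[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # Backtrack to recover the warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return cost[n, m], path[::-1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.normal(size=(40, 13))                       # stand-in for standard MFCCs
    learner = reference + rng.normal(scale=0.3, size=(40, 13))  # "erroneous" version
    total, path = dtw_alignment(reference, learner)
    print(f"DTW distance: {total:.2f} over {len(path)} aligned frames")
```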

A Korean menu-ordering sentence text-to-speech system using conformer-based FastSpeech2 (콘포머 기반 FastSpeech2를 이용한 한국어 음식 주문 문장 음성합성기)

  • Choi, Yerin;Jang, JaeHoo;Koo, Myoung-Wan
    • The Journal of the Acoustical Society of Korea / v.41 no.3 / pp.359-366 / 2022
  • In this paper, we present a Korean menu-ordering sentence text-to-speech (TTS) system using Conformer-based FastSpeech2. The Conformer is a convolution-augmented Transformer, originally proposed for speech recognition. By combining the two structures, the Conformer extracts local and global features more effectively. It comprises two half-step feed-forward modules at the front and the end, sandwiching a multi-head self-attention module and a convolution module. We introduce the Conformer into Korean TTS, since it is known to work well in Korean speech recognition. To compare a Transformer-based TTS model with a Conformer-based one, we train both FastSpeech2 and Conformer-based FastSpeech2. We collected a phoneme-balanced data set and used it to train our models. This corpus comprises not only general conversation but also menu-ordering conversation consisting mainly of loanwords, and it addresses the degradation of current Korean TTS models on loanwords. When synthesized speech was generated with Parallel WaveGAN, the Conformer-based FastSpeech2 achieved a superior MOS of 4.04. We confirm that model performance improves when the same structure is changed from Transformer to Conformer in Korean TTS.
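
The macaron-style block described in the abstract (two half-step feed-forward modules sandwiching self-attention and a convolution module) can be sketched as below. This is not the authors' FastSpeech2 code; it is a simplified PyTorch illustration with arbitrary layer sizes.

```python
# Hedged sketch of a Conformer block: 1/2 FFN -> self-attention -> conv -> 1/2 FFN.
import torch
import torch.nn as nn

class ConformerBlock(nn.Module):
    def __init__(self, dim=256, heads=4, ff_mult=4, kernel_size=31):
        super().__init__()
        def feed_forward():
            return nn.Sequential(
                nn.LayerNorm(dim),
                nn.Linear(dim, dim * ff_mult), nn.SiLU(),
                nn.Linear(dim * ff_mult, dim))
        self.ff1, self.ff2 = feed_forward(), feed_forward()
        self.attn_norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.conv_norm = nn.LayerNorm(dim)
        self.conv = nn.Sequential(          # pointwise -> depthwise -> pointwise
            nn.Conv1d(dim, 2 * dim, 1), nn.GLU(dim=1),
            nn.Conv1d(dim, dim, kernel_size, padding=kernel_size // 2, groups=dim),
            nn.BatchNorm1d(dim), nn.SiLU(),
            nn.Conv1d(dim, dim, 1))
        self.final_norm = nn.LayerNorm(dim)

    def forward(self, x):                   # x: (batch, time, dim)
        x = x + 0.5 * self.ff1(x)           # first half feed-forward
        a = self.attn_norm(x)
        x = x + self.attn(a, a, a, need_weights=False)[0]
        c = self.conv_norm(x).transpose(1, 2)   # Conv1d expects (batch, dim, time)
        x = x + self.conv(c).transpose(1, 2)
        x = x + 0.5 * self.ff2(x)           # second half feed-forward
        return self.final_norm(x)

if __name__ == "__main__":
    block = ConformerBlock()
    print(block(torch.randn(2, 100, 256)).shape)   # torch.Size([2, 100, 256])
```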

A Pre-Selection of Candidate Units Using Accentual Characteristic In a Unit Selection Based Japanese TTS System (일본어 악센트 특징을 이용한 합성단위 선택 기반 일본어 TTS의 후보 합성단위의 사전선택 방법)

  • Na, Deok-Su;Min, So-Yeon;Lee, Kwang-Hyoung;Lee, Jong-Seok;Bae, Myung-Jin
    • The Journal of the Acoustical Society of Korea / v.26 no.4 / pp.159-165 / 2007
  • In this paper, we propose a new pre-selection of candidate units that is suitable for a unit-selection-based Japanese TTS system. The general pre-selection method is performed by calculating a context-dependent cost within an intonation phrase (IP). Unlike other languages, however, Japanese has an accent represented by relative pitch height, and several words form a single accentual phrase (AP); the prosody of Japanese also changes in accentual-phrase units. By reflecting such prosodic changes in pre-selection, the quality of synthesized speech can be improved. Furthermore, calculating a context-dependent cost within an accentual phrase is faster than calculating it within an intonation phrase. The proposed method defines the AP, analyzes APs in context, and performs pre-selection using accentual-phrase matching, which calculates the connected context length (CCL) of the candidate phonemes to be synthesized in each accentual phrase. The baseline system used in the proposed method is VoiceText, a synthesizer from Voiceware. Evaluations were made on perceptual errors (intonation errors and concatenation-mismatch errors) and on synthesis time. Experimental results showed that the proposed method improved the quality of synthesized speech and shortened the synthesis time.
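
The exact CCL definition and pre-selection rules belong to the paper and are not reproduced here. The sketch below shows one plausible reading of the idea: for each candidate unit, count how many neighbouring phonemes inside the same accentual phrase match the target context, and keep only the best-matching candidates before the full unit-selection search. The data layout and function names are hypothetical.

```python
# Hedged sketch of accentual-phrase matching by "connected context length":
# candidates whose surrounding phonemes agree with the target phrase for the
# longest contiguous stretch are kept for the later unit-selection search.

def connected_context_length(target: list[str], pos: int,
                             cand: list[str], cand_pos: int) -> int:
    """Count contiguous matching phonemes around pos / cand_pos."""
    length = 0
    i, j = pos - 1, cand_pos - 1            # extend to the left
    while i >= 0 and j >= 0 and target[i] == cand[j]:
        length += 1
        i, j = i - 1, j - 1
    i, j = pos + 1, cand_pos + 1            # extend to the right
    while i < len(target) and j < len(cand) and target[i] == cand[j]:
        length += 1
        i, j = i + 1, j + 1
    return length

def preselect(target_ap: list[str], pos: int, candidates, keep: int = 5):
    """candidates: list of (unit_id, phrase_phonemes, index_of_unit_in_phrase)."""
    scored = [(connected_context_length(target_ap, pos, phones, idx), uid)
              for uid, phones, idx in candidates]
    scored.sort(reverse=True)               # longer matching context first
    return [uid for _, uid in scored[:keep]]

if __name__ == "__main__":
    target = ["k", "a", "b", "a", "n"]      # toy accentual phrase
    cands = [("u1", ["k", "a", "b", "a", "N"], 2),
             ("u2", ["s", "a", "b", "o", "n"], 2)]
    print(preselect(target, 2, cands))      # ['u1', 'u2']
```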

Speech Recognition Using Linear Discriminant Analysis and Common Vector Extraction (선형 판별분석과 공통벡터 추출방법을 이용한 음성인식)

  • 남명우;노승용
    • The Journal of the Acoustical Society of Korea / v.20 no.4 / pp.35-41 / 2001
  • This paper describes linear discriminant analysis and common vector extraction for speech recognition. A voice signal contains psychological and physiological properties of the speaker as well as dialect differences, acoustic environment effects, and phase differences. For these reasons, the same word spoken by different speakers can sound very different, which makes it difficult to extract properties common to a speech class (word or phoneme). Linear-algebra methods such as the Karhunen-Loève Transformation (KLT) are generally used to extract common properties from speech signals, but this paper uses the common vector extraction suggested by M. Bilginer et al. Their method extracts an optimized common vector from the speech signals used for training and attains 100% recognition accuracy on the training data used for common vector extraction. Despite these characteristics, the method has some drawbacks: the number of speech signals usable for training is limited, and the discriminant information among common vectors is not defined. This paper suggests an improved method that reduces the error rate by maximizing the discriminant information among common vectors, and a novel method for normalizing the size of the common vector is also added. The results show improved algorithm performance and recognition accuracy 2% higher than the conventional method. (A brief sketch of the common-vector idea follows this entry.)

  • PDF
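
As a rough illustration of the common-vector idea referenced in the abstract (in the spirit of the Common Vector Approach, not the paper's improved, discriminant-maximizing version), the sketch below computes, for each word class, the component of a training vector that lies outside the subspace spanned by within-class difference vectors, and classifies a new vector by its nearest common vector.

```python
# Hedged sketch of common vector extraction; feature dimensions and the
# synthetic data are illustrative only.
import numpy as np

def fit_class(samples: np.ndarray):
    """samples: (n, dim). Return (difference-subspace basis, common vector)."""
    ref = samples[0]
    diffs = samples[1:] - ref
    _, s, vt = np.linalg.svd(diffs, full_matrices=False)
    basis = vt[s > 1e-10]                      # orthonormal rows spanning the diffs
    common = ref - basis.T @ (basis @ ref)     # component orthogonal to the diffs
    return basis, common

def classify(x: np.ndarray, models: dict) -> str:
    """models: {label: (basis, common_vector)}; nearest common vector wins."""
    def err(item):
        basis, common = item[1]
        x_perp = x - basis.T @ (basis @ x)     # x with difference-subspace part removed
        return np.linalg.norm(x_perp - common)
    return min(models.items(), key=err)[0]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    dim, classes = 32, {}
    for label in ["yes", "no"]:
        center = rng.normal(size=dim)
        samples = center + 0.1 * rng.normal(size=(10, dim))
        classes[label] = fit_class(samples)
    test = classes["yes"][1] + 0.05 * rng.normal(size=dim)
    print(classify(test, classes))             # expected: "yes"
```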

STANDARDIZATION OF WORD/NONWORD READING TEST AND LETTER-SYMBOL DISCRIMINATION TASK FOR THE DIAGNOSIS OF DEVELOPMENTAL READING DISABILITY (발달성 읽기 장애 진단을 위한 단어/비단어 읽기 검사와 글자기호감별검사의 표준화 연구)

  • Cho, Soo-Churl;Lee, Jung-Bun;Chungh, Dong-Seon;Shin, Sung-Woong
    • Journal of the Korean Academy of Child and Adolescent Psychiatry / v.14 no.1 / pp.81-94 / 2003
  • Objectives: Developmental reading disorder is a condition that manifests as a significant developmental delay in reading ability or as persistent reading errors; about 3-7% of school-age children have this condition. The purpose of the present study was to validate the diagnostic value of the Word/Nonword Reading Test and the Letter-Symbol Discrimination Task, in order to overcome the limitations of the Basic Learning Skills Test. Methods: Sixty-three patients with reading disorder (mean age 10.48 years) and 77 sex- and age-matched normal children (mean age 10.33 years) were selected by clinical evaluation and DSM-IV criteria. Reading I and II of the Basic Learning Skills Test, the Word/Nonword Reading Test, and the Letter-Symbol Discrimination Task were administered to them. Word/Nonword Reading Test: one hundred common high-frequency words and one hundred meaningless nonwords were presented to the subjects within 1.2 and 2.4 seconds, respectively; from these results, automatized phonological processing ability and conscious letter-sound matching ability were estimated. Letter-Symbol Discrimination Task: mirror-image letters that reading-disordered patients tend to confuse were used. Reliability, concurrent validity, construct validity, and discriminant validity tests were conducted. Results: Word/Nonword Reading Test: the reliability (alpha) was 0.96, and the concurrent validity with the Basic Learning Skills Test was 0.94. The patients with developmental reading disorder differed significantly from normal children in Word/Nonword Reading Test performance, and discriminant analysis correctly classified 83.0% of the original cases. Letter-Symbol Discrimination Task: the reliability (alpha) was 0.86, and the concurrent validity with the Basic Learning Skills Test was 0.86. There were significant differences in scores between the patients and normal children. Factor analysis revealed that this test was composed of saccadic mirror-image processing, global accuracy, mirror-image processing deficit, static image processing, global vigilance deficit, and inattention-impulsivity factors. By discriminant analysis, 87.3% of the patients and normal children were correctly classified. Conclusion: The patients with developmental reading disorder had deficits in the automatized visual-lexical route, the morpheme-phoneme conversion mechanism, and visual information processing. These deficits were reliably and validly evaluated by the Word/Nonword Reading Test and the Letter-Symbol Discrimination Task.

  • PDF

A quantitative study on the minimal pair of Korean phonemes: Focused on syllable-initial consonants (한국어 음소 최소대립쌍의 계량언어학적 연구: 초성 자음을 중심으로)

  • Jung, Jieun
    • Phonetics and Speech Sciences / v.11 no.1 / pp.29-40 / 2019
  • This paper quantitatively investigates minimal pairs of Korean phonemes. To this end, I calculated the number of consonant minimal pairs in syllable-initial position as both raw and relative counts, and analyzed the part-of-speech relations of the two words in each minimal pair. "Urimalsaem" was chosen as the data source because minimal-pair analysis should be based on a dictionary, and it is the largest Korean dictionary. The results are summarized as follows. First, 153 types of minimal pairs were found among 337,135 examples. The ranking of phoneme pairs from highest to lowest was 'ㅅ-ㅈ, ㄱ-ㅅ, ㄱ-ㅈ, ㄱ-ㅂ, ㄱ-ㅎ, …, ㅆ-ㅋ, ㄸ-ㅋ, ㅉ-ㅋ, ㄹ-ㅃ, ㅃ-ㅋ'. The phonemes that played a major role in forming minimal pairs were /ㄱ, ㅅ, ㅈ, ㅂ, ㅊ/, in that order, which shows a high proportion of palatals. The correlation between the raw count and the relative count of minimal pairs was quite high (r = 0.937). Second, in 87.91% of the minimal pairs the two words shared the same part of speech (the same syntactic category). The most frequently observed type was the 'noun-noun' pair (70.25%), followed by the 'verb-verb' pair (14.77%). This indicates that the words of a minimal pair tend to belong to similar categories. The results of this study can serve as basic data on Korean phonemes for research in Korean linguistics, speech-language pathology, language education, language acquisition, speech synthesis, and artificial intelligence and machine learning.
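
For readers who want to reproduce the counting step on their own word list, the sketch below shows one way to count syllable-initial minimal pairs by decomposing Hangul syllables through their Unicode layout. It is not the author's code, and it runs on a toy list rather than on Urimalsaem.

```python
# Hedged sketch: two words form a syllable-initial minimal pair if they are
# identical except for the initial consonant of one syllable.
from collections import Counter
from itertools import combinations

CHOSEONG = list("ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ")   # 19 initial consonants

def decompose(word):
    """Return a list of (initial, rest) per Hangul syllable, or None if non-Hangul."""
    out = []
    for ch in word:
        code = ord(ch) - 0xAC00
        if not 0 <= code < 11172:
            return None
        out.append((CHOSEONG[code // 588], code % 588))   # 588 = 21 vowels * 28 finals
    return out

def minimal_pairs(words):
    """Count minimal pairs keyed by the pair of contrasting initial consonants."""
    buckets = {}
    for w in words:
        syls = decompose(w)
        if syls is None:
            continue
        for i, (initial, rest) in enumerate(syls):
            # Everything except the initial of syllable i must be identical.
            key = (i, tuple(s for j, s in enumerate(syls) if j != i), rest)
            buckets.setdefault(key, []).append((initial, w))
    counts = Counter()
    for members in buckets.values():
        for (c1, w1), (c2, w2) in combinations(members, 2):
            if c1 != c2:
                counts[tuple(sorted((c1, c2)))] += 1
    return counts

if __name__ == "__main__":
    print(minimal_pairs(["사다", "자다", "가다", "바다"]).most_common())
```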