• Title/Summary/Keyword: Syllabic segmentation


Sentence design for speech recognition database

  • Zu Yiqing
    • Proceedings of the KSPS conference / 1996.10a / pp.472-472 / 1996
  • The material of a database for speech recognition should include as many phonetic phenomena as possible. At the same time, such material should be phonetically compact with low redundancy [1, 2]. The phonetic phenomena of continuous speech are a key problem in speech recognition. This paper describes the processing of a set of sentences collected from the 1993 and 1994 database of "People's Daily" (a Chinese newspaper), which consists of news, politics, economics, arts, sports, etc. Those sentences cover both phonetic phenomena and sentence patterns. In continuous speech, phonemes always appear in the form of allophones, which results in co-articulatory effects. The task of designing a speech database should therefore be concerned with both intra-syllabic and inter-syllabic allophone structures. In our experiments, there are 404 syllables, 415 inter-syllabic diphones, 3050 merged inter-syllabic triphones and 2161 merged final-initial structures in read speech. Statistics on the "People's Daily" database give an evaluation of all of the possible phonetic structures. In this sentence set, we first consider the phonetic balance among syllables, inter-syllabic diphones, inter-syllabic triphones and semi-syllables with their junctures. The syllabic balance ensures the intra-syllabic phenomena such as phonemes, initial/final and consonant/vowel; the rest describes the inter-syllabic juncture. The 1560 sentences cover 96% of syllables without tones (the absent syllables are only used in spoken language), 100% of inter-syllabic diphones, and 67% of inter-syllabic triphones (87% of those that appear in "People's Daily"). Roughly 17 kinds of sentence patterns appear in our sentence set. By taking the transitions between syllables into account, Chinese speech recognition systems have obtained significantly higher recognition rates [3, 4]. The sentence collection process is: [People's Daily database] -> [segmentation of sentences] -> [segmentation of word groups] -> [translation of the text into Pinyin] -> [statistics on phonetic phenomena & selection of useful paragraphs] -> [manual modification of the selected sentences] -> [phonetically compact sentence set]

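As an aside for readers implementing a similar pipeline: selecting a phonetically compact sentence set is essentially a set-cover problem over the phonetic units to be balanced. The sketch below is not taken from the paper; the function names, the greedy strategy, and the toy data are assumptions that merely illustrate how such a selection could be made over syllables and inter-syllabic diphones.

```python
# Hypothetical sketch of greedy selection of a phonetically compact sentence set.
# This is NOT the paper's algorithm; it only illustrates the idea of covering as
# many phonetic units (syllables, inter-syllabic diphones) as possible with as
# few sentences as possible.

def phonetic_units(pinyin_syllables):
    """Collect syllables and inter-syllabic diphones from a Pinyin syllable list."""
    units = set(pinyin_syllables)                              # intra-syllabic coverage
    units.update(zip(pinyin_syllables, pinyin_syllables[1:]))  # inter-syllabic diphones
    return units

def select_compact_set(sentences, max_sentences=1560):
    """Greedy set cover: repeatedly pick the sentence adding the most new units."""
    covered, selected = set(), []
    remaining = {s: phonetic_units(syls) for s, syls in sentences.items()}
    while remaining and len(selected) < max_sentences:
        best = max(remaining, key=lambda s: len(remaining[s] - covered))
        gain = remaining[best] - covered
        if not gain:            # nothing new left to cover
            break
        covered |= gain
        selected.append(best)
        del remaining[best]
    return selected, covered

# Toy usage: sentences maps raw text to its Pinyin syllable sequence.
sentences = {
    "你好世界": ["ni3", "hao3", "shi4", "jie4"],
    "世界很大": ["shi4", "jie4", "hen3", "da4"],
}
picked, units = select_compact_set(sentences)
print(picked, len(units))
```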

A Syllabic Segmentation Method for the Korean Continuous Speech (우리말 연속음성의 음절 분할법)

  • 한학용;고시영;허강인
    • The Journal of the Acoustical Society of Korea / v.20 no.3 / pp.70-75 / 2001
  • This paper proposes a syllabic segmentation method for Korean continuous speech. The method consists of three major steps: (1) labeling the vowel, consonant and silence units and forming a token sequence of the speech data using time-domain segmental parameters (pitch, energy, ZCR and PVR); (2) scanning the token sequence against the structure of the Korean syllable using a parser designed as a finite state automaton; and (3) re-segmenting the syllable parts which contain two or more syllables using pseudo-syllable nucleus information. Experimental results evaluating the proposed method on continuous words and on sentence units are 73.5% and 85.9%, respectively.

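The parser step of such a method can be pictured as a small finite state automaton over the vowel/consonant/silence token sequence. The following sketch is a hypothetical illustration, not the authors' parser; the token labels 'C', 'V', 'S' and the (C)V(C)-style grouping rule are assumptions.

```python
# Minimal sketch (not the paper's parser) of scanning a consonant/vowel/silence
# token sequence into (C)V(C)-shaped syllable groups with a small finite state
# automaton. 'C', 'V', 'S' stand in for the paper's consonant, vowel and silence units.

def syllabify(tokens):
    syllables, current, state = [], [], "START"
    for i, tok in enumerate(tokens):
        nxt = tokens[i + 1] if i + 1 < len(tokens) else None
        if tok == "S":                       # silence closes any open syllable
            if current:
                syllables.append(current)
            current, state = [], "START"
        elif state == "START":
            current.append(tok)
            state = "NUCLEUS" if tok == "V" else "ONSET"
        elif state == "ONSET":               # waiting for the vowel nucleus
            current.append(tok)
            if tok == "V":
                state = "NUCLEUS"
        elif state == "NUCLEUS":
            if tok == "V":                   # a new vowel starts a new syllable
                syllables.append(current)
                current, state = [tok], "NUCLEUS"
            elif nxt == "V":                 # consonant belongs to the next onset
                syllables.append(current)
                current, state = [tok], "ONSET"
            else:                            # consonant taken as coda
                current.append(tok)
    if current:
        syllables.append(current)
    return syllables

print(syllabify(list("CVCCVSCVC")))   # -> [['C','V','C'], ['C','V'], ['C','V','C']]
```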

Utilization of Syllabic Nuclei Location in Korean Speech Segmentation into Phonemic Units (음절핵의 위치정보를 이용한 우리말의 음소경계 추출)

  • 신옥근
    • The Journal of the Acoustical Society of Korea / v.19 no.5 / pp.13-19 / 2000
  • The blind segmentation method, which segments input speech data into recognition units without any prior knowledge, plays an important role in continuous speech recognition systems and corpus generation. As no prior knowledge is required, this method is rather simple to implement, but in general it suffers from poor performance compared to knowledge-based segmentation methods. In this paper, we introduce a method to improve the performance of blind segmentation of Korean continuous speech by postprocessing the segment boundaries obtained from the blind segmentation. In the preprocessing stage, the candidate boundaries are extracted by a clustering technique based on the GLR (generalized likelihood ratio) distance measure. In the postprocessing stage, the final phoneme boundaries are selected from the candidates by utilizing simple a priori knowledge of the syllabic structure of Korean, i.e., the maximum number of phonemes between any two consecutive nuclei is limited. The experimental result was rather promising: the proposed method yields a 25% reduction in insertion error rate compared to that of the blind segmentation alone.

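The syllabic-structure constraint used in the postprocessing stage can be illustrated in a few lines. The sketch below is hypothetical: the confidence scores, the limit of four phonemes between adjacent nuclei, and the data layout are all assumptions, but it shows the kind of rule the abstract describes.

```python
# Hedged sketch of a nuclei-based postprocessing rule: between two consecutive
# syllabic nuclei, keep only enough candidate boundaries that the number of
# phoneme segments stays under an assumed maximum. MAX_PHONEMES and the
# confidence-based ranking are illustrative choices, not values from the paper.

MAX_PHONEMES = 4   # assumed upper bound on phonemes between adjacent nuclei

def postprocess(candidates, nuclei):
    """candidates: list of (time, confidence); nuclei: sorted nucleus times."""
    final = []
    for left, right in zip(nuclei, nuclei[1:]):
        inside = [c for c in candidates if left < c[0] < right]
        # N phonemes need at most N - 1 internal boundaries
        keep = sorted(inside, key=lambda c: c[1], reverse=True)[:MAX_PHONEMES - 1]
        final.extend(t for t, _ in sorted(keep))
    return final

# Toy usage: times in seconds, confidences in arbitrary units.
cands = [(0.12, 0.9), (0.15, 0.2), (0.18, 0.4), (0.22, 0.8), (0.35, 0.7)]
print(postprocess(cands, nuclei=[0.10, 0.30, 0.45]))   # -> [0.12, 0.18, 0.22, 0.35]
```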

Segmentation of continuous Korean Speech Based on Boundaries of Voiced and Unvoiced Sounds (유성음과 무성음의 경계를 이용한 연속 음성의 세그먼테이션)

  • Yu, Gang-Ju;Sin, Uk-Geun
    • The Transactions of the Korea Information Processing Society / v.7 no.7 / pp.2246-2253 / 2000
  • In this paper, we show that one can enhance the performance of blind segmentation of phoneme boundaries by adopting knowledge of Korean syllabic structure and of the regions of voiced/unvoiced sounds. The proposed method consists of three processes: extracting candidate phoneme boundaries, detecting the boundaries between voiced and unvoiced sounds, and selecting the final phoneme boundaries. The candidate phoneme boundaries are extracted by a clustering method based on the similarity between two adjacent clusters; the similarity measure employed in this process is the ratio of the probability densities of the adjacent clusters. To detect the boundaries of voiced/unvoiced sounds, we first compute the power density spectrum of the speech signal in the 0∼400 Hz frequency band. The points where the variation of this power density spectrum is greater than a threshold are then chosen as the boundaries of voiced/unvoiced sounds. The final phoneme boundaries consist of all the candidate phoneme boundaries in voiced regions and a limited number of candidate phoneme boundaries in unvoiced regions. The experimental result showed about a 40% decrease in insertion rate compared to the blind segmentation method we adopted.

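The voiced/unvoiced boundary detection step lends itself to a short illustration. The sketch below is not the authors' implementation; the frame length, hop size, log-power variation measure and adaptive threshold are assumed choices that only mimic the described idea of thresholding the variation of the 0∼400 Hz power density spectrum.

```python
# Illustrative sketch of detecting voiced/unvoiced boundaries from the
# frame-by-frame power in the 0-400 Hz band: frames where the low-band power
# changes abruptly are marked as candidate V/UV boundaries.

import numpy as np

def low_band_power(frames, sample_rate, band_hz=400.0):
    """Power of each frame restricted to the 0..band_hz range of its spectrum."""
    spectrum = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    freqs = np.fft.rfftfreq(frames.shape[1], d=1.0 / sample_rate)
    return spectrum[:, freqs <= band_hz].sum(axis=1)

def vuv_boundaries(signal, sample_rate, frame_len=400, hop=160, k=2.0):
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] for i in range(n_frames)])
    power = low_band_power(frames, sample_rate)
    delta = np.abs(np.diff(np.log(power + 1e-10)))   # variation between frames
    threshold = k * delta.mean()                     # assumed adaptive threshold
    return np.where(delta > threshold)[0] * hop / sample_rate  # boundary times (s)

# Toy usage: 0.5 s signal, an unvoiced-like noise burst followed by a voiced-like 150 Hz tone.
sr = 16000
t = np.arange(sr // 4) / sr
signal = np.concatenate([0.1 * np.random.randn(sr // 4), np.sin(2 * np.pi * 150 * t)])
print(vuv_boundaries(signal, sr))
```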

Study on the Neural Network for Handwritten Hangul Syllabic Character Recognition (수정된 Neocognitron을 사용한 필기체 한글인식)

  • 김은진;백종현
    • Korean Journal of Cognitive Science / v.3 no.1 / pp.61-78 / 1991
  • This paper describes the application of a modified Neocognitron model with a backward path to the recognition of Hangul (Korean) syllabic characters. In his original report, Fukushima demonstrated that the Neocognitron can recognize handwritten numerical characters of size 19×19. This version accepts 61×61 images of handwritten Hangul syllabic characters, or parts thereof, entered with a mouse or a scanner. It consists of an input layer and 3 pairs of Us and Uc layers. The last Uc layer of this version, the recognition layer, consists of 24 planes of 5×5 cells, which indicate, respectively, the identity of the grapheme receiving attention at a given time and its relative position in the input layer. It has been trained on 10 simple vowel graphemes and 14 simple consonant graphemes and their spatial features; some patterns which are not easily trained have been trained more extensively. The trained network, which can classify individual graphemes under possible deformation, noise, size variance, transformation or rotation, was then used to recognize Korean syllabic characters, using its selective attention mechanism for the image segmentation task within a syllabic character. In initial sample tests, our model could correctly recognize up to 79% of the various test patterns of handwritten Korean syllabic characters. The results of this study show the Neocognitron to be a powerful model for recognizing deformed handwritten characters from a large character set by segmenting its input images into recognizable parts. The same approach may be applied to the recognition of Chinese characters, which are much more complex in both their structure and their graphemes, but processing time appears to be the bottleneck before it can be implemented; special hardware such as a neural chip appears to be an essential prerequisite for practical use of the model. Further work is required to enable the model to recognize Korean syllabic characters consisting of complex vowels and complex consonants, where correct recognition of the neighboring area between two simple graphemes becomes more critical.
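
The alternating simple-cell/complex-cell (Us/Uc) structure that this model builds on can be sketched very compactly. The fragment below is a hypothetical NumPy illustration, not the authors' network: it shows a single S/C pair in the forward direction, with an assumed correlation-plus-threshold S-cell and a max-pooling C-cell, and omits training, the three-stage hierarchy, the recognition layer and the backward selective-attention path.

```python
# Very small sketch of one Us/Uc (simple/complex) cell pair of a Neocognitron-style
# network: S-cells respond to a local feature template, C-cells pool over a
# neighborhood so the response tolerates small shifts.

import numpy as np

def s_layer(image, template, threshold=0.5):
    """S-cells: normalized correlation of each patch with a feature template."""
    h, w = template.shape
    out = np.zeros((image.shape[0] - h + 1, image.shape[1] - w + 1))
    tnorm = np.linalg.norm(template) + 1e-8
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + h, j:j + w]
            resp = float((patch * template).sum()) / (tnorm * (np.linalg.norm(patch) + 1e-8))
            out[i, j] = resp if resp > threshold else 0.0   # cutoff below threshold
    return out

def c_layer(s_map, pool=2):
    """C-cells: max over small neighborhoods gives tolerance to local shifts."""
    h, w = s_map.shape[0] // pool, s_map.shape[1] // pool
    return s_map[:h * pool, :w * pool].reshape(h, pool, w, pool).max(axis=(1, 3))

# Toy usage: detect a vertical-stroke feature in a 6x6 binary image.
img = np.zeros((6, 6)); img[:, 2] = 1.0
vertical = np.array([[0., 1., 0.], [0., 1., 0.], [0., 1., 0.]])
print(c_layer(s_layer(img, vertical)))
```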

Phonological Awareness in Korean-English Bilingual Children (한국어-영어 이중언어사용아동의 음운인식능력)

  • Park, Min-Young;Koh, Do-Heung;Lee, Yoon-Kyoung
    • Speech Sciences / v.13 no.2 / pp.35-46 / 2006
  • This study investigated whether there are differences between Korean-English bilingual and Korean monolingual children in phonological awareness skills. Participants were 11 Korean-English bilingual children and 12 Korean monolingual children, aged between 6 and 7 years. The results were as follows. First, the bilingual children significantly outperformed the monolingual children on overall phonological awareness tasks, scoring significantly higher on all three task types (segmentation, deletion, and blending). Second, there was a significant difference between the groups with respect to the phonological units of the tasks: the bilinguals performed significantly better than the monolinguals on phonemic unit tasks, but the two groups did not differ significantly on syllabic unit tasks, and there was an interaction effect between unit size (syllables and phonemes) and group (bilinguals and monolinguals). Third, for both bilingual and monolingual children, overall phonological awareness skills correlated with word recognition skills.


Acoustic Characteristics of 'Short Rushes of Speech' using Alternate Motion Rates in Patients with Parkinson's Disease (파킨슨병 환자의 교대운동속도 과제에서 관찰된 '말 뭉침'의 음향학적 특성)

  • Kim, Sun Woo;Yoon, Ji Hye;Lee, Seung Jin
    • Phonetics and Speech Sciences / v.7 no.2 / pp.55-62 / 2015
  • It is widely accepted that Parkinson's disease (PD) is the most common cause of hypokinetic dysarthria, and its characteristic 'short rushes of speech' become more evident as the motor disorder becomes more severe. Speech alternate motion rates (AMRs) are particularly useful for observing not only rate abnormalities but also deviant speech. However, relatively little is known about the characteristics of 'short rushes of speech' in the AMRs of PD beyond their perceptual characteristics. The purpose of this study was to examine which acoustic features of 'short rushes of speech' in AMR tasks are a robust indicator of Parkinsonian speech. Numbers of syllabic repetitions (/pə/, /tə/, /kə/) in AMR tasks were analyzed acoustically by inspecting spectrograms from the Computerized Speech Lab in 9 patients with PD. Acoustically, we found three characteristics of 'short rushes of speech': 1) vocalized consonants without closure duration (VC), 76.3%; 2) no consonant segmentation (NC), 18.6%; 3) no vowel formant frequency (NV), 5.1%. Based on these results, 'short rushes of speech' may reflect a failure to reach and maintain phonatory targets. In order to best achieve therapeutic goals and to make treatment most efficacious, it is important to incorporate training methods that address both phonation and articulation.
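
For readers who want to automate part of such an analysis, a rough idea of how syllabic repetitions in an AMR recording can be counted from the energy envelope is sketched below. This is purely illustrative and not the study's procedure; the window size and threshold factor are assumed values.

```python
# Rough sketch of counting syllabic repetitions in an AMR recording from its
# energy envelope: each run of frames above an energy threshold is taken as one
# /pə/, /tə/ or /kə/ repetition.

import numpy as np

def count_amr_syllables(signal, sample_rate, win_ms=20, k=0.3):
    win = int(sample_rate * win_ms / 1000)
    n = len(signal) // win
    energy = (signal[:n * win].reshape(n, win) ** 2).mean(axis=1)
    above = energy > k * energy.max()                 # frames loud enough to be a syllable
    # count rising edges: transitions from quiet to loud
    repetitions = int(np.sum(above[1:] & ~above[:-1])) + int(above[0])
    return repetitions, repetitions / (len(signal) / sample_rate)  # count, syllables/s

# Toy usage: five 100 ms bursts separated by 100 ms of silence at 16 kHz.
sr = 16000
burst = np.sin(2 * np.pi * 200 * np.arange(int(0.1 * sr)) / sr)
silence = np.zeros(int(0.1 * sr))
speech = np.concatenate([np.concatenate([burst, silence]) for _ in range(5)])
print(count_amr_syllables(speech, sr))   # expected: (5, 5.0 syllables per second)
```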

Automatic Music Transcription System Using SIDE (SIDE를 이용한 자동 음악 채보 시스템)

  • Hyoung, A-Young;Lee, Joon-Whoan
    • The KIPS Transactions:PartB / v.16B no.2 / pp.141-150 / 2009
  • This paper proposes a system that can automatically transcribe a sung melody into musical notes. First, the system uses the Stabilized Inverse Diffusion Equation (SIDE) to divide the song into a series of syllabic parts based on pitch detection. Given this segmentation, the method recognizes the note length of each fragment through clustering based on a genetic algorithm. Moreover, this study introduces a concept called 'Relative Interval' to recognize intervals relative to the singer's own pitch, and it adopts a measure extraction algorithm that uses pause information to achieve higher transcription precision. In experiments using 16 nursery songs, the measure recognition rate is 91.5% and the DMOS score reaches 3.82, demonstrating the effectiveness of the system.
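
The 'Relative Interval' idea can be made concrete with a small example. The sketch below reflects an assumption about how such intervals are commonly computed (semitones relative to the singer's starting pitch); it is not taken from the paper.

```python
# Sketch of a 'relative interval': each detected pitch is expressed in semitones
# relative to the singer's own reference pitch (here, the first note), making
# the transcription independent of the key and of the individual singer.

import math

def relative_intervals(pitches_hz):
    """Map detected F0 values to semitone offsets from the first sung note."""
    reference = pitches_hz[0]
    return [round(12 * math.log2(f / reference)) for f in pitches_hz]

# Toy usage: a singer starting on ~220 Hz singing do-re-mi-do.
print(relative_intervals([220.0, 247.0, 277.5, 221.0]))   # -> [0, 2, 4, 0]
```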