• Title/Summary/Keyword: Korean Standard Speech Database

Search Results: 16

Pitch Contour Conversion Using Slanted Gaussian Normalization Based on Accentual Phrases

  • Lee, Ki-Young;Bae, Myung-Jin;Lee, Ho-Young;Kim, Jong-Kuk
    • Speech Sciences
    • /
    • v.11 no.1
    • /
    • pp.31-42
    • /
    • 2004
  • This paper presents methods that use Gaussian normalization to convert pitch contours based on prosodic phrases, together with experimental tests on a Korean database of 16 declarative sentences and the first sentences of the story of 'The Three Little Pigs'. We propose a new conversion method that applies Gaussian normalization to the pitch deviation obtained by subtracting partial declination lines from the pitch contour. By using a partial declination line for each accentual phrase, we avoid the problem that Gaussian normalization based on the mean and standard deviation of an entire intonational phrase tends to lose local variability and thus cannot transfer the individual characteristics of the pitch contour from a source speaker to a target speaker. The experimental results show that this slanted Gaussian normalization, applied to pitch contours with the declination lines of accentual phrases subtracted, can modify the pitch contour more accurately than other methods based on Gaussian normalization.
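The per-phrase normalization described above can be sketched as follows. This is a minimal illustration rather than the paper's implementation: the phrase boundaries, the least-squares declination fit, and the target residual statistics (`tgt_resid_mean`, `tgt_resid_std`) are all assumptions made for the sketch.

```python
import numpy as np

def declination_line(f0_segment):
    """Least-squares straight line fitted to one accentual phrase's F0."""
    t = np.arange(len(f0_segment))
    slope, intercept = np.polyfit(t, f0_segment, 1)
    return slope * t + intercept

def slanted_gaussian_normalize(src_f0, tgt_resid_mean, tgt_resid_std, phrase_bounds):
    """Convert a source pitch contour toward a target speaker.

    For each accentual phrase, subtract the phrase's declination line,
    Gaussian-normalize the residual to the target residual statistics,
    and add the line back, preserving local variability.
    """
    out = np.array(src_f0, dtype=float)
    for start, end in phrase_bounds:
        seg = out[start:end]
        line = declination_line(seg)
        resid = seg - line                          # local deviation around the line
        sigma = resid.std()
        if sigma > 0:
            z = (resid - resid.mean()) / sigma      # z-score within the phrase
        else:
            z = np.zeros_like(resid)
        out[start:end] = line + tgt_resid_mean + tgt_resid_std * z
    return out
```

Because each phrase is normalized against its own slanted line rather than a global phrase mean, local F0 movements survive the mapping, which is the point the abstract makes against whole-phrase Gaussian normalization.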


Speech Basis Matrix Using Noise Data and NMF-Based Speech Enhancement Scheme (잡음 데이터를 활용한 음성 기저 행렬과 NMF 기반 음성 향상 기법)

  • Kwon, Kisoo;Kim, Hyung Young;Kim, Nam Soo
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.40 no.4
    • /
    • pp.619-627
    • /
    • 2015
  • This paper presents a speech enhancement method using non-negative matrix factorization (NMF). In the training phase, a basis matrix for each source signal is obtained from an appropriate database, and these basis matrices are then used for source separation; the performance of speech enhancement therefore relies heavily on the basis matrices. The proposed method, in which the speech basis matrix is trained to produce a high reconstruction error for noise signals, performs better than standard NMF, in which each basis matrix is trained independently. For comparison, we also propose another method and evaluate one of the previous methods. In the experiments, performance was measured by perceptual evaluation of speech quality (PESQ) and signal-to-distortion ratio (SDR), and the proposed method outperformed the other methods.
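The basis-matrix separation scheme above can be illustrated with a small sketch, assuming magnitude spectrograms and basic multiplicative-update NMF; the function names and the Wiener-style mask are illustrative choices for the sketch, not the paper's exact formulation.

```python
import numpy as np

def nmf(V, rank, iters=200, seed=0):
    """Basic multiplicative-update NMF: V ~ W @ H with all factors non-negative."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + 1e-3
    H = rng.random((rank, m)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

def enhance(noisy_mag, W_speech, W_noise, iters=100):
    """Separate speech from a noisy magnitude spectrogram with fixed bases.

    Only the activations H are updated; the concatenated basis
    [W_speech | W_noise] stays fixed, and a Wiener-style mask built from
    the speech part recovers the speech magnitude.
    """
    W = np.hstack([W_speech, W_noise])
    rng = np.random.default_rng(1)
    H = rng.random((W.shape[1], noisy_mag.shape[1])) + 1e-3
    for _ in range(iters):
        H *= (W.T @ noisy_mag) / (W.T @ W @ H + 1e-9)
    ks = W_speech.shape[1]
    speech_hat = W_speech @ H[:ks]
    noise_hat = W_noise @ H[ks:]
    mask = speech_hat / (speech_hat + noise_hat + 1e-9)
    return mask * noisy_mag
```

In this setting the quality of `W_speech` and `W_noise` determines the mask, which matches the abstract's observation that enhancement performance relies heavily on the basis matrices.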

Perceptions of military personnel towards stuttering and persons who stutter: Using the Public Opinion Survey of Human Attributes-Stuttering (POSHA-S) (직업군인의 말더듬에 대한 인식 연구: Public Opinion Survey of Human Attributes-Stuttering(POSHA-S)를 이용하여)

  • Hwajung Cha;Jin Park
    • Phonetics and Speech Sciences
    • /
    • v.16 no.2
    • /
    • pp.71-81
    • /
    • 2024
  • This study investigated the perceptions of military personnel toward stuttering and persons who stutter (PWS) using the Public Opinion Survey of Human Attributes-Stuttering (POSHA-S). A total of 67 military personnel participated in the study (male: 58, female: 9; commissioned officers: 11, non-commissioned officers: 56; average age 31.9 years, standard deviation 8.7), and the collected data were analyzed according to the guidelines provided by St. Louis. To compare the perceptions of military personnel toward stuttering and PWS, percentile ranks (%iles) were computed relative to the global POSHA-S database, which was constructed from the responses of 20,941 participants from various cultural regions, countries, and groups (as of June 2023). Results showed that the overall stuttering score for military personnel was 7, corresponding to the 14th percentile in the POSHA-S database. In addition, the sub-score for 'self-reactions to PWS' was -11 (8th percentile in the POSHA-S database). These results reveal that military personnel hold more negative attitudes toward stuttering and PWS overall. The findings emphasize the importance of addressing the lack of accurate information among military personnel and suggest a need for educational programs aimed at improving the understanding of stuttering and PWS within the military.

A realization of pauses in utterance across speech style, gender, and generation (과제, 성별, 세대에 따른 휴지의 실현 양상 연구)

  • Yoo, Doyoung;Shin, Jiyoung
    • Phonetics and Speech Sciences
    • /
    • v.11 no.2
    • /
    • pp.33-44
    • /
    • 2019
  • This paper examined how the realization of pauses in utterances is affected by speech style, gender, and generation. For this purpose, we analyzed the frequency and duration of pauses, categorized into four types: pause with breath, pause without breath, utterance-medial pause, and utterance-final pause. Forty-eight subjects living in Seoul were chosen from the Korean Standard Speech Database. All subjects performed both reading and spontaneous speech tasks, which also allowed us to compare the two speech styles. The results showed that utterance-final pauses were longer than utterance-medial pauses, suggesting that the utterance-final pause signals the end of an utterance to the audience. Between tasks, spontaneous speech had longer and more frequent pauses, for cognitive reasons. With regard to gender, women produced shorter and less frequent pauses, while for male speakers the duration of pauses with breath was significantly longer. Finally, for generation, older speakers produced more frequent pauses. The results also showed several interaction effects: male speakers produced longer pauses, and this gender effect was more prominent at the utterance-final position.

The Error Pattern Analysis of the HMM-Based Automatic Phoneme Segmentation (HMM기반 자동음소분할기의 음소분할 오류 유형 분석)

  • Kim Min-Je;Lee Jung-Chul;Kim Jong-Jin
    • The Journal of the Acoustical Society of Korea
    • /
    • v.25 no.5
    • /
    • pp.213-221
    • /
    • 2006
  • Phone segmentation of the speech waveform is especially important for concatenative text-to-speech synthesis, which constructs synthesis units from segmented corpora, because the quality of the synthesized speech depends critically on the accuracy of the segmentation. Initially, phone segmentation was performed manually, but this requires enormous effort and causes long delays. HMM-based approaches adopted from automatic speech recognition are the most widely used for automatic segmentation in speech synthesis, providing a consistent and accurate phone labeling scheme. Although the HMM-based approach has been successful, it may place a phone boundary at a position different from the expected one. In this paper, we categorized adjacent phoneme pairs, analyzed the mismatches between hand-labeled transcriptions and HMM-based labels, and described the dominant error patterns that must be corrected for speech synthesis. For the experiment, the hand-labeled standard Korean speech DB from ETRI was used as a reference. A time difference larger than 20 ms between a hand-labeled phoneme boundary and an auto-aligned boundary was treated as an automatic segmentation error. The results for the female speaker showed that plosive-vowel, affricate-vowel, and vowel-liquid pairs had high accuracies of 99%, 99.5%, and 99%, respectively, whereas stop-nasal, stop-liquid, and nasal-liquid pairs had very low accuracies of 45%, 50%, and 55%. The results for the male speaker showed a similar tendency.
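The 20 ms error criterion can be sketched as a simple check, assuming the hand-labeled and auto-aligned boundaries are given in seconds and are already paired one-to-one (a hypothetical helper, not the paper's evaluation code):

```python
def segmentation_errors(hand_bounds, auto_bounds, tol=0.020):
    """Flag boundary pairs whose time difference exceeds the tolerance (20 ms).

    Returns a per-boundary error flag list and the overall accuracy,
    i.e. the fraction of boundaries within tolerance.
    """
    assert len(hand_bounds) == len(auto_bounds)
    errors = [abs(h - a) > tol for h, a in zip(hand_bounds, auto_bounds)]
    accuracy = 1.0 - sum(errors) / len(errors)
    return errors, accuracy
```

Computing this accuracy separately per adjacent-phoneme class (plosive-vowel, stop-nasal, and so on) yields the kind of per-pair breakdown the abstract reports.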

Singing Voice Synthesis Using HMM Based TTS and MusicXML (HMM 기반 TTS와 MusicXML을 이용한 노래음 합성)

  • Khan, Najeeb Ullah;Lee, Jung-Chul
    • Journal of the Korea Society of Computer and Information
    • /
    • v.20 no.5
    • /
    • pp.53-63
    • /
    • 2015
  • Singing voice synthesis is the generation of a song by computer, given its lyrics and musical notes. Hidden Markov models (HMMs) have proved to be the models of choice for text-to-speech synthesis, and they have also been used in singing voice synthesis research; however, training HMMs for singing voice synthesis requires a huge database. Moreover, commercially available singing voice synthesis systems use piano-roll music notation and need to adopt the easier-to-read standard music notation to be suitable for singing-learning applications. To overcome these problems, we use a speech database to train context-dependent HMMs for singing voice synthesis. Pitch and duration control methods are devised to modify the parameters of the HMMs trained on speech so that they can serve as synthesis units for the singing voice. This work describes a singing voice synthesis system that uses a MusicXML-based music score editor as the front-end interface for entering the notes and lyrics to be synthesized, and an HMM-based text-to-speech synthesis system as the back-end synthesizer. A perceptual test shows the feasibility of the proposed system.
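The pitch and duration control described above can be sketched roughly as follows, assuming the HMM back-end yields a frame-level F0 contour and the MusicXML front-end supplies a MIDI note number and a target frame count; the log-domain shift and linear resampling are illustrative assumptions, not the authors' exact method.

```python
import numpy as np

def midi_to_hz(note):
    """Convert a MIDI note number (as derivable from MusicXML pitch) to Hz."""
    return 440.0 * 2.0 ** ((note - 69) / 12.0)

def control_pitch_and_duration(f0_contour, note, target_frames):
    """Retarget an HMM-generated F0 contour to a musical note.

    Shifts the contour in the log-F0 domain (HMM TTS systems typically
    model log F0) so its mean matches the note frequency, then stretches
    it to the note's duration by linear interpolation.
    """
    logf0 = np.log(np.asarray(f0_contour, dtype=float))
    shifted = logf0 - logf0.mean() + np.log(midi_to_hz(note))
    # duration control: resample to the frame count implied by the note value
    x_old = np.linspace(0.0, 1.0, len(shifted))
    x_new = np.linspace(0.0, 1.0, target_frames)
    return np.exp(np.interp(x_new, x_old, shifted))
```

Keeping the shift and stretch in the log/normalized domains preserves the relative shape of the spoken contour, which is what lets speech-trained HMM parameters stand in for singing units.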