• Title/Summary/Keyword: prosody evaluation


A Study on the Perceptual Aspects of an Emotional Voice Using Prosody Transplantation (운율이식을 통해 나타난 감정인지 양상 연구)

  • Yi, So-Pae
    • MALSORI, no.62, pp.19-32, 2007
  • This study investigated the perception of emotional voices by transplanting some or all of the prosodic features, i.e. pitch, duration, and intensity, of utterances produced with emotional voices onto those produced with normal voices, and vice versa. A listening evaluation by 24 raters revealed that prosody had a greater effect than segmental and vocal-quality features on the perception of emotion. The relative influence of prosody versus segments and vocal quality varied with the type of emotion: for fear, prosodic elements had far greater influence than segmental and vocal-quality elements, whereas for happy voices segmental and vocal elements had as much effect as prosody. The prosodic features also contributed unequally to the perception of emotion, in the descending order of pitch, duration, and intensity. Finally, emotion was perceived more reliably in long utterances than in short ones.

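The transplantation described above can be sketched as a simple contour transfer: time-normalize the pitch contour of an emotional utterance and impose it on a neutral utterance with a different number of frames. The following is a minimal stdlib-only sketch with hypothetical contours and frame counts; actual transplantation resynthesizes audio with PSOLA-style tools such as Praat.

```python
def transplant_contour(source, target_len):
    """Time-normalize `source` (a per-frame pitch contour in Hz)
    to `target_len` frames by linear interpolation, so it can be
    imposed on a differently timed target utterance."""
    if target_len == 1:
        return [source[0]]
    out = []
    step = (len(source) - 1) / (target_len - 1)
    for i in range(target_len):
        pos = i * step
        lo = int(pos)
        hi = min(lo + 1, len(source) - 1)
        frac = pos - lo
        out.append(source[lo] * (1 - frac) + source[hi] * frac)
    return out

# Hypothetical contours: an emotional utterance's F0 transplanted
# onto a neutral utterance that is 6 frames long instead of 4.
angry_f0 = [220.0, 260.0, 240.0, 180.0]
transplanted = transplant_contour(angry_f0, 6)
print([round(f, 1) for f in transplanted])
```

The same interpolation can run in the other direction, transferring the neutral contour onto the emotional utterance, which is how the study isolates prosody from segmental and vocal-quality cues.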

A Study of Decision Tree Modeling for Predicting the Prosody of Corpus-based Korean Text-To-Speech Synthesis (한국어 음성합성기의 운율 예측을 위한 의사결정트리 모델에 관한 연구)

  • Kang, Sun-Mee; Kwon, Oh-Il
    • Speech Sciences, v.14 no.2, pp.91-103, 2007
  • The purpose of this paper is to develop a model for predicting the prosody of corpus-based Korean text-to-speech synthesis using the CART and SKES algorithms. Because CART tends to prefer predictor variables with many instances, a partition method based on the F-test was applied to CART after the number of instances had been reduced by grouping phonemes. The quality of the synthesized speech was then evaluated after applying the SKES algorithm to the same data size. For the evaluation, MOS tests were performed with 30 men and women in their twenties. The results showed that applying the SKES algorithm made the synthesized speech clearer and more natural.

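CART grows a regression tree by repeatedly choosing the split that most reduces the variance of the target, here a prosodic value such as phone duration. A stdlib-only sketch of that splitting criterion on hypothetical (phone-class, duration-in-ms) data; a full system like the one above would grow a complete tree over many phonetic and contextual features:

```python
def variance(ys):
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys) / len(ys)

def best_split(xs, ys):
    """Return (threshold, gain) for the split on a single numeric
    feature giving the largest variance reduction (CART's
    regression criterion)."""
    base = variance(ys)
    best = (None, 0.0)
    for t in sorted(set(xs))[:-1]:
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        weighted = (len(left) * variance(left)
                    + len(right) * variance(right)) / len(ys)
        if base - weighted > best[1]:
            best = (t, base - weighted)
    return best

# Hypothetical data: vowels (class 1) are longer than stops (class 0),
# so the best split separates the two phone classes.
phone_class = [0, 0, 0, 0, 1, 1, 1, 1]
duration_ms = [40, 45, 50, 42, 110, 120, 105, 115]
threshold, gain = best_split(phone_class, duration_ms)
print(threshold, round(gain, 2))
```

The F-test partitioning mentioned in the abstract addresses exactly the bias this greedy criterion has toward features with many distinct values.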

A study of the prosodic patterns of autism and normal children in the imitating declarative and interrogative sentences (따라말하기 과제를 통한 자폐범주성 장애 아동과 일반 아동의 평서문과 의문문의 음향학적 특성 비교)

  • Lee, Jinhyung; Seong, Cheoljae
    • Phonetics and Speech Sciences, v.12 no.2, pp.39-49, 2020
  • The prosody of children with autism spectrum disorders (ASD) has several abnormal features, including monotonous speech. The purpose of this study was to compare acoustic features between an ASD group and a typically developing (TD) group, and within the ASD group. The study also examined listeners' perception of the lengthening effect of an increasing number of syllables. Fifty participants (20 with ASD and 30 TD) were asked to imitate a total of 28 sentences. In the auditory-perceptual evaluation, seven listeners judged the sentence type of 115 sentences. Pitch, intensity, speech rate, and pitch slope were analyzed for significant differences. The ASD group showed higher pitch and intensity and a lower overall speaking rate than the TD group, and there were significant differences in the s2 slope of interrogative sentences. Based on the auditory-perceptual evaluation, only 4.3% of the interrogative sentences produced by participants with ASD were perceived as declarative. The cause of this abnormal prosody has not been clearly identified, but pragmatic ability and other characteristics of autism are related to ASD prosody. This study identified prosodic patterns in ASD and suggests the need to develop treatments to improve prosody.
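Group comparisons like the one above typically reduce to an independent-samples t-test on each acoustic measure. A stdlib-only sketch of Welch's t statistic on hypothetical per-child mean-F0 values (a real analysis would also compute degrees of freedom and p-values from the t distribution):

```python
from statistics import mean, variance  # sample variance (n - 1 denominator)

def welch_t(a, b):
    """Welch's t statistic for two independent samples with
    possibly unequal variances (no p-value computed here)."""
    va, vb = variance(a), variance(b)
    return (mean(a) - mean(b)) / ((va / len(a) + vb / len(b)) ** 0.5)

# Hypothetical mean F0 (Hz) per child: the ASD group higher than TD.
asd_f0 = [265.0, 280.0, 272.0, 290.0, 268.0]
td_f0 = [240.0, 252.0, 246.0, 238.0, 250.0]
print(round(welch_t(asd_f0, td_f0), 2))  # positive: ASD mean F0 is higher
```

The same statistic would be computed per measure (pitch, intensity, speech rate, pitch slope), which is why such studies usually pair it with a correction for multiple comparisons.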

Comparison of prosodic characteristics by question type in left- and right-hemisphere-injured stroke patients (좌반구 손상과 우반구 손상 뇌졸중 환자의 의문문 유형에 따른 운율 특성 비교)

  • Yu, Youngmi; Seong, Cheoljae
    • Phonetics and Speech Sciences, v.13 no.3, pp.1-13, 2021
  • This study examined the characteristics of linguistic prosody in terms of cerebral lateralization in three groups: 9 healthy speakers and 14 speakers with a history of stroke (7 with left hemisphere damage (LHD) and 7 with right hemisphere damage (RHD)). Specifically, prosodic characteristics related to speech rate, duration, pitch, and intensity were examined in three types of interrogative sentences (wh-questions, yes-no questions, and alternative questions), together with an auditory-perceptual evaluation. The statistically significant variables indicated impaired production of linguistic prosody in the speakers with LHD, and the deficits were greater for wh-questions than for yes-no and alternative questions. This trend was particularly noticeable in variables related to pitch and speech rate. The result suggests that when Korean speakers process linguistic prosody, such as the lexico-semantic and syntactic information in interrogative sentences, the left hemisphere is superior to the right.

Characteristics of Right Hemispheric Damaged Patients in Korean Focused Prosodic Sentences (한국어 초점 발화 시 우반구 손상인의 초점 운율 특성)

  • Lee, Myung Soon; Park, Hyun
    • Therapeutic Science for Rehabilitation, v.8 no.3, pp.69-81, 2019
  • Objective: The purpose of this study was to examine the prosodic characteristics of ambiguous sentences in patients with right hemisphere damage (RHD). Methods: Sentences in which each word in turn received prosodic focus were used. Acoustic parameters such as intensity, F0, and duration were measured to characterize the prosody of patients with right-hemisphere lesions and of normal controls. All speech samples were recorded using Praat 4.3.14, and the data were analyzed with independent-samples t-tests in SPSS 18.0. Results: First, the intensity of the first syllable of the focus word differed between the two groups in several sentences. Second, F0 differed between the two groups in all sentences. Third, duration differed between the groups in several sentences. Prosody thus varied, and the values of the acoustic parameters differed, according to the focus of the utterance; the RHD group showed restricted prosody. Conclusions: Intensity, duration, and F0 are all used as prosodic elements for emphasizing structural and pragmatic meaning, and under focus, intensity and duration were related to F0. F0 in particular differed significantly between the RHD group and normal speakers, so it can serve as a discriminative factor in the prosody evaluation of patients with right hemisphere damage; stronger evidence needs to be accumulated through future research.

Variational autoencoder for prosody-based speaker recognition

  • Starlet Ben Alex; Leena Mary
    • ETRI Journal, v.45 no.4, pp.678-689, 2023
  • This paper describes a novel end-to-end deep generative model-based speaker recognition system using prosodic features. The usefulness of variational autoencoders (VAE) in learning the speaker-specific prosody representations for the speaker recognition task is examined herein for the first time. The speech signal is first automatically segmented into syllable-like units using vowel onset points (VOP) and energy valleys. Prosodic features, such as the dynamics of duration, energy, and fundamental frequency (F0), are then extracted at the syllable level and used to train/adapt a speaker-dependent VAE from a universal VAE. The initial comparative studies on VAEs and traditional autoencoders (AE) suggest that the former can efficiently learn speaker representations. Investigations on the impact of gender information in speaker recognition also point out that gender-dependent impostor banks lead to higher accuracies. Finally, the evaluation on the NIST SRE 2010 dataset demonstrates the usefulness of the proposed approach for speaker recognition.
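The syllable-like segmentation step described above can be illustrated with a toy energy contour: energy valleys (local minima) are taken as unit boundaries. A stdlib-only sketch with a hypothetical normalized energy contour and threshold; the paper's actual front end also uses vowel onset points and operates on real short-time energy:

```python
def energy_valleys(energy, floor=0.1):
    """Return frame indices that are local minima of the energy
    contour and lie below `floor`, used as syllable boundaries."""
    valleys = []
    for i in range(1, len(energy) - 1):
        if energy[i] < energy[i - 1] and energy[i] < energy[i + 1] and energy[i] < floor:
            valleys.append(i)
    return valleys

def segments(energy, floor=0.1):
    """Split the contour into syllable-like (start, end) frame spans."""
    bounds = [0] + energy_valleys(energy, floor) + [len(energy) - 1]
    return list(zip(bounds[:-1], bounds[1:]))

# Hypothetical contour with two clear valleys at frames 3 and 7,
# yielding three syllable-like units.
e = [0.2, 0.8, 0.9, 0.05, 0.7, 0.95, 0.6, 0.04, 0.5, 0.8, 0.3]
print(energy_valleys(e))
print(segments(e))
```

Duration, energy, and F0 dynamics would then be extracted per span before being fed to the speaker-dependent VAE.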

Evaluation of Teaching English Intonation through Native Utterances with Exaggerated Intonation (억양이 과장된 원어민 발화를 통한 영어 억양 교육과 평가)

  • Yoon, Kyu-Chul
    • Phonetics and Speech Sciences, v.3 no.1, pp.35-43, 2011
  • The purpose of this paper is to evaluate the viability of employing the intonation exaggeration technique proposed in [4] in teaching English prosody to university students. Fifty-six female university students, twenty-two in a control group and thirty-four in an experimental group, participated in a teaching experiment as part of their regular coursework over a five-and-a-half-week period. For the study material of the experimental group, a set of utterances was synthesized with exaggerated intonation contours, whereas the control group was given the same set without any intonation modification. Recordings were made both before and after the teaching experiment, and one sentence set was chosen for analysis. The parameters analyzed were the pitch range, the words containing the highest and lowest pitch points, and the 3-dimensional comparison of the three prosodic features [2]. An AXB test and a subjective rating test were also performed, along with a qualitative screening of the individual intonation contours. The results showed that the experimental group performed slightly better, in that their intonation contours were more similar to those of the model native speaker's utterances. This suggests that the intonation exaggeration technique can be employed in teaching English prosody.

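Intonation exaggeration of the kind used for the study material can be sketched as scaling each F0 value's deviation from the utterance mean by a factor greater than 1, widening the pitch range while preserving the contour shape and overall register. A stdlib-only sketch with a hypothetical contour and factor; the cited technique [4] operates on real resynthesized utterances:

```python
def exaggerate(f0, factor=1.5):
    """Scale deviations from the mean F0 by `factor`, expanding
    the pitch range without shifting the overall register."""
    m = sum(f0) / len(f0)
    return [m + factor * (f - m) for f in f0]

contour = [180.0, 220.0, 200.0, 160.0]  # hypothetical F0 in Hz, mean 190
wide = exaggerate(contour, 1.5)
print(wide)  # same mean, range widened from 60 Hz to 90 Hz
```

Because the mean is unchanged, learners hear the same overall voice register with the peaks and dips of the native contour made more salient.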

Perceptual Evaluation of Duration Models in Spoken Korean

  • Chung, Hyun-Song
    • Speech Sciences, v.9 no.1, pp.207-215, 2002
  • Perceptual evaluation of duration models of spoken Korean was carried out based on the Classification and Regression Tree (CART) model for text-to-speech conversion. A reference set of durations was produced by a commercial text-to-speech synthesis system for comparison. The duration model which was built in the previous research (Chung & Huckvale, 2001) was applied to a Korean language speech synthesis diphone database, 'Hanmal (HN 1.0)'. The synthetic speech produced by the CART duration model was preferred in the subjective preference test by a small margin and the synthetic speech from the commercial system was superior in the clarity test. In the course of preparing the experiment, a labeled database of spoken Korean with 670 sentences was constructed. As a result of the experiment, a trained duration model for speech synthesis was obtained. The 'Hanmal' diphone database for Korean speech synthesis was also developed as a by-product of the perceptual evaluation.


A study on the Suprasegmental Parameters Exerting an Effect on the Judgment of Goodness or Badness on Korean-spoken English (한국인 영어 발음의 좋음과 나쁨 인지 평가에 영향을 미치는 초분절 매개변수 연구)

  • Kang, Seok-Han; Rhee, Seok-Chae
    • Phonetics and Speech Sciences, v.3 no.2, pp.3-10, 2011
  • This study investigates the role of suprasegmental features in the intelligibility of Korean-spoken English judged as good or bad by Korean and native English raters. It was hypothesized that Korean raters would evaluate differently from native English raters and that the effect might vary depending on the type of suprasegmental factor. Four Korean raters and four native English raters took part in the evaluation of 14 Korean subjects' English speech; the subjects read a given paragraph. The results show that the evaluation of 'intelligibility' differs between the two groups and that the difference comes from their perception of L2 English suprasegmentals.


Recognition of Emotion and Emotional Speech Based on Prosodic Processing

  • Kim, Sung-Ill
    • The Journal of the Acoustical Society of Korea, v.23 no.3E, pp.85-90, 2004
  • This paper presents two new approaches, one concerned with the recognition of emotional speech, such as anger, happiness, normal, sadness, or surprise, and the other with the recognition of emotion in speech. For the proposed speech recognition system handling speech with emotional states, nine prosodic features were first extracted and then given to a prosodic identifier. In the evaluation, recognition rates on emotional speech were considerably higher with the proposed method than with the existing speech recognizer. For emotion recognition, on the other hand, four prosodic parameters, pitch, energy, and their derivatives, were used and trained with discrete-duration continuous hidden Markov models (DDCHMM). In this approach, the emotion models were adapted to a specific speaker's speech using maximum a posteriori (MAP) estimation. In the evaluation, recognition rates on emotional states gradually increased with the number of adaptation samples.
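The speaker adaptation step above relies on maximum a posteriori (MAP) estimation; for a Gaussian mean with a conjugate prior this reduces to a count-weighted interpolation between the prior mean and the sample mean. A stdlib-only sketch with a hypothetical relevance factor tau and pitch values; the paper applies this within DDCHMM emotion models rather than to a single Gaussian:

```python
def map_adapt_mean(prior_mean, samples, tau=10.0):
    """MAP estimate of a Gaussian mean: interpolate between the
    prior mean and the sample mean, weighted by the relevance
    factor `tau` and the number of adaptation samples."""
    n = len(samples)
    sample_mean = sum(samples) / n
    return (tau * prior_mean + n * sample_mean) / (tau + n)

# Hypothetical: a speaker-independent mean pitch of 200 Hz adapted
# toward a specific speaker's observed pitch values.
adapted = map_adapt_mean(200.0, [240.0, 250.0, 245.0, 255.0, 250.0], tau=10.0)
print(adapted)
```

As the number of adaptation samples grows, the estimate moves from the prior toward the speaker's own statistics, which matches the abstract's observation that recognition rates increase with the number of adaptation samples.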