• Title/Summary/Keyword: Affective prosody

Interaction between emotional content of word and prosody in the evaluation of emotional valence (정서의미 전달에 있어서 운율과 단어 정보의 상호작용.)

  • Choi, Moon-Gee; Nam, Ki-Chun / Proceedings of the KSPS conference / 2007.05a / pp.67-70 / 2007
  • The present paper focuses on the interaction between lexical-semantic information and affective prosody. Previous studies have shown that the influence of lexical-semantic information on the affective evaluation of prosody is relatively clear, whereas the influence of emotional prosody on word evaluation remains ambiguous. In the present study, we explore whether affective prosody influences the evaluation of the affective meaning of a word and vice versa, using more ecologically valid stimuli (sentences) rather than isolated words. Participants evaluated the emotional valence of sentences recorded with affective prosody (negative, neutral, and positive) in Experiment 1 and the emotional valence of the prosodies themselves in Experiment 2. The results showed that the emotional valence of prosody can influence the emotional evaluation of sentences and vice versa. Interestingly, positive prosody appears to contribute most to this interaction.
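
A minimal sketch of the interaction test implied by Experiment 1, assuming per-trial valence ratings in a flat table; the column names, values, and the two-way ANOVA in Python are illustrative assumptions, not the authors' actual analysis.

    import pandas as pd
    from statsmodels.formula.api import ols
    from statsmodels.stats.anova import anova_lm

    # Hypothetical per-trial data: sentence valence x prosody condition, with
    # judged valence on a -3..+3 scale (four observations per cell).
    df = pd.DataFrame({
        "sentence": ["neg", "neg", "neu", "neu", "pos", "pos"] * 4,
        "prosody":  ["neg", "pos", "neg", "pos", "neg", "pos"] * 4,
        "rating":   [-2, -1, -1, 1, 1, 3, -3, -1, 0, 1, 2, 2,
                     -2, 0, -1, 0, 1, 2, -2, -1, 0, 1, 1, 3],
    })

    # A prosody main effect (or an interaction with sentence meaning) indicates
    # that how a sentence is spoken shifts judgments of what it means.
    model = ols("rating ~ C(sentence) * C(prosody)", data=df).fit()
    print(anova_lm(model))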

What you said vs. how you said it. ('어떻게 말하느냐?' vs. '무엇을 말하느냐?')

  • Choi, Moon-Gee; Nam, Ki-Chun / Proceedings of the KSPS conference / 2006.11a / pp.11-13 / 2006
  • The present paper focuses on the interaction between lexical-semantic information and affective prosody. More specifically, we explore whether affective prosody influences the evaluation of the affective meaning of a word. To this end, we asked participants to listen to words recorded with affective prosody and to evaluate their emotional content. The results showed that, first, emotional evaluation was slower when the word meaning was negative than when it was positive. Second, evaluation was faster when the prosody of a word was negative than when it was neutral or positive. Finally, when the affective meaning of the word and its prosody were congruent, response times were faster than when they were incongruent.
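
A minimal sketch of the congruency comparison just described, assuming per-trial response times in a flat table; the column names and values are made up for illustration.

    import pandas as pd

    # Hypothetical per-trial data: valence of the word's meaning, valence of
    # its prosody, and the evaluation response time in milliseconds.
    trials = pd.DataFrame({
        "word_valence":    ["pos", "pos", "neg", "neg", "pos", "pos", "neg", "neg"],
        "prosody_valence": ["pos", "neg", "pos", "neg", "pos", "neg", "pos", "neg"],
        "rt_ms":           [598, 671, 702, 631, 612, 688, 695, 655],
    })

    # A trial is congruent when word meaning and prosody share the same valence;
    # the paper reports faster responses on congruent trials.
    trials["congruent"] = trials["word_valence"] == trials["prosody_valence"]
    print(trials.groupby("congruent")["rt_ms"].mean())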

A comparison between affective prosodic characteristics observed in children with cochlear implant and normal hearing (인공와우 이식 아동과 정상 청력 아동의 정서적 운율 특성 비교)

  • Oh, Yeong Geon; Seong, Cheoljae / Phonetics and Speech Sciences / v.8 no.3 / pp.67-78 / 2016
  • This study examined the affective prosodic characteristics of children with cochlear implants (CI, hereafter) and children with normal hearing (NH, hereafter), along with listeners' perception of them. Speech samples were acquired from 15 NH and 15 CI children. Eight speech-language pathologists (SLPs) perceptually evaluated the affective types using Praat's ExperimentMFC. Acoustically, there were statistically significant differences between the two groups for each affective type: joy (discriminated by intensity deviation), anger (dominantly by intensity-related variables and partly by duration-related variables), and sadness (by all aspects of the prosodic variables). Compared with NH children, CI children's utterances were much louder when expressing joy; louder and slower when expressing anger; and higher, louder, and slower when expressing sadness. Listeners showed much higher agreement when evaluating NH children than the CI group (p<.001). Chi-square results revealed that listener judgments were not coherent for CI utterances but were coherent for NH utterances (CI: p<.01; NH: p=.48). When CI utterances were classified into the three emotional types by discriminant analysis (DA) using eight acoustic variables, speed-related variables such as articulation rate played the primary role.
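
A minimal sketch of the kind of discriminant analysis the abstract describes: three emotion classes separated by eight acoustic variables. The scikit-learn pipeline and the randomly generated data are illustrative assumptions, not the authors' procedure.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(0)

    # Hypothetical data: 45 utterances x 8 acoustic variables (e.g., intensity
    # deviation, articulation rate, F0 mean, ...), labeled with 3 emotions.
    X = rng.normal(size=(45, 8))
    y = rng.choice(["joy", "anger", "sadness"], size=45)

    # Fit the discriminant model; the loadings show which variables drive each
    # discriminant axis (the paper reports speed-related variables such as
    # articulation rate playing the primary role for CI utterances).
    lda = LinearDiscriminantAnalysis(n_components=2).fit(X, y)
    print(lda.scalings_)    # variable loadings per discriminant axis
    print(lda.score(X, y))  # resubstitution classification accuracy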

Development and validation of a Korean Affective Voice Database (한국형 감정 음성 데이터베이스 구축을 위한 타당도 연구)

  • Kim, Yeji; Song, Hyesun; Jeon, Yesol; Oh, Yoorim; Lee, Youngmee / Phonetics and Speech Sciences / v.14 no.3 / pp.77-86 / 2022
  • In this study, we report the validation results of the Korean Affective Voice Database (KAV DB), an affective voice database available for scientific and clinical use, comprising a total of 113 validated affective voice stimuli. The KAV DB includes audio recordings of two actors (one male and one female), each uttering 10 semantically neutral sentences with the intention of conveying six different affective states (happiness, anger, fear, sadness, surprise, and neutral). The database was organized into three separate voice stimulus sets in order to validate the KAV DB. Participants rated the stimuli on six rating scales corresponding to the six targeted affective states, using a horizontal visual analog scale (0-100). The KAV DB showed high internal consistency for the voice stimuli (Cronbach's α=.847), as well as high sensitivity (mean=82.8%) and specificity (mean=83.8%). The KAV DB is expected to be useful for both academic research and clinical purposes in the field of communication disorders, and is available for download at https://kav-db.notion.site/KAV-DB-7539a36abe2e414ebf4a50d80436b41a.
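
A minimal sketch of the internal-consistency figure reported above: Cronbach's α computed from a raters-by-stimuli score matrix using the standard formula α = k/(k-1) · (1 - Σσ²_item/σ²_total). The matrix dimensions and ratings here are made up; only the formula is standard.

    import numpy as np

    def cronbach_alpha(scores: np.ndarray) -> float:
        # scores: raters (rows) x stimuli (columns).
        k = scores.shape[1]                          # number of stimuli
        item_var = scores.var(axis=0, ddof=1).sum()  # sum of per-stimulus variances
        total_var = scores.sum(axis=1).var(ddof=1)   # variance of raters' totals
        return k / (k - 1) * (1 - item_var / total_var)

    # Hypothetical ratings: 20 raters x 10 voice stimuli on a 0-100 scale.
    rng = np.random.default_rng(1)
    base = rng.uniform(30, 90, size=(20, 1))  # each rater's general tendency
    ratings = np.clip(base + rng.normal(0, 8, size=(20, 10)), 0, 100)
    print(round(cronbach_alpha(ratings), 3))  # high α when raters are consistent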

Discrimination of Emotional States In Voice and Facial Expression

  • Kim, Sung-Ill; Yasunari Yoshitomi; Chung, Hyun-Yeol / The Journal of the Acoustical Society of Korea / v.21 no.2E / pp.98-104 / 2002
  • The present study describes a combination method for recognizing human affective states such as anger, happiness, sadness, and surprise. We extracted emotional features from voice signals and facial expressions and trained models on them to recognize emotional states, using hidden Markov models (HMMs) and a neural network (NN). For voice, we used prosodic parameters such as pitch, energy, and their derivatives, which were modeled with HMMs for recognition. For facial expressions, on the other hand, we used feature parameters extracted from thermal and visible images, which were trained with an NN for recognition. Recognition rates for the combined voice and facial-expression parameters were better than those for either set of parameters alone. The simulation results were also compared with human questionnaire results.
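
A minimal sketch of the HMM side of such a system: one Gaussian HMM per emotion trained on frame-level prosodic features, with classification by maximum log-likelihood. The hmmlearn library, feature layout, and synthetic data are substitutions, since the abstract does not name the authors' toolkit.

    import numpy as np
    from hmmlearn import hmm  # pip install hmmlearn

    EMOTIONS = ["anger", "happiness", "sadness", "surprise"]
    rng = np.random.default_rng(2)

    def train_emotion_models(train_feats):
        # Fit one Gaussian HMM per emotion on stacked frame-level features.
        models = {}
        for emo in EMOTIONS:
            X = np.vstack(train_feats[emo])                # frames x 4 features
            lengths = [len(seq) for seq in train_feats[emo]]
            m = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
            m.fit(X, lengths)
            models[emo] = m
        return models

    def classify(models, utterance):
        # Pick the emotion whose HMM assigns the utterance the highest score.
        return max(EMOTIONS, key=lambda emo: models[emo].score(utterance))

    # Hypothetical features per frame: pitch, energy, and their derivatives.
    train = {emo: [rng.normal(i, 1.0, size=(80, 4)) for _ in range(5)]
             for i, emo in enumerate(EMOTIONS)}
    models = train_emotion_models(train)
    print(classify(models, rng.normal(2.0, 1.0, size=(80, 4))))  # -> "sadness"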

INTONATION OF TAIWANESE: A COMPARATIVE STUDY OF THE INTONATION PATTERNS IN L1, IL, AND L2

  • Chin Chin Tseng / Proceedings of the KSPS conference / 1996.10a / pp.574-575 / 1996
  • The theme of the current study is to study the intonation of Taiwanese (Tw.) by comparing the intonation patterns in the native language (L1), the target language (L2), and the interlanguage (IL). Studies on interlanguage have dealt primarily with segments. Although some studies have addressed interlanguage intonation, more often than not they did not offer evidence for their statements, and their hypotheses were based mainly on impression. A formal description of interlanguage intonation is therefore necessary for further development of this field. The basic assumption of this study is that native speakers of one language perceive and produce a second language in ways closely related to the patterns of their first language. Several studies on interlanguage prosody have suggested that prosodic structure and rules are more subject to transfer than certain other phonological phenomena, given their abstract structural nature and generality (Vogel 1991). Broselow (1988) also shows that interlanguage may provide evidence for particular analyses of the native language grammar which may not be available from the study of the native language alone. Several research questions are addressed in the current study. A. How does duration vary among native and non-native utterances? The results show a significant difference in duration between the beginning English learners and the native speakers of American English for all eleven English sentences: the beginning English learners take almost twice as much time (1.70 sec) as the Americans (0.97 sec) to produce the English sentences. The results also show that the American speakers take significantly longer to speak all ten Taiwanese utterances, taking almost twice as much time (2.24 sec) as the adult Taiwanese speakers (1.14 sec). B. Does proficiency level influence the performance of interlanguage intonation? Can native intonation patterns be achieved by a non-native speaker? Wenk (1986) considers proficiency level a variable related to the extent of L1 influence; his study showed that beginners do transfer rhythmic features of the L1 and that advanced learners can and do succeed in overcoming mother-tongue influence. The current study shows that proficiency level does play a role in the acquisition of English intonation by Taiwanese speakers: the duration and pitch range of the advanced learners are much closer to those of the native American English speakers than those of the beginners, but even the advanced learners still cannot achieve native-like intonation patterns. C. Do Taiwanese speakers have a narrower pitch range than American English speakers? Ross et al. (1986) suggest that the presence of tone in a language significantly inhibits the unrestricted manipulation of three acoustical measures of prosody involved in producing local pitch changes in the fundamental frequency contour during affective signaling. Does the presence of tone in a language inhibit speakers' ability to modulate intonation? The results do show that the Taiwanese speakers have a narrower pitch range than the American English speakers: both advanced (84 Hz) and beginning (58 Hz) learners of English show a significantly narrower F0 range than the Americans' (112 Hz), and the difference is greater between the beginning learners and the native American English speakers.
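
A minimal sketch of how the two measures compared above, utterance duration and F0 range, can be extracted from a recording. The librosa library, pitch-tracker settings, and file name are assumptions; the abstract does not say how the measurements were made.

    import librosa
    import numpy as np

    # Hypothetical input file; any mono speech recording works.
    y, sr = librosa.load("utterance.wav", sr=None)

    # Utterance duration in seconds (the study compares L1 vs. L2 durations).
    duration = librosa.get_duration(y=y, sr=sr)

    # F0 contour via the pYIN tracker; unvoiced frames come back as NaN and
    # are dropped before computing the pitch range in Hz.
    f0, _, _ = librosa.pyin(y, fmin=65, fmax=400, sr=sr)
    f0 = f0[~np.isnan(f0)]
    f0_range = float(f0.max() - f0.min()) if f0.size else 0.0

    print(f"duration: {duration:.2f} s, F0 range: {f0_range:.0f} Hz")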
