• Title/Abstract/Keywords: inter-speaker variability

Search results: 9 (processing time: 0.023 s)

F-ratio of Speaker Variability in Emotional Speech

  • Yi, So-Pae
    • 음성과학 / Vol. 15, No. 1 / pp. 63-72 / 2008
  • Various acoustic features were extracted and analyzed to estimate the inter- and intra-speaker variability of emotional speech. Tokens of the vowel /a/ from sentences spoken in different emotional modes (sadness, neutral, happiness, fear, and anger) were analyzed. All of the acoustic features (fundamental frequency, spectral slope, HNR, H1-A1, and formant frequency) contributed more to inter- than to intra-speaker variability across all emotions. Each acoustic feature showed a different degree of contribution to speaker discrimination in different emotional modes. Sadness and neutral yielded greater speaker discrimination than the other emotional modes (happiness, fear, and anger, in descending order of F-ratio). In other words, speaker specificity was better represented in sadness and neutral than in happiness, fear, and anger for any of the acoustic features.

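The F-ratio used in the study above gauges how well a feature separates speakers: the ratio of between-speaker variance to within-speaker variance of that feature. A minimal sketch of that computation, with illustrative numbers rather than the paper's data:

```python
# One-way ANOVA-style F-ratio for a single acoustic feature,
# grouped by speaker. Higher values mean the feature varies more
# between speakers than within a speaker, i.e. it discriminates
# speakers better. Values below are made up for illustration.

def f_ratio(groups):
    """groups: list of lists, one list of feature values per speaker."""
    all_vals = [v for g in groups for v in g]
    grand_mean = sum(all_vals) / len(all_vals)
    # Between-speaker variance: spread of speaker means around the grand mean.
    between = sum(
        len(g) * ((sum(g) / len(g)) - grand_mean) ** 2 for g in groups
    ) / (len(groups) - 1)
    # Within-speaker variance: spread of each speaker's tokens around their own mean.
    within = sum(
        sum((v - sum(g) / len(g)) ** 2 for v in g) for g in groups
    ) / (len(all_vals) - len(groups))
    return between / within

# Illustrative F0 values (Hz) for /a/ tokens from three speakers:
speakers = [[118, 122, 120], [181, 179, 183], [148, 152, 150]]
print(f_ratio(speakers))  # large value: F0 separates these speakers well
```

On this reading, the study's finding that sadness and neutral show the highest F-ratios means the between-speaker term dominates most in those emotional modes.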

Inter-speaker and intra-speaker variability on sound change in contemporary Korean

  • Kim, Mi-Ryoung
    • 말소리와 음성과학 / Vol. 9, No. 3 / pp. 25-32 / 2017
  • Besides their effect on the f0 contour of the following vowel, Korean stops are undergoing a sound change in which a partial or complete consonantal merger in voice onset time (VOT) is taking place between aspirated and lax stops. Many previous studies of sound change have focused mainly on group-normative effects, that is, effects representative of the population as a whole; few systematic quantitative studies of change in adult individuals have been carried out. The current study examines whether the sound change holds for individual speakers, focusing on inter-speaker and intra-speaker variability in sound change in contemporary Korean. Speech data were collected from thirteen Seoul Korean speakers studying abroad in America. In order to minimize possible confounds in speech production, socio-phonetic factors such as age, gender, dialect, speech rate, and L2 exposure period were controlled when recruiting participants. The results showed that, for nine of the thirteen speakers, the consonantal merger between the aspirated and lax stops is taking place in terms of VOT. There was also intra-speaker variation in the merger in three respects: first, whether the consonantal (VOT) merger between the two stops is in progress; second, whether VOTs for aspirated stops are getting shorter (i.e., the aspirated-shortening process); and third, whether VOTs for lax stops are getting longer (i.e., the lax-lengthening process). The remarkable inter-speaker and intra-speaker variability indicates an ongoing sound change in the stop system of contemporary Korean: some speakers are early adopters or active propagators of the change whereas others are not. Further study is necessary to see whether inter-speaker differences exceed intra-speaker differences in sound change.
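The per-speaker merger question above can be sketched as a simple VOT comparison. The function name, threshold, and values below are illustrative assumptions, not the paper's criterion or data:

```python
# Flag a VOT merger for one speaker: if that speaker's aspirated and
# lax stops no longer differ much in VOT, the contrast on this cue has
# (partially) merged and must be carried by another cue such as f0.
# The 15 ms threshold is an arbitrary illustration, not the study's test.

def mean(xs):
    return sum(xs) / len(xs)

def vot_merged(aspirated_ms, lax_ms, threshold_ms=15.0):
    """True when the mean VOT difference falls below the threshold."""
    return abs(mean(aspirated_ms) - mean(lax_ms)) < threshold_ms

# Illustrative VOTs (ms): a conservative speaker vs. an early adopter.
conservative = vot_merged([95, 100, 105], [55, 60, 65])   # ~40 ms apart
early_adopter = vot_merged([72, 75, 78], [65, 70, 72])    # ~6 ms apart
print(conservative, early_adopter)  # False True
```

A real analysis would of course use a statistical test over many tokens per speaker rather than a fixed mean-difference threshold.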

Analysis of the Voice Quality in Emotional Speech Using Acoustical Parameters

  • 조철우;리타오
    • 대한음성학회지:말소리 / Vol. 55 / pp. 119-130 / 2005
  • The aim of this paper is to investigate some acoustical characteristics of voice quality features in an emotional speech database. Six parameters were measured and compared across six emotions (normal, happiness, sadness, fear, anger, and boredom) and six speakers, and inter-speaker and intra-speaker variability were measured. Some intra-speaker consistency in parameter change across the emotions was observed, but inter-speaker consistency was not.


An EMG Study of the Feature 'Tensity'

  • Kim, Dae-Won
    • 대한후두음성언어의학회지 / Vol. 5, No. 1 / pp. 22-28 / 1994
  • Previous studies report that in English there is no EMG evidence for the feature tense-lax distinction. The technique of electromyography (EMG) was used to test whether this claim holds true, particularly in unstressed syllables. It was found that in unstressed syllables the peak EMG amplitude from the orbicularis oris superior muscle was significantly greater in /p/ than in /b/, while in stressed syllables this difference was negligible. It was hypothesized that in stressed syllables /p/ and /b/ may be differentiated by EMG activity from a muscle other than the orbicularis oris superior, e.g., the respiratory muscles relating to 'aspiration' or the depressor anguli oris muscle. In Korean, there was a clear labial gesture for the feature tense-lax distinction. The phoneme-sensitive manifestation of stress, and some possible reasons for the inter-speaker variability in the data and the variability within a given speaker, are discussed.


An Improvement of Korean Speech Recognition Using a Compensation of the Speaking Rate by the Ratio of a Vowel Length

  • 박준배;김태준;최성용;이정현
    • 대한전자공학회:학술대회논문집 / Proceedings of the 2003 IEEK Computer Society Fall Conference / pp. 195-198 / 2003
  • The accuracy of an automatic speech recognition system depends on the presence of background noise and on speaker variability such as sex, intonation, and speaking rate. In particular, both inter-speaker and intra-speaker variation in speaking rate is a serious cause of misrecognition. In this paper, we propose a method of compensating for speaking rate using the ratio of each vowel's length within a phrase. First, the number of feature vectors in a phrase is estimated from speaking-rate information. Second, the estimated number of feature vectors is assigned to each syllable of the phrase according to the ratio of its vowel length. Finally, feature vectors are extracted according to the number assigned to each syllable. As a result, the accuracy of automatic speech recognition was improved using the proposed speaking-rate compensation method.

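The allocation step described in the abstract above can be sketched as follows; the function name and numbers are illustrative, not the paper's implementation:

```python
# Divide a phrase's estimated feature-vector budget among its syllables
# in proportion to each syllable's vowel length, so fast and slow
# stretches of speech receive proportionate analysis frames.

def assign_vectors(total_vectors, vowel_lengths_ms):
    total_len = sum(vowel_lengths_ms)
    # Ideal (fractional) share per syllable, by vowel-length ratio.
    shares = [total_vectors * l / total_len for l in vowel_lengths_ms]
    counts = [int(s) for s in shares]
    # Hand out the frames lost to truncation, largest remainders first.
    by_remainder = sorted(range(len(shares)),
                          key=lambda i: shares[i] - counts[i], reverse=True)
    for i in by_remainder[: total_vectors - sum(counts)]:
        counts[i] += 1
    return counts

# 20 feature vectors across four syllables with these vowel lengths (ms):
print(assign_vectors(20, [120, 60, 180, 90]))  # → [5, 3, 8, 4]
```

The largest-remainder rounding keeps the counts summing exactly to the estimated total while staying as close to the vowel-length ratios as integer counts allow.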

Visual Presentation of Connected Speech Test (CST)

  • Jeong, Ok-Ran;Lee, Sang-Heun;Cho, Tae-Hwan
    • 음성과학 / Vol. 3 / pp. 26-37 / 1998
  • The Connected Speech Test (CST) was developed to test hearing aid performance using realistic stimuli (connected speech) presented in a background of noise with a visible speaker. The CST had not previously been investigated as a measure of speechreading ability using its visual portion alone. Thirty subjects were administered the 48 test lists of the CST in a visual-only presentation mode. Statistically significant differences were found between the 48 test lists and between the 12 passages of the CST (the 48 lists divided into 12 groups of 4, which were averaged). No significant differences were found between male and female subjects; however, in all but one case, females scored better than males. No significant differences were found between students in communication disorders and students in other departments. Intra- and inter-subject variability across test lists and passages was high. Suggestions for further research include making the scoring of the CST more contextually based and changing the speaker for the CST.


Normalized gestural overlap measures and spatial properties of lingual movements in Korean non-assimilating contexts

  • Son, Minjung
    • 말소리와 음성과학 / Vol. 11, No. 3 / pp. 31-38 / 2019
  • The current electromagnetic articulography study analyzes several articulatory measures and examines whether, and if so, how they are interconnected, with a focus on cluster types and an additional consideration of speech rates and morphosyntactic contexts. Using articulatory data on non-assimilating contexts from three Seoul-Korean speakers, we examine how speaker-dependent gestural overlap between C1 and C2 in a low vowel context (/a/-to-/a/) and their resulting intergestural coordination are realized. Examining three C1C2 sequences (/k(#)t/, /k(#)p/, and /p(#)t/), we found that three normalized gestural overlap measures (movement onset lag, constriction onset lag, and constriction plateau lag) were correlated with one another for all speakers. Limiting the scope of analysis to C1 velar stop (/k(#)t/ and /k(#)p/), the results are recapitulated as follows. First, for two speakers (K1 and K3), i) longer normalized constriction plateau lags (i.e., less gestural overlap) were observed in the pre-/t/ context, compared to the pre-/p/ (/k(#)t/>/k(#)p/), ii) the tongue dorsum at the constriction offset of C1 in the pre-/t/ contexts was more anterior, and iii) these two variables are correlated. Second, the three speakers consistently showed greater horizontal distance between the vertical tongue dorsum and the vertical tongue tip position in /k(#)t/ sequences when it was measured at the time of constriction onset of C2 (/k(#)t/>/k(#)p/): the tongue tip completed its constriction onset by extending further forward in the pre-/t/ contexts than the uncontrolled tongue tip articulator in the pre-/p/ contexts (/k(#)t/>/k(#)p/). Finally, most speakers demonstrated less variability in the horizontal distance of the lingual-lingual sequences, which were taken as the active articulators (/k(#)t/=/k(#)p/ for K1; /k(#)t/

Korean Broadcast News Transcription Using Morpheme-based Recognition Units

  • Kwon, Oh-Wook;Alex Waibel
    • The Journal of the Acoustical Society of Korea / Vol. 21, No. 1E / pp. 3-11 / 2002
  • Broadcast news transcription is one of the hardest tasks in speech recognition because broadcast speech signals show much variability in speech quality and in channel and background conditions. We developed a Korean broadcast news speech recognizer. We used a morpheme-based dictionary and language model to reduce the out-of-vocabulary (OOV) rate, and concatenated original morpheme pairs of short length or high frequency in order to reduce insertion and deletion errors due to short morphemes. We used a lexicon with multiple pronunciations to reflect inter-morpheme pronunciation variation without severe modification of the search tree. By using merged morphemes as recognition units, we achieved an OOV rate of 1.7%, comparable to that of European languages with a 64k vocabulary. We implemented a hidden Markov model-based recognizer with vocal tract length normalization and online speaker adaptation by maximum likelihood linear regression. Experimental results showed that the recognizer yielded a 21.8% morpheme error rate for anchor speech and 31.6% for mostly noisy reporter speech.
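The morpheme-pair concatenation described above can be sketched as a frequency-based merge over a morpheme-segmented corpus. The corpus, morphemes, and count threshold below are illustrative, not the actual system:

```python
from collections import Counter

# Concatenate adjacent morpheme pairs that occur frequently, turning
# fragile short tokens into longer single recognition units and thereby
# reducing insertion/deletion errors on short morphemes.

def merge_frequent_pairs(sentences, min_count=2):
    pairs = Counter()
    for sent in sentences:
        pairs.update(zip(sent, sent[1:]))
    frequent = {p for p, c in pairs.items() if c >= min_count}
    merged_sents = []
    for sent in sentences:
        out, i = [], 0
        while i < len(sent):
            if i + 1 < len(sent) and (sent[i], sent[i + 1]) in frequent:
                out.append(sent[i] + sent[i + 1])  # merge the frequent pair
                i += 2
            else:
                out.append(sent[i])
                i += 1
        merged_sents.append(out)
    return merged_sents

# Toy romanized morpheme sequences; "hak"+"kyo" recurs, so it is merged.
corpus = [["hak", "kyo", "e"], ["hak", "kyo", "ga"], ["kyo", "e"]]
print(merge_frequent_pairs(corpus))
```

The paper also merges very short morphemes regardless of frequency; that criterion could be added as a second condition on the pair lengths.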

Stress Effects on Korean Vowels with Reference to Rhythm

  • 윤일승
    • 대한음성학회지:말소리 / No. 67 / pp. 1-16 / 2008
  • Stress effects on Korean vowels were investigated with reference to rhythm. We measured three acoustic correlates of stress (duration: VOT and vowel duration; F0; intensity) for seven pairs of stressed vs. unstressed Korean vowels /i, ɛ(e), a, o, u, ɨ, ə/. The results revealed that stress had only inconsistent and weak effects on duration, which supports the view that Korean is not a stress-timed language, insofar as strong stress effects on duration are considered crucial to stress-timing. On the other hand, Korean stressed vowels were characterized primarily by higher F0 and secondarily by stronger intensity. However, speakers generally traded off F0 against intensity when stressing an utterance rather than strengthening both correlates proportionately. Great inter-speaker variability was found, especially in duration.
