• Title/Summary/Keyword: fundamental frequency of speech

Search Results: 205

Phonetic meaning of clarity and turbidity (청탁의 음성학적 의미)

  • Park, Hansang
    • Phonetics and Speech Sciences
    • /
    • v.9 no.4
    • /
    • pp.77-89
    • /
    • 2017
  • This study investigates the phonetic meaning of clarity and turbidity (淸濁), a distinction that has been used in psychoacoustics, musicology, and linguistics in both the East and the West. To clarify the phonetic meaning of clarity and turbidity, this study conducted three perception tests. First, 34 subjects were asked to choose between Clear and Turbid in a forced-choice task for 5 pure and 5 complex tones ranging from A2 to A6 in octave steps. Second, they made the same choice for 25 pure and 25 complex tones ranging from A2 to A4 in semitone steps. Third, they made the choice for 8 vowels differing in formant and fundamental frequencies. Results showed that there is a certain range of tones perceived as clear, that the clarity level increases as fundamental frequency increases, and that pure tones have a higher level of clarity than complex tones of the same fundamental frequency. Results also showed that vocal tract resonance enhances the clarity level on the whole, and that lower vowels have a higher level of clarity than higher ones. This study is significant in that it demonstrates that the clarity level is proportional to the fundamental frequency and to the first formant frequency, all else being equal.

Korean Monophthong Development in Normal 4-, 5-, and 6-Years-Olds (4세, 5세, 6세 정상 아동의 한국어 단모음 발달)

  • Kang, Eunyeong
    • Journal of The Korean Society of Integrative Medicine
    • /
    • v.7 no.4
    • /
    • pp.89-104
    • /
    • 2019
  • Purpose: The purpose of this study was to investigate the development of Korean vowels by acoustically analyzing whether children aged 4 to 6 produce Korean vowels differently according to age and gender. Methods: A total of 104 children aged 4~6 years (56 males and 48 females) participated in this study and were classified as 4, 5, or 6 years old. Vowel speech data were obtained by asking the subjects to pronounce meaningful words in which the vowel in question was located in the first syllable. Speech analysis was performed using the Multi-Speech 3700 program. Results: Age, gender, and the vowel being pronounced all had significant effects on intensity, which decreased significantly with increasing age and was significantly higher in male children than in female children. Neither age, gender, nor the vowel being produced affected the fundamental frequency. Age and vowel had significant effects on the first and second formants, which decreased significantly with age and showed no gender difference. Conclusion: The results of this study showed that although children aged 4~6 have similar anatomical structures, the maturity of the speech motor skills required to pronounce vowels was correlated with age. The results can be used to evaluate children's speech and to develop speech therapy programs.

A Study on Pitch Perception of Normal Korean (한국 성인 음성의 음도인식에 관한 연구)

  • Jeong, Ok-Ran;Kim, Hyung-Soon;Kim, Young-Tae;Sub, Jang-Su
    • Speech Sciences
    • /
    • v.1
    • /
    • pp.315-323
    • /
    • 1997
  • This study attempts to determine the fundamental frequency levels of male and female voices that Koreans perceive as normal. Seventy-three college students majoring in speech pathology participated in the study on a voluntary basis. The subjects listened to a male voice with fundamental frequencies of 60, 80, 100, 120, 140, 160, 180, and 200 Hz, and a female voice with fundamental frequencies of 140, 160, 180, 200, 220, 240, 260, and 280 Hz. The PSOLA (Pitch Synchronous Overlap-Add) method and a harmonic modeling method of the speech signal were used to change pitch in 20 Hz steps. The voices were presented in random order to prevent listener bias. The results were as follows: first, 46.6% judged the male voice of 120 Hz as normal, while 19.2% each judged 140 Hz and 160 Hz as normal; second, 50.7% perceived the female voice of 220 Hz as normal, with 32.9% and 30.1% responding to 200 Hz and 240 Hz, respectively. Problems and recommendations for future investigation are discussed.


A Study on the Stress Realization of English Homographic Words (영어 동형이의어의 강세실현에 관한 연구)

  • Kim, Ok-Young;Koo, Hee-San
    • Phonetics and Speech Sciences
    • /
    • v.2 no.2
    • /
    • pp.51-60
    • /
    • 2010
  • This study examines how Korean speakers realize English stress in homographic words. Korean speakers performed the production experiment three times: before stress instruction, immediately after instruction, and six weeks after instruction. First, the duration, fundamental frequency, and intensity of the vowel in the stressed syllable of three homographic words produced by Korean speakers were compared with those of native speakers of English. The results show that when the words were used as nouns, before instruction the Korean speakers produced shorter duration and lower fundamental frequency in the stressed vowel than the native speakers, indicating that they did not assign primary stress to the first syllable of the nouns. After instruction, the duration and fundamental frequency values increased and the differences between the two groups decreased. Next, the values of these stress features measured at the three times were analyzed to find out how they changed through instruction. The analysis shows that after instruction the values of all three features increased compared to those before instruction, with the biggest change in vowel duration, followed by fundamental frequency. Six weeks after instruction, the duration and intensity values were lower than those measured immediately after instruction. This means that instruction helps Korean speakers assign stress in English homographic words, but that instruction and practice are needed repeatedly.


The fundamental frequency (f0) distribution of American speakers in a spontaneous speech corpus

  • Byunggon Yang
    • Phonetics and Speech Sciences
    • /
    • v.16 no.1
    • /
    • pp.11-16
    • /
    • 2024
  • The fundamental frequency (f0), an acoustic measure of vocal fold vibration, serves as an indicator of the speaker's emotional state and of language-specific patterns in daily conversation. This study aimed to examine the f0 distribution in an English corpus of spontaneous speech, establishing normative data for American speakers. The corpus involved 40 participants engaging in free discussions of daily activities and personal viewpoints. Using Praat, f0 values were collected, with outliers filtered after nonspeech sounds and interviewer voices were removed. Statistical analyses were performed with R. Results indicated a median f0 value of 145 Hz across all speakers. The f0 values exhibited a right-skewed, peaked distribution within a frequency range of 216 Hz from 75 Hz to 339 Hz. The female f0 range was wider than the male range, with a median of 113 Hz for males and 181 Hz for females. This spontaneous speech corpus provides linguists valuable insight into f0 variation among individuals or groups in a language. Further research is encouraged to develop analytical and statistical measures for establishing reliable f0 standards for the general population.
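The distributional measures this abstract reports (median, range, right skew, peakedness) can be sketched with standard tools. This is a minimal illustration on synthetic f0 values, not the corpus data; the lognormal shape and outlier bounds are assumptions chosen only to mimic the reported profile:

```python
import numpy as np
from scipy import stats

# Hypothetical f0 samples (Hz) standing in for values extracted with Praat;
# a lognormal shape gives a right-skewed profile like the one reported.
rng = np.random.default_rng(0)
f0 = rng.lognormal(mean=np.log(145), sigma=0.3, size=5000)

# Filter outliers outside a plausible speech range, as the study does.
f0 = f0[(f0 >= 75) & (f0 <= 600)]

median_f0 = np.median(f0)        # robust central tendency
f0_range = f0.max() - f0.min()   # spread of the distribution
skewness = stats.skew(f0)        # > 0 indicates a right-skewed shape
```

A median is preferred over a mean here precisely because skewed f0 distributions pull the mean toward the long upper tail.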

Speech Quality of a Sinusoidal Model Depending on the Number of Sinusoids

  • Seo, Jeong-Wook;Kim, Ki-Hong;Seok, Jong-Won;Bae, Keun-Sung
    • Speech Sciences
    • /
    • v.7 no.1
    • /
    • pp.17-29
    • /
    • 2000
  • The STC (Sinusoidal Transform Coding) is a vocoding technique that uses a sinusoidal speech model to obtain high-quality speech at a low data rate. It models and synthesizes the speech signal with the fundamental frequency and its harmonic elements in the frequency domain. To reduce the data rate, it is necessary to represent the sinusoidal amplitudes and phases with as small a number of peaks as possible while maintaining speech quality. As basic research toward a low-rate speech coding algorithm using the sinusoidal model, this paper investigates speech quality as a function of the number of sinusoids. By varying the number of spectral peaks from 5 to 40, speech signals are reconstructed, and their quality is evaluated using a spectral envelope distortion measure and MOS (Mean Opinion Score). Two approaches are used to obtain the spectral peaks: a conventional STFT (Short-Time Fourier Transform), and a multiresolution analysis method.
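The core peak-picking idea, keeping only the strongest spectral components and resynthesizing from them, can be illustrated on a single frame. This is a toy sketch of the principle, not the STC coder itself; the frame length, sampling rate, and test signal are all invented for the example:

```python
import numpy as np

def resynthesize_from_peaks(frame, k):
    """Rebuild one frame from its k largest-magnitude FFT bins."""
    spectrum = np.fft.rfft(frame)
    mag = np.abs(spectrum)
    keep = np.argsort(mag)[-k:]        # indices of the k strongest bins
    pruned = np.zeros_like(spectrum)
    pruned[keep] = spectrum[keep]      # keep amplitude AND phase of the peaks
    return np.fft.irfft(pruned, n=len(frame))

# A toy "voiced" frame: a 150 Hz fundamental plus two harmonics at 16 kHz.
fs, n = 16000, 512
t = np.arange(n) / fs
frame = (np.sin(2*np.pi*150*t)
         + 0.5*np.sin(2*np.pi*300*t)
         + 0.25*np.sin(2*np.pi*450*t))

approx = resynthesize_from_peaks(frame, k=20)
error = np.linalg.norm(frame - approx) / np.linalg.norm(frame)
```

Increasing `k` can only add spectral energy back, so reconstruction error falls monotonically with the number of retained peaks, which is the trade-off the paper measures against data rate.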


The Acoustic Study on the Voices of Korean Normal Adults (한국 성인의 정상 음성에 관한 기본 음성 측정치 연구)

  • Pyo, H.Y.;Sim, H.S.;Song, Y.K.;Yoon, Y.S.;Lee, E.K.;Lim, S.E.;Hah, H.R.;Choi, H.S.
    • Speech Sciences
    • /
    • v.9 no.2
    • /
    • pp.179-192
    • /
    • 2002
  • Our present study was performed to investigate acoustically the voices of Korean normal adults, with a number of subjects large enough to be reliable. 120 Korean normal adults (60 males and 60 females) aged 20 to 39 years produced three sustained vowels, /a/, /i/, and /u/, and read part of the 'Taking a Walk' paragraph. By analyzing the recordings acoustically with the MDVP of CSL, we obtained the fundamental frequency (F0), jitter, shimmer, and NHR of the sustained vowels, and the speaking fundamental frequency (SF0), highest speaking frequency (SFhi), and lowest speaking frequency (SFlo) of the continuous speech. On average, male voices showed 118.1~122.6 Hz in F0, 0.467~0.659% in jitter, 1.538~2.674% in shimmer, 0.117~0.114 in NHR, 120.8 Hz in SF0, 183.2 Hz in SFhi, and 82.6 Hz in SFlo. Female voices showed 211.6~220.3 Hz in F0, 0.678~0.935% in jitter, 1.478~2.582% in shimmer, 0.098~0.114 in NHR, 217.1 Hz in SF0, 340.9 Hz in SFhi, and 136.0 Hz in SFlo. Among the seven parameters, every parameter except shimmer showed a significant difference between male and female voices. When the three vowels were compared, they differed significantly from one another in shimmer and NHR for both genders, but not in F0 for males or in jitter for females.
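Jitter and shimmer, two of the perturbation measures reported above, have simple textbook definitions: the mean absolute cycle-to-cycle change in pitch period (or peak amplitude), relative to the mean. A minimal sketch on synthetic cycle data follows; MDVP's exact variants may differ, and the toy perturbation levels are assumptions:

```python
import numpy as np

def local_jitter(periods):
    """Local jitter (%): mean absolute difference between consecutive
    pitch periods, divided by the mean period."""
    periods = np.asarray(periods, dtype=float)
    return 100.0 * np.abs(np.diff(periods)).mean() / periods.mean()

def local_shimmer(amplitudes):
    """Local shimmer (%): the same measure applied to the peak
    amplitudes of consecutive glottal cycles."""
    amps = np.asarray(amplitudes, dtype=float)
    return 100.0 * np.abs(np.diff(amps)).mean() / amps.mean()

# Toy cycle data: ~120 Hz phonation with small cycle-to-cycle perturbation.
rng = np.random.default_rng(1)
periods = (1 / 120) * (1 + rng.normal(0, 0.005, size=200))
amps = 1.0 * (1 + rng.normal(0, 0.02, size=200))

j = local_jitter(periods)   # on the order of the ~0.5-0.9% reported above
s = local_shimmer(amps)
```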


The Analysis of Electroglottographic Measures of Vowel and Sentence in Korean Healthy Adults (한국 정상 성인의 모음과 문단 산출 시 전기성문파형 측정)

  • Kim, Jae-Ock
    • Phonetics and Speech Sciences
    • /
    • v.2 no.4
    • /
    • pp.223-228
    • /
    • 2010
  • This study investigated the closed quotient and other voice quality parameters using electroglottography (EGG) while healthy Korean adults sustained the vowel /a/ and read a sentence at comfortable pitch and loudness. Seventy-two healthy adults (36 men, 36 women) aged 20~40 years were included in the study. The tasks were recorded and analyzed using Lx Speech Studio. In the vowel sustaining task, closed quotient (Qx), fundamental frequency (Fx), sound pressure level (SPL), jitter, and shimmer were measured; in the sentence reading task, closed quotient (DQx), fundamental frequency (DFx), and sound pressure level (DAx) were measured. Sex effects were observed on Qx, Fx, shimmer, DQx, and DFx: men had significantly higher Qx and DQx than women, but significantly lower shimmer, and there was no sex effect on jitter. Task effects on Qx and SPL as well as DQx and DAx were also assessed: Qx and SPL were significantly higher than DQx and DAx in both genders. This study showed that the closed quotients in both the vowel sustaining and sentence reading tasks were significantly related to other voice quality parameters. Therefore, clinicians and researchers should report voice quality parameters such as fundamental frequency, sound pressure level, jitter, and shimmer when reporting closed quotients obtained with EGG.
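The closed quotient is the fraction of each glottal cycle during which the vocal folds are in contact. One common way to estimate it from an EGG cycle is a criterion-level method: count the portion of the cycle where contact exceeds a threshold set at some fraction of the cycle's peak-to-peak amplitude. The sketch below assumes a 35% criterion and a synthetic pulse; the paper's software (Lx Speech Studio) may use a different criterion:

```python
import numpy as np

def closed_quotient(egg_cycle, criterion=0.35):
    """Closed quotient of one EGG cycle via the criterion-level method:
    the fraction of samples where contact exceeds a threshold placed at
    `criterion` of the cycle's peak-to-peak amplitude above its minimum.
    (The 35% level is an assumption; implementations differ.)"""
    x = np.asarray(egg_cycle, dtype=float)
    level = x.min() + criterion * (x.max() - x.min())
    return float(np.mean(x > level))

# Toy EGG-like cycle: a sharpened pulse giving a short closed phase.
t = np.linspace(0, 1, 1000, endpoint=False)
cycle = np.maximum(0.0, np.sin(2 * np.pi * t)) ** 3

cq = closed_quotient(cycle)   # fraction of the cycle counted as closed
```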


Executive function and Korean children's stop production

  • Eun Jong Kong;Hyunjung Lee;Jeffrey J. Holliday
    • Phonetics and Speech Sciences
    • /
    • v.15 no.3
    • /
    • pp.45-52
    • /
    • 2023
  • Previous studies have established a role for cognitive differences in explaining variability in speech processing across individuals. In the case of perceptual cue weighting in the context of a sound change, studies have produced conflicting results regarding the relationship between executive function and the use of redundant cues. The current study aimed to explore this relationship in acoustic cue weighting during speech production. Forty-one Korean-speaking children read a list of stop-initial words and completed two tests that assess executive function, i.e., Dimensional Change Card Sorting (DCCS) and digit n-back. Voice onset time (VOT) and fundamental frequency (F0) were measured in each word, and analyses were carried out to determine the extent to which children's executive function predicted their use of both informative and less informative cues to the three pairs comprising the Korean three-way stop laryngeal contrast. No evidence was found for a relationship between cognitive ability and acoustic cue weighting in production, which is at odds with previous, albeit conflicting, results for speech perception. While this result may be due to the lack of task demands in the production task used here, it nevertheless expands the empirical ground upon which future work in this area may proceed.
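The notion of an acoustic cue weight — how strongly VOT or f0 separates the stop categories a speaker produces — can be illustrated with a simple association measure. This sketch uses a point-biserial correlation on invented data as a stand-in for the regression-based weights typical of cue-weighting studies; the paper's actual statistical analysis is not reproduced here:

```python
import numpy as np

# Hypothetical productions for one laryngeal pair (e.g., lenis vs.
# aspirated): f0 is set up as the informative cue, VOT as the weak one.
rng = np.random.default_rng(2)
n = 100
category = rng.integers(0, 2, size=n)              # 0 = lenis, 1 = aspirated
f0 = 180 + 40 * category + rng.normal(0, 10, n)    # well-separated cue
vot = 70 + 5 * category + rng.normal(0, 15, n)     # weakly separated cue

def cue_weight(cue, cat):
    """Point-biserial correlation between a cue and the binary category:
    a simple proxy for how much work the cue does in the contrast."""
    return abs(np.corrcoef(cue, cat)[0, 1])

w_f0 = cue_weight(f0, category)
w_vot = cue_weight(vot, category)
```

A relationship with executive function would then be tested by regressing such weights (per child) on DCCS or n-back scores, which is where the study reports a null result.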

Two Simultaneous Speakers Localization using harmonic structure (하모닉 구조를 이용한 두 명의 동시 발화 화자의 위치 추정)

  • Kim, Hyun-Kyung;Lim, Sung-Kil;Lee, Hyon-Soo
    • Proceedings of the KSPS conference
    • /
    • 2005.11a
    • /
    • pp.121-124
    • /
    • 2005
  • In this paper, we propose a sound localization algorithm for two simultaneous speakers. Because speech is a wide-band signal, there are many frequency sub-bands in which the two speech sounds are mixed. However, in some sub-bands one speech sound is more dominant than the other, and in such sub-bands the dominant speech sound suffers little interference from the other speech or from noise. In speech, the overtones of the fundamental frequency have large amplitudes; this is called the 'harmonic structure' of speech. Sub-bands that belong to the harmonic structure are more likely to be dominant. Therefore, the proposed localization algorithm is based on the harmonic structure of each speaker: first, the sub-bands belonging to the harmonic structure of each speech signal are selected; then the two speakers are localized using the selected sub-bands. Simulation results show that localization using the selected sub-bands is more efficient and precise than localization using all sub-bands.
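The first step, selecting the sub-bands that belong to each speaker's harmonic structure, can be sketched as picking FFT bins near multiples of each fundamental. The tolerance, FFT size, and fundamentals below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def harmonic_bins(f0, n_harmonics, fs, n_fft, tolerance_hz=20.0):
    """Indices of rFFT bins lying within `tolerance_hz` of any of the
    first `n_harmonics` harmonics of f0. A minimal sketch of
    harmonic-structure sub-band selection."""
    freqs = np.fft.rfftfreq(n_fft, d=1 / fs)
    harmonics = f0 * np.arange(1, n_harmonics + 1)
    near = np.abs(freqs[:, None] - harmonics[None, :]) < tolerance_hz
    return np.flatnonzero(near.any(axis=1))

# Two speakers with different fundamentals mostly occupy different bins,
# which is what lets each be localized from "its own" sub-bands.
bins_a = harmonic_bins(f0=120, n_harmonics=10, fs=8000, n_fft=1024)
bins_b = harmonic_bins(f0=210, n_harmonics=10, fs=8000, n_fft=1024)
overlap = np.intersect1d(bins_a, bins_b)
```

Bins in the overlap (where harmonics of the two voices nearly coincide) are exactly the ones where neither speaker is dominant, so a practical system would down-weight or discard them before estimating each direction of arrival.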
