• Title/Summary/Keyword: Phonetics

The Relationship Between Speech Intelligibility and Comprehensibility for Children with Cochlear Implants (조음중증도에 따른 인공와우이식 아동들의 말명료도와 이해가능도의 상관연구)

  • Heo, Hyun-Sook; Ha, Seung-Hee
    • Phonetics and Speech Sciences, v.2 no.3, pp.171-178, 2010
  • This study examined the relationship between speech intelligibility and comprehensibility in hearing-impaired children with cochlear implants. Speech intelligibility was measured by orthographic transcription of the acoustic signal at the word and sentence levels. Comprehensibility was evaluated by examining listeners' ability to answer questions about the content of a narrative. Speech samples were collected from 12 speakers (aged 6~15 years) with cochlear implants. For each speaker, four different listeners (48 listeners in total) completed two tasks: one involved making orthographic transcriptions and the other involved answering comprehension questions. The results were as follows: (1) speech intelligibility and comprehensibility scores tended to increase as severity decreased; (2) across all speakers, the relationship between speech intelligibility and comprehensibility scores was significant when severity was not taken into account, but within severity groups the relationship was significant only for the moderate-severe group. These results suggest that speech intelligibility scores obtained by orthographic transcription may not accurately reflect how well listeners comprehend the speech of children with cochlear implants; therefore, measures of both speech intelligibility and listener comprehension should be considered when evaluating the speech ability and information-bearing capability of speakers with cochlear implants.

  • PDF
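
The two measures in the entry above reduce to simple percentages per speaker, which are then correlated across speakers. A minimal sketch of that scoring and correlation, assuming word-level transcription scoring and question-level comprehension scoring; the function names and example numbers are illustrative, not data from the study:

```python
from statistics import correlation  # Pearson's r, Python 3.10+

def intelligibility(target_words, transcribed_words):
    """Percent of target words recovered in a listener's orthographic transcription."""
    hits = sum(1 for w in target_words if w in transcribed_words)
    return 100.0 * hits / len(target_words)

def comprehensibility(answers_correct, n_questions):
    """Percent of narrative comprehension questions answered correctly."""
    return 100.0 * answers_correct / n_questions

# Hypothetical per-speaker scores, each averaged over that speaker's listeners.
intellig = [82.5, 64.0, 91.3, 47.8]
comprehen = [75.0, 58.3, 88.9, 41.7]
r = correlation(intellig, comprehen)   # Pearson correlation across speakers
print(f"r = {r:.2f}")
```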

The Phonatory Characteristics of the Profound Hearing-Impaired Adults' Voice: with Reference to F0, Intensity, and their Perturbations (심도 청각장애 성인의 발성 특성: 강도, 음도, 및 그 변동율을 중심으로)

  • Choi, Eun-Ah; Park, Han-Sang; Seong, Cheol-Jae
    • Phonetics and Speech Sciences, v.1 no.4, pp.177-185, 2009
  • This study investigates differences in mean F0, intensity, jitter, and shimmer across hearing status, gender, and vowels. Twenty hearing-impaired adults and 20 normal-hearing adults serving as a control group were asked to read seven Korean vowels (/a, ʌ, o, u, ɯ, i, ɛ/). The subjects' readings were recorded with NasalView and analyzed with Praat. Results showed that mean F0 was significantly higher in the hearing-impaired group (HL) than in the normal-hearing group (NH), in the female group than in the male group, and in high vowels than in low vowels. Second, intensity was significantly higher in the normal-hearing group (NH) than in the hearing-impaired group (HL), in the male group than in the female group, and in low vowels than in high vowels. Third, jitter was significantly higher in the normal-hearing group (NH) than in the hearing-impaired group (HL), in the female group than in the male group, and in back vowels than in front vowels. Finally, shimmer was significantly higher in the normal-hearing group (NH) than in the hearing-impaired group (HL), and in the male group than in the female group. In particular, in the male group, front vowels tended to have higher shimmer than back vowels.

  • PDF
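
The four measures reported above (mean F0, intensity, jitter, shimmer) are standard Praat analyses. A rough sketch of extracting them for one sustained-vowel recording, assuming the parselmouth Python interface to Praat; the file name and pitch settings are placeholders, not values from the study:

```python
import parselmouth
from parselmouth.praat import call

snd = parselmouth.Sound("vowel_a.wav")          # placeholder file name

pitch = snd.to_pitch()
mean_f0 = call(pitch, "Get mean", 0, 0, "Hertz")         # mean F0 (Hz)

intensity = snd.to_intensity()
mean_db = call(intensity, "Get mean", 0, 0, "energy")    # mean intensity (dB)

# Jitter and shimmer are computed from the sequence of glottal pulses.
pulses = call(snd, "To PointProcess (periodic, cc)", 75, 500)
jitter_local = call(pulses, "Get jitter (local)", 0, 0, 0.0001, 0.02, 1.3)
shimmer_local = call([snd, pulses], "Get shimmer (local)",
                     0, 0, 0.0001, 0.02, 1.3, 1.6)

# Report jitter and shimmer as percentages.
print(mean_f0, mean_db, jitter_local * 100, shimmer_local * 100)
```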

A Study of the Effects of Vowels on the Production of English Labials /p, b, f, v/ by Korean Learners of English (영어학습자의 순음 /p, b, f, v/ 발성에 미치는 모음의 영향 연구)

  • Koo, Hee-San
    • Phonetics and Speech Sciences, v.2 no.3, pp.23-27, 2010
  • The purpose of this study was to find out how the English vowels /a, e, i, o, u/ affect the production of the English labials /p, b, f, v/ by Korean learners of English. Sixty syllables were constructed from the five vowels and four labials in the syllable types CV, VC, and VCV. The nonsense syllables were produced three times by nine subjects. The major results show that (1) in intervocalic position, the subjects scored higher when producing /v/ combined with /a, e, o/ and /u/, and lower when producing /p/ with /i/ and /o/; (2) in post-vocalic position, the subjects scored higher when producing /v/ and /f/ with /a, e/ and /o/, and lower when producing /b/ with /e/ and /i/; and (3) in pre-vocalic position, the subjects scored higher when producing /v/ with /e, o, u/ and /f/ with /u/, and lower when producing /b/ with /e/, /i/, and /u/. On the whole, the results suggest that Korean learners of English have considerable difficulty producing /p/ with /i/ in intervocalic position and /b/ with /i/ and /e/ in pre-vocalic position.

  • PDF
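
The design above is a 4 × 5 × 3 grid (four labials, five vowels, three syllable types) yielding 60 nonsense syllables, with production accuracy summarized per cell. A small illustrative sketch of generating the stimulus grid and averaging ratings per consonant-vowel-position cell; the data structures are hypothetical:

```python
from itertools import product
from collections import defaultdict

vowels = ["a", "e", "i", "o", "u"]
labials = ["p", "b", "f", "v"]
types = ["CV", "VC", "VCV"]          # pre-, post-, and inter-vocalic positions

stimuli = [(c, v, t) for c, v, t in product(labials, vowels, types)]
assert len(stimuli) == 60            # 4 labials x 5 vowels x 3 syllable types

# scores[(consonant, vowel, syllable_type)] -> list of 0/1 production ratings
# (e.g. 9 subjects x 3 repetitions); values would come from the raters.
scores = defaultdict(list)

def cell_accuracy(consonant, vowel, syllable_type):
    """Mean production accuracy (%) for one consonant-vowel-position cell."""
    ratings = scores[(consonant, vowel, syllable_type)]
    return 100.0 * sum(ratings) / len(ratings) if ratings else None
```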

An Acoustic Study of Korean and English Voiceless Sibilant Fricatives

  • Sung, Eun-Kyung; Cho, Yun-Jeong
    • Phonetics and Speech Sciences, v.2 no.3, pp.37-46, 2010
  • This study investigates the acoustic characteristics of English and Korean voiceless sibilant fricatives as they appear before the three vowels /i/, /ɑ/, and /u/. Three measurements - duration, center of gravity, and major spectral peak - are employed to compare the acoustic properties and vowel effects for each fricative. This study also asks whether the Korean sibilant fricatives are acoustically similar to the English voiceless alveolar fricative /s/ or to the palato-alveolar /ʃ/. The results show that in the duration of frication noise, English /ʃ/ is the longest and Korean lax /s/ the shortest of the four sounds. For center of gravity, English alveolar /s/ has the highest frequency, whereas Korean /s/ shows the lowest. In terms of major spectral peak, English /s/ shows the highest frequency, while English /ʃ/ shows the lowest. In addition, there is a strong vowel effect on the fricatives of both languages, although the two languages pattern differently. For instance, in the major spectral peak, both Korean lax /s/ and tense /s*/ show significantly higher frequencies before the vowel /ɑ/ than before the other vowels, whereas both English /s/ and /ʃ/ exhibit significantly higher frequencies before the vowel /i/ than before the other vowels. These results indicate that the Korean sibilant fricatives are acoustically distinct from both English /s/ and /ʃ/.

  • PDF
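
Two of the measures above, center of gravity and major spectral peak, can be read off the power spectrum of the frication noise, and duration follows from the segment length. A minimal numpy sketch, assuming the frication interval has already been excised as a mono signal; the windowing choice is illustrative:

```python
import numpy as np

def spectral_measures(frication, fs):
    """Duration, center of gravity, and major spectral peak of a frication segment.

    frication: 1-D numpy array with the excised frication noise
    fs:        sampling rate in Hz
    """
    windowed = frication * np.hanning(len(frication))
    spectrum = np.abs(np.fft.rfft(windowed)) ** 2           # power spectrum
    freqs = np.fft.rfftfreq(len(frication), d=1.0 / fs)

    duration = len(frication) / fs                           # seconds
    cog = np.sum(freqs * spectrum) / np.sum(spectrum)        # power-weighted mean frequency
    peak = freqs[np.argmax(spectrum)]                        # frequency of the largest peak
    return duration, cog, peak
```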

Compensation in VC and Word

  • Yun, Il-Sung
    • Phonetics and Speech Sciences, v.2 no.3, pp.81-89, 2010
  • Korean and three other languages (English, Arabic, and Japanese) were compared with regard to temporal compensation in a VC (vowel plus consonant) sequence and in the word. Korean data were collected from an experiment, and data for the other languages were taken from the literature. All test words in the four languages had the same syllabic structure, /CVCV(r)/, where C was an oral stop and the intervocalic consonants were bilabial or alveolar stops. The study found that (1) Korean is the most striking in the durational variation of its segments (the vowel and the following hetero-syllabic consonant); (2) unlike the three other languages, which show a constant sum of V and C durations, Korean yields a three-way distinction in the length of VC according to the type (lax unaspirated vs. tense unaspirated vs. tense aspirated) of the following stop consonant; (3) durational constancy is maintained up to the word level in the three other languages, but Korean word duration varies as a function of the tenseness of the intervocalic consonant; and (4) consonant duration differentiates Korean most clearly from the other languages. It is suggested that the durational difference between a lax consonant and its tense cognate(s) and the degree of compensation between V and C are determined by the phonology of each language.

  • PDF
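
The compensation question above comes down to whether vowel duration and the duration of the following consonant trade off so that their sum (and the word duration) stays roughly constant across stop types. A small sketch of that bookkeeping with placeholder durations, not the paper's measurements:

```python
# Hypothetical segment durations (ms) for /CVCV/ tokens, keyed by the type of
# the intervocalic stop; real values would come from acoustic annotation.
tokens = {
    "lax unaspirated":   {"V1": 120, "C2": 60},
    "tense unaspirated": {"V1": 95,  "C2": 110},
    "tense aspirated":   {"V1": 90,  "C2": 100},
}

for stop_type, seg in tokens.items():
    vc_sum = seg["V1"] + seg["C2"]
    print(f"{stop_type:18s} V={seg['V1']:3d} ms  C={seg['C2']:3d} ms  V+C={vc_sum} ms")

# Full compensation would show (near-)equal V+C sums across stop types;
# the study reports a three-way split in Korean instead.
```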

Effective Combination of Temporal Information and Linear Transformation of Feature Vector in Speaker Verification (화자확인에서 특징벡터의 순시 정보와 선형 변환의 효과적인 적용)

  • Seo, Chang-Woo; Zhao, Mei-Hua; Lim, Young-Hwan; Jeon, Sung-Chae
    • Phonetics and Speech Sciences, v.1 no.4, pp.127-132, 2009
  • The feature vectors used in conventional speaker recognition (SR) systems may be strongly correlated with their neighbors. To improve SR performance, many researchers have adopted linear transformation methods such as principal component analysis (PCA). In general, the linear transformation is applied to the concatenation of the static features and their dynamic features. However, a linear transformation based on both the static and dynamic features is more complex than one based on the static features alone because of the higher dimensionality of the feature vectors. To overcome this problem, we propose an efficient method that applies the linear transformation and the temporal information of the features so as to reduce complexity and improve performance in speaker verification (SV). The proposed method first applies a linear transformation using PCA coefficients; the delta parameters carrying temporal information are then computed from the transformed features. The proposed method requires only one quarter of the covariance matrix size needed when PCA coefficients are computed on the concatenated static and dynamic features. In addition, the delta parameters are extracted from the linearly transformed features after the dimensionality of the static features has been reduced. Compared with the PCA-based and conventional methods in terms of equal error rate (EER) in SV, the proposed method shows better performance while requiring less storage space and lower complexity.

  • PDF
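
The pipeline above applies PCA to the static features first and only then derives the delta (temporal) parameters from the reduced features; with d-dimensional static features the covariance matrix is d × d, versus (2d) × (2d) = 4d² when static and delta features are concatenated before PCA, which is the one-quarter storage claim. A rough numpy sketch under those assumptions; the dimensions and variable names are illustrative:

```python
import numpy as np

def pca_transform(static, n_components):
    """Fit PCA on static feature vectors (frames x dim) and project them."""
    mean = static.mean(axis=0)
    centered = static - mean
    cov = np.cov(centered, rowvar=False)        # d x d, not (2d) x (2d)
    eigvals, eigvecs = np.linalg.eigh(cov)
    top = eigvecs[:, np.argsort(eigvals)[::-1][:n_components]]
    return centered @ top

def delta(features, width=2):
    """Standard regression-based delta (temporal) parameters over frames."""
    padded = np.pad(features, ((width, width), (0, 0)), mode="edge")
    num = sum(k * (padded[width + k:len(features) + width + k]
                   - padded[width - k:len(features) + width - k])
              for k in range(1, width + 1))
    return num / (2 * sum(k * k for k in range(1, width + 1)))

static = np.random.randn(300, 24)               # placeholder: 300 frames of 24-dim features
reduced = pca_transform(static, n_components=16)
feats = np.hstack([reduced, delta(reduced)])    # final observation vectors for the SV model
```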

A Study on the Declination According to Length of Utterance, Clause Boundary and Focus in Korean (한국어의 발화 길이 및 절 경계와 초점에 의한 점진하강(declination) 연구)

  • Kwak, Sook-Young
    • Phonetics and Speech Sciences, v.2 no.3, pp.11-22, 2010
  • The present study investigates declination in Korean and how it relates to utterance length, clause boundaries, and focus. More specifically, I examine the relation of declination to utterance length, the resetting of declination at clause boundaries, and the effect of focus on declination. Results showed that utterance length had no effect on the first and last pitch values of the utterance, which remained consistent regardless of length. However, the declination slope became relatively gentle from the fourth accentual phrase to the end of the intonational phrase. Declination was reset in such a way that the first pitch of the second phrase was always lower than that of the first phrase, but when the utterance consisted of three phrases the first pitch of the third phrase was not always lower than that of the second. Finally, the pitch values of focused words decreased the later they occurred in the sentence. A single declination line was formed in focused utterances, but in utterances containing a clause boundary a new declination line was formed at the start of each new clause. These findings can be applied to the development of a Korean speech synthesizer with natural prosody, and they can also be used for teaching Korean prosody.

  • PDF
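
The declination slope discussed above is essentially a line fitted over pitch values against time within an utterance, and a reset is a comparison of phrase-initial pitch across adjacent phrases. A minimal sketch with placeholder accentual-phrase measurements, not values from the study:

```python
import numpy as np

# Placeholder per-accentual-phrase measurements for one utterance:
# (AP start time in s, first F0 of the AP in Hz). Real values would come
# from a pitch track, e.g. extracted in Praat.
ap_initial = np.array([[0.20, 245.0], [0.95, 230.0], [1.70, 221.0]])

t, f0 = ap_initial[:, 0], ap_initial[:, 1]
slope, intercept = np.polyfit(t, f0, 1)   # least-squares declination line (Hz per second)
print(f"declination slope: {slope:.1f} Hz/s")

# Phrase-to-phrase comparison of AP-initial pitch, as examined in the study:
# a negative difference means the next phrase starts lower than the previous one.
print(np.diff(f0))
```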

Intonation Patterns of Korean Spontaneous Speech (한국어 자유 발화 음성의 억양 패턴)

  • Kim, Sun-Hee
    • Phonetics and Speech Sciences, v.1 no.4, pp.85-94, 2009
  • This paper investigates the intonation patterns of Korean spontaneous speech through an analysis of four dialogues in the travel-planning domain. The speech corpus, a subset of the spontaneous speech database recorded and distributed by ETRI, was labeled with accentual phrases (APs) and intonational phrases (IPs) based on the K-ToBI system, using Momel, an intonation stylization algorithm. It was found that, unlike in English, a significant number of APs and IPs include hesitation lengthening, which is known to be a disfluency phenomenon arising from speech planning. This paper also argues that hesitation lengthening differs from IP-final lengthening and should be treated as a category of its own, as it greatly affects the intonation patterns of the language. Apart from the 19.09% of APs that show hesitation lengthening, the spontaneous speech exhibits the same AP patterns as read speech, but with a higher frequency of falling patterns such as LHL, whereas read speech shows more LH and LHLH patterns. The IP boundary tones of spontaneous speech show the same five patterns as read speech (L%, HL%, LHL%, H%, LH%), but with a higher frequency of rising patterns (H%, LH%) and contour tones (HL%, LH%, LHL%), whereas read speech shows a higher frequency of falling patterns and simple tones at the end of IPs.

  • PDF
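
The percentages reported above are frequency counts over boundary-tone labels. A small illustrative sketch of tallying IP boundary tones from a list of K-ToBI-style labels; the labels below are placeholders, not the corpus data:

```python
from collections import Counter

# Placeholder IP boundary-tone labels; in practice these would be read from
# the boundary-tone tier of the labeled corpus.
ip_boundary_tones = ["L%", "HL%", "H%", "LH%", "L%", "LHL%", "H%", "LH%"]

counts = Counter(ip_boundary_tones)
total = sum(counts.values())
for tone, n in counts.most_common():
    print(f"{tone:5s} {100.0 * n / total:5.1f}%")

rising = 100.0 * (counts["H%"] + counts["LH%"]) / total                     # rising patterns
contour = 100.0 * (counts["HL%"] + counts["LH%"] + counts["LHL%"]) / total  # contour tones
```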

Acoustic Analysis with Moving Window in Normal and Pathologic Voices

  • Choi, Seong-Hee; Lee, Ji-Yeoun; Jiang, Jack J.
    • Phonetics and Speech Sciences, v.2 no.3, pp.165-170, 2010
  • In this study, the most stable portion of sustained /a/ phonation was identified with a 5% moving window in normal and pathologic voice signals, and perturbation values were compared between normal and pathologic voices at the mid-point of the signal and at the most stable portion found by the moving window. The results revealed that some severely pathologic voice signals become eligible for perturbation analysis once the most stable portion with Err less than 10 is identified. In addition, the perturbation parameters did not differentiate the pathologic voice signals from the normal voice signals when the mid-point was selected for perturbation analysis (p > 0.05). However, significantly higher %shimmer and lower SNR values were observed in pathologic voices (p < 0.05) when the most stable portion was selected by the moving window. In conclusion, the moving window can objectively identify the most stable portion, which yields the minimum perturbation values (%jitter, %shimmer) and the maximum SNR. Thus, the moving-window technique can be applied for more reliable and accurate acoustic perturbation analysis.

  • PDF
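
The moving-window procedure above can be sketched as: slide a window covering 5% of the phonation, compute a stability measure inside each window, and keep the window where it is smallest. The sketch below uses local %jitter over pitch-period estimates as the stability criterion; this criterion and the function names are assumptions for illustration, and the study's own Err measure is not reproduced:

```python
import numpy as np

def percent_jitter(periods):
    """Local jitter (%): mean absolute difference of consecutive periods / mean period."""
    periods = np.asarray(periods, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(periods))) / np.mean(periods)

def most_stable_window(periods, frac=0.05):
    """Slide a window covering `frac` of the pitch periods; return the start index
    and %jitter of the window with the smallest %jitter."""
    n = len(periods)
    win = max(3, int(round(frac * n)))
    best_start, best_jit = 0, float("inf")
    for start in range(0, n - win + 1):
        jit = percent_jitter(periods[start:start + win])
        if jit < best_jit:
            best_start, best_jit = start, jit
    return best_start, best_jit

# `periods` would come from the pulse marks of the sustained /a/ (e.g. via Praat).
```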

The Effects of Misalignment between Syllable and Word Onsets on Word Recognition in English (음절의 시작과 단어 시작의 불일치가 영어 단어 인지에 미치는 영향)

  • Kim, Sun-Mi; Nam, Ki-Chun
    • Phonetics and Speech Sciences, v.1 no.4, pp.61-71, 2009
  • This study investigates whether the misalignment between syllable onsets and word onsets caused by resyllabification affects Korean-English late bilinguals' perception of English continuous speech. Two word-spotting experiments were conducted. In Experiment 1, misalignment (resyllabified) conditions were created by adding CVC contexts before vowel-initial words, and alignment (non-resyllabified) conditions were created by placing the same CVC contexts before consonant-initial words. The results of Experiment 1 showed that targets in the alignment conditions were detected faster and more accurately than those in the misalignment conditions. Experiment 2 was conducted to rule out the possibility that the results of Experiment 1 arose because consonant-initial words are easier to recognize than vowel-initial words; all experimental stimuli in Experiment 2 were therefore vowel-initial words preceded by either CVC or CV contexts. Experiment 2 also showed a misalignment cost when words were recognized in resyllabified conditions. These results indicate that Korean listeners are influenced by the misalignment between syllable and word onsets triggered by resyllabification when recognizing words in English connected speech.

  • PDF
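
The misalignment cost reported above is a difference in detection latencies (and error rates) between the alignment and misalignment conditions. A minimal sketch of that comparison with a paired t-test over matched items, assuming scipy is available; the reaction times below are placeholders, not the experimental data:

```python
import numpy as np
from scipy import stats

# Placeholder per-target mean reaction times (ms) for matched items.
rt_aligned = np.array([612.0, 655.0, 590.0, 640.0, 628.0])
rt_misaligned = np.array([668.0, 702.0, 655.0, 690.0, 671.0])

t_stat, p_value = stats.ttest_rel(rt_misaligned, rt_aligned)  # paired comparison
cost = np.mean(rt_misaligned - rt_aligned)                    # misalignment cost in ms
print(f"mean cost = {cost:.0f} ms, t = {t_stat:.2f}, p = {p_value:.3f}")
```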