• Title/Summary/Keyword: Speech Onset


Voice onset time in English and Korean stops with respect to a sound change

  • Kim, Mi-Ryoung
    • Phonetics and Speech Sciences / v.13 no.2 / pp.9-17 / 2021
  • Voice onset time (VOT) is known to be a primary acoustic cue that differentiates voiced from voiceless stops in the world's languages. While much attention has been given to the sound change of Korean stops, little attention has been given to that of English stops. This study examines the VOT of stop consonants produced by English speakers in comparison to Korean speakers, to see whether there is any VOT change in English stops and how the effects of stop, place, gender, and individual on VOT differ cross-linguistically. A total of 24 native speakers (11 Americans and 13 Koreans) participated in the experiment. The results showed that, for Korean, the VOT merger of lax and aspirated stops was replicated and that, for English, voiced stops became initially devoiced and voiceless stops became heavily aspirated. English voiceless stops had longer VOTs than their Korean counterparts. The results suggest that, like Korean stops, English stops may also be undergoing a sound change. Since this is the first study to report such a change, more convincing evidence is needed.
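VOT itself is simply the interval between the stop release burst and the onset of voicing. A minimal sketch, assuming hand-labeled burst and voicing-onset times (the function names, timestamps, and the 35 ms short-lag/long-lag cutoff here are illustrative conventions, not values from the study):

```python
# Minimal sketch: VOT as the interval between stop release (burst)
# and voicing onset, with a conventional three-way descriptive split.
# Times are hypothetical hand-labeled annotations in seconds.

def vot_ms(burst_time_s: float, voicing_onset_s: float) -> float:
    """VOT in milliseconds; negative values indicate prevoicing."""
    return (voicing_onset_s - burst_time_s) * 1000.0

def classify_vot(vot: float) -> str:
    """Coarse split often used descriptively:
    prevoiced (<0 ms), short-lag (0-35 ms), long-lag (>35 ms)."""
    if vot < 0:
        return "prevoiced"
    return "short-lag" if vot <= 35 else "long-lag"

# Hypothetical tokens: (burst time, voicing onset time) in seconds
tokens = {"b": (0.100, 0.108), "p_aspirated": (0.100, 0.172)}
for label, (burst, onset) in tokens.items():
    v = vot_ms(burst, onset)
    print(label, round(v, 1), classify_vot(v))
```

Cross-linguistic comparisons like the one above then reduce to comparing these per-token values across speaker groups.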

The Effects of Syllable Boundary Ambiguity on Spoken Word Recognition in Korean Continuous Speech

  • Kang, Jinwon;Kim, Sunmi;Nam, Kichun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.6 no.11 / pp.2800-2812 / 2012
  • The purpose of this study was to examine the syllable-word boundary misalignment cost in word segmentation in Korean continuous speech. Previous studies have demonstrated the important role of syllabification in speech segmentation. The current study investigated whether the resyllabification process affects word recognition in Korean continuous speech. In Experiment I, under the misalignment condition, participants were presented with stimuli in which a word-final consonant became the onset of the next syllable (e.g., /k/ in belsak ingan becomes the onset of the first syllable of ingan 'human'). In the alignment condition, they heard stimuli in which a word-final vowel was also the final segment of the syllable (e.g., /eo/ in heulmeo ingan is the end of both the syllable and the word). The results showed that word recognition was faster and more accurate in the alignment condition. Experiment II aimed to confirm that the results of Experiment I were attributable to the resyllabification process by comparing only the target words from each condition. The results of Experiment II supported the findings of Experiment I. Based on these results, we confirm that Korean, a syllable-timed language, incurs a misalignment cost from resyllabification.

Relationship between executive function and cue weighting in Korean stop perception across different dialects and ages

  • Kong, Eun Jong;Lee, Hyunjung
    • Phonetics and Speech Sciences / v.13 no.3 / pp.21-29 / 2021
  • The present study investigated how cognitive resources are related to speech perception by examining Korean speakers' executive function (EF) capacity and its association with voice onset time (VOT) and f0 sensitivity in identifying Korean stop laryngeal categories (/t'/ vs. /t/ vs. /th/). Previously, Kong et al. (under revision) reported that Korean listeners (N = 154) in Seoul and Changwon (Gyeongsang) showed differential group patterns in dialect-specific cue weightings across educational institutions (college, high school, and elementary school). We follow up on that study by relating the listeners' EF control (working memory, mental flexibility, and inhibition) to their speech perception patterns, to examine whether better cognitive ability supports control of attention across multiple acoustic dimensions. Partial correlation analyses revealed that better EFs in Korean listeners were associated with greater sensitivity to available acoustic details and with greater suppression of irrelevant acoustic information across subgroups, although only a small set of EF components turned out to be relevant. Unlike Seoul participants, Gyeongsang listeners' f0 use was not correlated with any EF task scores, reflecting dialect-specific cue primacy in which f0 serves as a secondary cue. The findings confirm the link between speech perception and general cognitive ability, providing experimental evidence from Korean listeners.
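The partial correlation used in such analyses removes the linear effect of a covariate before correlating two variables. A minimal sketch of a first-order partial correlation via the standard formula r_xy·z = (r_xy − r_xz·r_yz) / √((1 − r_xz²)(1 − r_yz²)); the EF scores, cue weights, and ages below are hypothetical, not the study's data:

```python
# First-order partial correlation: relate an executive-function score to
# a perceptual cue weight while controlling for a covariate (e.g., age).
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def partial_corr(x, y, z):
    """r_xy.z: correlation of x and y with z partialled out."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz**2) * (1 - ryz**2))

ef_score  = [12, 15, 9, 20, 17, 11]            # e.g., working-memory span
f0_weight = [0.3, 0.5, 0.2, 0.7, 0.6, 0.35]    # perceptual f0 cue weight
age       = [8, 11, 7, 19, 16, 9]
print(round(partial_corr(ef_score, f0_weight, age), 3))
```

Controlling for age matters here precisely because the subgroups span elementary school to college.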

2.4kbps Speech Coding Algorithm Using the Sinusoidal Model (정현파 모델을 이용한 2.4kbps 음성부호화 알고리즘)

  • Baek, Seong-Ki;Bae, Keun-Sung
    • The Journal of Korean Institute of Communications and Information Sciences / v.27 no.3A / pp.196-204 / 2002
  • Sinusoidal Transform Coding (STC) is a vocoding scheme based on a sinusoidal model of the speech signal. Low-bit-rate speech coding based on the sinusoidal model represents and synthesizes speech with the fundamental frequency and its harmonics, the spectral envelope, and phase in the frequency domain. In this paper, we propose a 2.4 kbps low-rate speech coding algorithm using the sinusoidal model of the speech signal. In the proposed coder, the pitch frequency is estimated by choosing the frequency that minimizes the mean squared error between speech synthesized with all spectral peaks and speech synthesized with the chosen frequency and its harmonics. The spectral envelope is estimated using the SEEVOC (Spectral Envelope Estimation VOCoder) algorithm and the discrete all-pole model. The phase information is obtained using the time of pitch pulse occurrence, i.e., the onset time, as well as the phase of the vocal tract system. Experimental results show that the synthetic speech preserves both the formant and phase information of the original speech very well. The performance of the coder was evaluated with an MOS test based on informal listening tests, achieving an MOS score of over 3.1.
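The core of the pitch estimation described above is harmonic matching: each candidate f0 is scored by how well its harmonic grid explains the measured spectral peaks. A simplified sketch of that idea, using an amplitude-weighted frequency error plus a missing-harmonic penalty instead of the paper's full synthetic-speech MSE (all peak data and parameters below are hypothetical):

```python
# Harmonic-matching pitch estimation sketch: score each candidate f0 by
# (a) how close every measured spectral peak lies to its nearest harmonic
# and (b) a penalty for predicted harmonics with no nearby peak, which
# suppresses sub-octave errors.

def pitch_by_harmonic_match(peaks, f0_min=60.0, f0_max=400.0,
                            step=1.0, tol=20.0, miss_penalty=1.0):
    """peaks: list of (frequency_hz, amplitude). Returns estimated f0 in Hz."""
    top = max(f for f, _ in peaks)
    best_f0, best_err = None, float("inf")
    f0 = f0_min
    while f0 <= f0_max:
        err = 0.0
        for freq, amp in peaks:
            k = max(1, round(freq / f0))          # nearest harmonic index
            err += amp * ((freq - k * f0) / f0) ** 2
        k = 1
        while k * f0 <= top + tol:                # penalize unexplained harmonics
            if not any(abs(f - k * f0) <= tol for f, _ in peaks):
                err += miss_penalty
            k += 1
        if err < best_err:
            best_f0, best_err = f0, err
        f0 += step
    return best_f0

# Hypothetical voiced-frame peaks near harmonics of 150 Hz
peaks = [(151.0, 1.0), (299.0, 0.8), (452.0, 0.6), (600.0, 0.4)]
est = pitch_by_harmonic_match(peaks)
print(est)
```

The missing-harmonic penalty is the design choice worth noting: without it, every submultiple of the true f0 fits the peaks equally well, so the search would return an octave-down error.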

Word-final Coda Acquisition by English-Speaking Children with Cochlear Implants

  • Kim, Jung-Sun
    • Phonetics and Speech Sciences / v.3 no.4 / pp.23-31 / 2011
  • This paper examines the production patterns of coda consonant acquisition in monosyllabic words by English-speaking children with cochlear implants. The data come from the transcribed speech of children with cochlear implants. This study poses three questions. First, do children with cochlear implants acquire onset consonants earlier than codas? Second, do children's productions show a bimoraic-sized constraint that maintains binary feet? Third, what patterns emerge in the production of coda consonants? The results revealed that children with cochlear implants acquire onset consonants earlier than codas. With regard to the bimoraic-sized constraint, productions were more accurate for monomoraic vowels than for bimoraic ones in some children with cochlear implants, although vowel-production accuracy was high regardless of vowel type. Coda production varied across individuals: some children most frequently produced less sonorant consonants, while others produced more sonorant ones. These results were similar to those reported for children with normal hearing. In the process of coda consonant acquisition, the error patterns of prosody-sensitive production may be regarded as articulatory challenges in producing higher-level prosodic structures.


Phonetic Contrasts of One-syllable Words and Speech Intelligibility in Hearing-impaired Adults (청각장애 성인의 일음절 낱말대조 명료도 특성)

  • Do Yeonji;Kim Soojin
    • Proceedings of the KSPS conference / 2003.10a / pp.121-124 / 2003
  • The purpose of this study was to describe the characteristics of phonetic contrasts of one-syllable words and speech intelligibility in hearing-impaired adults. Seven hearing-impaired subjects participated in the experiment (2 males, 5 females). The test materials were 77 pairs of one-syllable words with phonetic contrasts. The results were as follows: (1) The average intelligibility score (scored accuracy) was highest for onset-feature contrasts. (2) The percentages of errors (excluding combinations of contrasts) were highest for articulatory manner contrasts of the onset, tongue height contrasts of the nucleus, and articulatory place contrasts of the coda, respectively.


Detection and Synthesis of Transition Parts of The Speech Signal

  • Kim, Moo-Young
    • The Journal of Korean Institute of Communications and Information Sciences / v.33 no.3C / pp.234-239 / 2008
  • For efficient coding and transmission, the speech signal can be classified into three distinctive classes: voiced, unvoiced, and transition. At low bit rates below 4 kbit/s, conventional sinusoidal transform coders synthesize high-quality speech for the purely voiced and unvoiced classes, but not for the transition class. The transition class, which includes plosive sounds and abrupt voiced onsets, lacks periodicity and is therefore often classified and synthesized as unvoiced. In this paper, an efficient algorithm for transition class detection is proposed, which demonstrates superior detection performance not only for clean speech but also for noisy speech. For detected transition frames, phase information is transmitted instead of magnitude information for speech synthesis. Listening tests showed that the proposed algorithm produces better speech quality than the conventional one.
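The abstract does not spell out the proposed detector, so the following is only a classic illustration of the three-way framing it relies on: frames labeled voiced / unvoiced / transition from short-time energy and zero-crossing rate, with an abrupt energy jump marking a transition frame. All thresholds and signals are hypothetical:

```python
# Classic frame classification sketch (not the paper's algorithm):
# energy + zero-crossing rate (ZCR), with a sudden energy jump
# relative to the previous frame flagged as a transition (e.g., a
# plosive burst or abrupt voiced onset).
import math

def frame_features(frame):
    energy = sum(s * s for s in frame) / len(frame)
    zcr = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0) / len(frame)
    return energy, zcr

def classify_frames(frames, e_voiced=0.01, zcr_unvoiced=0.3, jump=4.0):
    labels, prev_e = [], None
    for frame in frames:
        e, zcr = frame_features(frame)
        if prev_e is not None and prev_e > 0 and e / prev_e > jump:
            labels.append("transition")   # abrupt onset
        elif e > e_voiced and zcr < zcr_unvoiced:
            labels.append("voiced")       # strong, low-ZCR frame
        elif zcr >= zcr_unvoiced:
            labels.append("unvoiced")     # noisy, high-ZCR frame
        else:
            labels.append("silence")
        prev_e = e
    return labels

# Hypothetical 20 ms frames at 8 kHz: quiet noise, then a loud 100 Hz tone
quiet = [0.001 * (-1) ** n for n in range(160)]
tone = [math.sin(math.pi * n / 40) for n in range(160)]
labels = classify_frames([quiet, tone, tone])
print(labels)
```

The point of the classification, as the abstract notes, is that each class can then be coded differently (e.g., sending phase rather than magnitude for transition frames).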

Durational Interaction of Stops and Vowels in English and Korean Child-Directed Speech

  • Choi, Han-Sook
    • Phonetics and Speech Sciences / v.4 no.2 / pp.61-70 / 2012
  • The current study observes the durational interaction of tautosyllabic consonants and vowels in the word-initial position of English and Korean child-directed speech (CDS). The effect of phonological laryngeal contrasts in stops on the following vowel's duration, and the effect of intrinsic vowel duration on the release duration of the preceding stop, are explored alongside the acoustic realization of the contrastive segments in different prosodic contexts (phrase-initial/medial, focal accented/non-focused) in the marked speech style of CDS. A trade-off relationship between voice onset time (VOT), as consonant release duration, and voicing phonation time, as vowel duration, reported for adult-to-adult speech, and patterns of durational variability are investigated in the CDS of two languages with different linguistic rhythms, under systematically controlled prosodic contexts. Speech data were collected from four native English mothers and four native Korean mothers talking to their infants at the one-word stage. In addition to the acoustic measurements, the transformed delta measure is employed as a variability index for individual tokens. Results confirm the durational correlation between prevocalic consonants and following vowels. The interaction shows a compensatory pattern, with longer VOTs followed by shorter vowel durations in both languages. An asymmetry is found in the CV interaction: the effect of the consonant on vowel duration is greater than the VOT differences induced by the vowel. Prosodic effects are found such that the acoustic difference between the contrastive segments is enhanced under focal accent, supporting a paradigmatic strengthening effect. Positional variation, however, shows no systematic effect on the measured acoustic quantities. Overall vowel duration and syllable duration are longer in the English tokens but involve less variability across the prosodic variations. The constancy of syllable duration, therefore, is not found to be more strongly sustained in Korean CDS. The stylistic variation is discussed in relation to the listener undergoing linguistic development in CDS.
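The "transformed delta measure" is not defined in the abstract; the following only sketches the family it belongs to: interval-duration variability computed as a standard deviation (the delta measure of the rhythm-metrics literature) and its rate-normalized Varco-style variant (SD/mean × 100). The durations are hypothetical vowel intervals in milliseconds:

```python
# Duration variability indices: population standard deviation (delta)
# and its rate-normalized variant (Varco = 100 * SD / mean).
import math

def delta(durations):
    n = len(durations)
    mean = sum(durations) / n
    return math.sqrt(sum((d - mean) ** 2 for d in durations) / n)

def varco(durations):
    mean = sum(durations) / len(durations)
    return 100.0 * delta(durations) / mean

vowels_ms = [80, 120, 95, 150, 70]   # hypothetical vowel durations
print(round(delta(vowels_ms), 1), round(varco(vowels_ms), 1))
```

Rate normalization is the usual reason for transforming the raw delta: slower overall speech inflates raw duration differences, which matters when comparing a deliberately slow register such as CDS across languages.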

The Correlation between Speech Intelligibility and Acoustic Measurements in Children with Speech Sound Disorders (말소리장애 아동의 말명료도와 음향학적 측정치 간 상관관계)

  • Kang, Eunyeong
    • Journal of The Korean Society of Integrative Medicine / v.6 no.4 / pp.191-206 / 2018
  • Purpose: This study investigated the correlation between speech intelligibility and acoustic measurements of speech sounds produced by children with speech sound disorders and children without any diagnosed speech sound disorder. Methods: A total of 60 children with and without speech sound disorders were the subjects of this study. Speech samples were obtained by having the subjects speak meaningful words. Acoustic measurements were analyzed on a spectrogram using the Multi-Speech 3700 program. Speech intelligibility was determined according to a listener's perceptual judgment. Results: Children with speech sound disorders had significantly lower speech intelligibility than those without. The intensity of the vowel /u/, the duration of the vowel /ɯ/, and the second formant of the vowel /ɯ/ were significantly different between the groups. There was no difference in voice onset time between the groups. There was a correlation between the acoustic measurements and speech intelligibility. Conclusion: The results showed that the speech intelligibility of children with speech sound disorders was affected by intensity, word duration, and formant frequency. In clinical settings, it is useful to complement the evaluation of speech intelligibility with acoustic measurements.

Intonational Pattern Frequency of Seoul Korean and Its Implication to Word Segmentation

  • Kim, Sa-Hyang
    • Speech Sciences / v.15 no.2 / pp.21-30 / 2008
  • The current study investigated distributional properties of the Korean Accentual Phrase (AP) and their implications for word segmentation. The properties examined were the frequency of various AP tonal patterns, the types of tonal patterns imposed on content words, and the average number and temporal location of content words within the AP. A total of 414 sentences from the Read speech corpus and the Radio corpus were used for the data analysis. The results showed that 84% of the APs contained one content word, and that almost 90% of the content words were located in AP-initial position. When the AP-initial onset was not an aspirated or tense consonant, the most common AP patterns were LH, LHH, and LHLH (78%), and 88% of the multisyllabic content words started with a rising tone in AP-initial position. When the AP-initial onset was an aspirated or tense consonant, the most common AP patterns were HH, HHLH, and HHL (72%), and 74% of the multisyllabic content words started with a level H tone in AP-initial position. The data further showed that 84.1% of APs ended with a final H tone. The findings provide valuable information about the prosodic pattern and structure of Korean APs, and account for the results of a previous study showing that Korean listeners are sensitive to AP-initial rising and AP-final high tones (Kim, 2007). This is in line with other cross-linguistic research revealing a correlation between prosodic probability and speech processing strategy.
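The distributional analysis described above amounts to counting relative frequencies of AP tonal patterns over a labeled corpus. A minimal sketch; the tiny corpus here is hypothetical, whereas real input would be AP tone labels from a K-ToBI-style annotation:

```python
# Count relative frequencies of AP tonal patterns in a labeled corpus.
from collections import Counter

def pattern_percentages(ap_labels):
    """Map each tonal pattern to its percentage of all APs,
    ordered from most to least frequent."""
    counts = Counter(ap_labels)
    total = sum(counts.values())
    return {p: 100.0 * c / total for p, c in counts.most_common()}

# Hypothetical AP tone labels
corpus = ["LH", "LHH", "LH", "LHLH", "HH", "LH", "LHH", "HHL"]
print(pattern_percentages(corpus))
```

Splitting the same counts by the laryngeal type of the AP-initial onset, as the study does, would just mean running this over two label subsets.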
