• Title/Summary/Keyword: speech factors

The Interlanguage Speech Intelligibility Benefit for Listeners (ISIB-L): The Case of English Liquids

  • Lee, Joo-Kyeong;Xue, Xiaojiao
    • Phonetics and Speech Sciences / v.3 no.1 / pp.51-65 / 2011
  • This study investigates the interlanguage speech intelligibility benefit for listeners (ISIB-L), examining Chinese talkers' production of English liquids and its perception by native listeners and by non-native Chinese and Korean listeners. An accent judgment task was conducted to measure the non-native talkers' and listeners' phonological proficiency, and two proficiency groups (high and low) participated in the experiment. The English liquids /l/ and /r/ produced by Chinese talkers were examined in terms of position (syllable-initial and syllable-final), context (segment, word, and sentence), and lexical density (minimal vs. non-minimal pair) to see whether these factors play a role in ISIB-L. Results showed that both matched and mismatched interlanguage speech intelligibility benefits for listeners occurred except for initial /l/. Non-native Chinese and Korean listeners, though only those with high proficiency, were more accurate at identifying initial /r/, final /l/, and final /r/, whereas initial /l/ was significantly more intelligible to native listeners than to non-native listeners. There was evidence of contextual and lexical density effects on ISIB-L. No ISIB-L was demonstrated in the sentence context, but both matched and mismatched ISIB-L were observed in the word context; this finding held true only for high-proficiency listeners. Listeners recognized the targets better in the non-minimal-pair (sparse density) environment than in the minimal-pair (higher density) environment. These findings suggest that ISIB-L for English liquids is influenced by talkers' and listeners' proficiency, syllable position in association with L1 and L2 phonological structure, context, and word neighborhood density.

The Role of Cognitive Control in Tinnitus and Its Relation to Speech-in-Noise Performance

  • Tai, Yihsin;Husain, Fatima T.
    • Journal of Audiology & Otology / v.23 no.1 / pp.1-7 / 2019
  • Self-reported difficulties in speech-in-noise (SiN) recognition are common among tinnitus patients. Whereas the hearing impairment that usually co-occurs with tinnitus can explain such difficulties, recent studies suggest that tinnitus patients with normal hearing sensitivity still show decreased SiN understanding, indicating that SiN difficulties cannot be attributed solely to changes in hearing sensitivity. In fact, cognitive control, which refers to a variety of top-down processes that human beings use to complete their daily tasks, has been shown to be critical for SiN recognition, as well as key to understanding the cognitive inefficiencies caused by tinnitus. In this article, we review studies investigating the association between tinnitus and cognitive control using behavioral and brain-imaging assessments, as well as those examining the effect of tinnitus on SiN recognition. In addition, three factors that can affect cognitive control in tinnitus patients, namely hearing sensitivity, age, and severity of tinnitus, are discussed to elucidate the association among tinnitus, cognitive control, and SiN recognition. Although a central or cognitive involvement has always been postulated in the observed SiN impairments of tinnitus patients, there is as yet no direct evidence to underpin this assumption, as few studies have addressed both SiN performance and cognitive control in a single tinnitus cohort. Future studies should aim to incorporate SiN tests with various subjective and objective methods that evaluate cognitive performance, in order to better understand the relationship between SiN difficulties and cognitive control in tinnitus patients.

A study on the voiceless plosives from the English and Korean spontaneous speech corpus (영어와 한국어 자연발화 음성 코퍼스에서의 무성 파열음 연구)

  • Yoon, Kyuchul
    • Phonetics and Speech Sciences / v.11 no.4 / pp.45-53 / 2019
  • The purpose of this work was to examine the factors affecting the identities of the voiceless plosives, i.e., English [p, t, k] and Korean [ph, th, kh], in spontaneous speech corpora. The factors were extracted automatically by a Praat script, and the percent correctness of the discriminant analyses was assessed incrementally by increasing the number of factors used to predict the identities of the plosives. The factors included the spectral moments and tilts of the plosive release bursts, the post-burst aspirations, and the vowel onsets; durations such as the closure durations and voice onset times (VOTs); the locations within words and utterances; and the identities of the following vowels. The results showed that as the number of factors increased up to five, so did the percent correctness of the analyses, reaching 74.6% for English and 66.4% for Korean. However, the optimal number of factors for maximum percent correctness was four, i.e., the spectral moments and tilts of the release bursts and the following vowels, the closure durations, and the VOTs. This suggests that the identities of the voiceless plosives are mostly determined by their internal cues and vowel-onset cues.
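
The incremental procedure described in this abstract (grow the factor set one group at a time and track the percentage of plosives correctly classified) can be outlined with an ordinary linear discriminant analysis. The sketch below is only an illustration of that design: the column names, the `plosive` label, and the five factor groups are assumptions, and the authors' actual Praat and statistics pipeline is not reproduced here.

```python
# Sketch: incremental linear discriminant analysis over acoustic factors,
# assuming a CSV of per-token measurements with hypothetical column names.
import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Hypothetical factor groups, added one at a time as in the study design.
FACTOR_GROUPS = [
    ["burst_moment1", "burst_moment2", "burst_tilt"],   # release-burst spectrum
    ["vowel_moment1", "vowel_moment2", "vowel_tilt"],   # vowel-onset spectrum
    ["closure_duration"],                                # closure duration
    ["vot"],                                             # voice onset time
    ["word_position", "utterance_position"],             # positional factors
]

def incremental_accuracy(csv_path: str, label_col: str = "plosive") -> None:
    data = pd.read_csv(csv_path)
    features: list[str] = []
    for i, group in enumerate(FACTOR_GROUPS, start=1):
        features.extend(group)
        X = pd.get_dummies(data[features])  # encode any categorical factors
        scores = cross_val_score(LinearDiscriminantAnalysis(), X, data[label_col], cv=5)
        print(f"{i} factor group(s): {scores.mean() * 100:.1f}% correct")

# Example call (hypothetical file): incremental_accuracy("english_plosives.csv")
```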

Long-Term Follow-Up Study of Young Adults Treated for Unilateral Complete Cleft Lip, Alveolus, and Palate by a Treatment Protocol Including Two-Stage Palatoplasty: Speech Outcomes

  • Kappen, Isabelle Francisca Petronella Maria;Bittermann, Dirk;Janssen, Laura;Bittermann, Gerhard Koendert Pieter;Boonacker, Chantal;Haverkamp, Sarah;de Wilde, Hester;Van Der Heul, Marise;Specken, Tom FJMC;Koole, Ron;Kon, Moshe;Breugem, Corstiaan Cornelis;van der Molen, Aebele Barber Mink
    • Archives of Plastic Surgery / v.44 no.3 / pp.202-209 / 2017
  • Background No consensus exists on the optimal treatment protocol for orofacial clefts or the optimal timing of cleft palate closure. This study investigated factors influencing speech outcomes after two-stage palate repair in adults with a non-syndromal complete unilateral cleft lip and palate (UCLP). Methods This was a retrospective analysis of adult patients with a UCLP who underwent two-stage palate closure and were treated at our tertiary cleft centre. Patients ≥17 years of age were invited for a final speech assessment. Their medical history was obtained from their medical files, and speech outcomes were assessed by a speech pathologist during the follow-up consultation. Results Forty-eight patients were included in the analysis, with a mean age of 21 years (standard deviation, 3.4 years). Their mean ages at hard and soft palate closure were 3 years and 8.0 months, respectively. In 40% of the patients, a pharyngoplasty was performed. On a 5-point intelligibility scale, 84.4% received a score of 1 or 2, meaning that their speech was intelligible. We observed a significant correlation between intelligibility scores and the incidence of articulation errors (P<0.001). In total, 36% showed mild to moderate hypernasality during the speech assessment, and 11%-17% of the patients exhibited increased nasalance scores, assessed through nasometry. Conclusions The present study describes long-term speech outcomes after two-stage palatoplasty with hard palate closure at a mean age of 3 years. We observed moderate long-term intelligibility scores, a relatively high incidence of persistent hypernasality, and a high incidence of pharyngoplasty.

Prosodic Features at "Sentence Boundaries" in Oral Presentations

  • Umesaki, Atsuko-Furuta
    • MALSORI / no.41 / pp.83-96 / 2001
  • It is generally said that falling intonation is used at the end of a declarative sentence. However, this is not the case with all stretches of spontaneous speech which are marked in transcription as sentences. The present paper examines intonation patterns appearing at the end of declarative sentences in oral presentations, and discusses instances where falling intonation does not appear. The texts used for analysis are eight oral presentations collected at international conferences in the field of physics. Quantitative and qualitative analyses are carried out. Three major factors related to discourse structure have been found for non-occurrence of falling intonation at sentence boundaries.

Prosodic Features at "Sentence Boundaries" in Oral Presentations

  • Umesaki, Atsuko-Furuta
    • Proceedings of the KSPS conference / 2000.07a / pp.149-164 / 2000
  • It is generally said that falling intonation is used at the end of a declarative sentence. However, this is not the case with all stretches of spontaneous speech which are marked in transcription as sentences. The present paper examines intonation patterns appearing at the end of declarative sentences in oral presentations, and discusses instances where falling intonation does not appear. The texts used for analysis are eight oral presentations collected at international conferences in the field of physics. Quantitative and qualitative analyses are carried out. Three major factors related to discourse structure have been found for nonoccurrence of falling intonation at sentence boundaries.

Design of Emotion Recognition Model Using fuzzy Logic (퍼지 로직을 이용한 감정인식 모델설계)

  • 김이곤;배영철
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2000.05a / pp.268-282 / 2000
  • Speech is one of the most efficient communication media and carries several kinds of information about the speaker, the context, emotion, and so on. Human emotion is expressed in speech, gesture, and physiological phenomena (breathing, pulse rate, etc.). In this paper, a method for recognizing emotion from a speaker's voice signal is presented and simulated using a neuro-fuzzy model.

Design of Emotion Recognition Using Speech Signals (음성신호를 이용한 감정인식 모델설계)

  • 김이곤;김서영;하종필
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2001.10a / pp.265-270 / 2001
  • Voice is one of the most efficient communication media and carries several kinds of information about the speaker, the context, emotion, and so on. Human emotion is expressed in speech, gesture, and physiological phenomena (breathing, pulse rate, etc.). In this paper, a method for recognizing emotion from a speaker's voice signal is presented and simulated using a neuro-fuzzy model.
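
Neither of the two abstracts above gives implementation detail, but the general idea of mapping prosodic measurements onto emotion labels through fuzzy membership functions can be illustrated as follows. Everything in this sketch, including the feature set, the membership breakpoints, and the rules, is a hypothetical stand-in rather than the authors' neuro-fuzzy model.

```python
# Sketch: rule-based fuzzy scoring of emotions from prosodic features.
# All membership functions and rules here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ProsodicFeatures:
    mean_pitch_hz: float   # average F0 of the utterance
    pitch_range_hz: float  # F0 max minus F0 min
    energy_db: float       # mean intensity
    speech_rate: float     # syllables per second

def tri(x: float, lo: float, mid: float, hi: float) -> float:
    """Triangular membership function returning a degree in [0, 1]."""
    if x <= lo or x >= hi:
        return 0.0
    return (x - lo) / (mid - lo) if x < mid else (hi - x) / (hi - mid)

def emotion_scores(f: ProsodicFeatures) -> dict[str, float]:
    # Fuzzy degrees for linguistic terms (hypothetical breakpoints).
    high_pitch = tri(f.mean_pitch_hz, 180, 260, 340)
    low_pitch = tri(f.mean_pitch_hz, 60, 120, 180)
    loud = tri(f.energy_db, 60, 75, 90)
    fast = tri(f.speech_rate, 4, 6, 8)
    wide_range = tri(f.pitch_range_hz, 80, 160, 240)

    # Simple min/max fuzzy rules combining the linguistic terms.
    return {
        "anger":   min(high_pitch, loud, fast),
        "joy":     min(high_pitch, wide_range),
        "sadness": min(low_pitch, 1.0 - loud),
        "neutral": 1.0 - max(high_pitch, loud, wide_range),
    }

print(emotion_scores(ProsodicFeatures(240, 180, 80, 6.5)))
```

In a neuro-fuzzy system, the breakpoints and rule weights hard-coded here would instead be tuned from labeled speech data.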

Frame Reliability Weighting for Robust Speech Recognition (프레임 신뢰도 가중에 의한 강인한 음성인식)

  • 조훈영;김락용;오영환
    • The Journal of the Acoustical Society of Korea / v.21 no.3 / pp.323-329 / 2002
  • This paper proposes a frame reliability weighting method to compensate for time-selective noise that occurs at random positions in the speech signal, contaminating only certain parts of it. Speech frames have different degrees of reliability, and the reliability is proportional to the SNR (signal-to-noise ratio). While it is feasible to estimate frame SNR using noise information from non-speech intervals under stationary noise conditions, it is difficult to obtain the noise spectrum for time-selective noise. Therefore, we used statistical models of clean speech to estimate the frame reliability. The proposed MFR (model-based frame reliability) approximates frame SNR values using filterbank energy vectors obtained by inverse transformation of the input MFCC (mel-frequency cepstral coefficient) vectors and the mean vectors of a reference model. Experiments on various burst noises revealed that the proposed method represents frame reliability effectively. Recognition performance was improved by using the MFR values as weighting factors in the likelihood calculation step.
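
As described, the core of MFR is to map each frame's MFCC vector back into the filterbank-energy domain, compare it against a clean-speech reference, and turn the resulting SNR estimate into a per-frame weight on the frame log-likelihood. The NumPy sketch below follows that outline only; the inversion details, the SNR proxy, and the sigmoid mapping are assumptions, not the published formulation.

```python
# Sketch: model-based frame reliability (MFR) weighting, assumed form.
import numpy as np
from scipy.fftpack import idct

def mfcc_to_log_fbank(mfcc: np.ndarray, n_filters: int = 24) -> np.ndarray:
    """Approximate log filterbank energies by inverting the DCT (frames x filters)."""
    return idct(mfcc, type=2, n=n_filters, axis=-1, norm="ortho")

def frame_reliability(mfcc_frames: np.ndarray,
                      clean_mean_mfcc: np.ndarray,
                      alpha: float = 0.2) -> np.ndarray:
    """Per-frame weights in (0, 1): high when a frame looks like clean speech."""
    noisy_fbank = mfcc_to_log_fbank(mfcc_frames)
    clean_fbank = mfcc_to_log_fbank(clean_mean_mfcc)
    # Crude per-frame SNR proxy: energy in excess of the clean reference is
    # attributed to noise (assumed, dB-like units).
    snr_proxy = -(noisy_fbank - clean_fbank).clip(min=0).mean(axis=-1)
    return 1.0 / (1.0 + np.exp(-alpha * (snr_proxy + 10)))  # sigmoid mapping (assumed)

def weighted_log_likelihood(frame_loglik: np.ndarray, weights: np.ndarray) -> float:
    """Down-weight unreliable frames when accumulating the utterance score."""
    return float(np.sum(weights * frame_loglik))
```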

An Efficient Transcoding Algorithm For G.723.1 and EVRC Speech Coders (G.723.1 음성부호화기와 EVRC 음성부호화기의 상호 부호화 알고리듬)

  • 김경태;정성교;윤성완;박영철;윤대희;최용수;강태익
    • The Journal of Korean Institute of Communications and Information Sciences / v.28 no.5C / pp.548-554 / 2003
  • Interoperability is one of the most important factors for a successful integration of speech networks. To enable communication between endpoints employing different speech coders, the decoder and encoder of each endpoint's coder must be placed in tandem. However, tandem coding often produces problems such as poor speech quality, high computational load, and additional transmission delay. In this paper, we propose an efficient transcoding algorithm that can provide interoperability to networks employing the ITU-T G.723.1 [1] and TIA IS-127 EVRC [2] speech coders. The proposed transcoding algorithm is composed of four parts: LSP conversion, open-loop pitch conversion, fast adaptive codebook search, and fast fixed codebook search. Subjective and objective quality evaluations confirmed that the speech quality produced by the proposed transcoding algorithm was equivalent to, or better than, tandem coding, while it had a shorter processing delay and lower computational complexity, as verified by an implementation on the TMS320C62x.
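
Parameter-domain transcoding of this kind avoids a full decode and re-encode by mapping the source coder's parameters directly into the target coder's domain; the fast codebook searches then start from the converted parameters instead of searching from scratch. The skeleton below only illustrates that structure: the data layout and the conversion stubs are assumptions and do not reflect the actual G.723.1 or EVRC frame formats or bit allocations.

```python
# Sketch: parameter-domain transcoder skeleton (all stages are simplified stubs).
from dataclasses import dataclass

@dataclass
class CelpParams:
    lsp: list[float]          # line spectral pairs for the frame
    pitch_lag: int            # open-loop pitch estimate
    adaptive_gain: float      # adaptive-codebook gain
    fixed_pulses: list[int]   # fixed-codebook pulse positions

def convert_lsp(src_lsp: list[float]) -> list[float]:
    # Re-interpolate and re-quantize LSPs for the target coder's frame length
    # and codebook (stub: passes values through unchanged).
    return src_lsp

def convert_pitch(src_lag: int, src_rate: int = 8000, dst_rate: int = 8000) -> int:
    # Rescale the open-loop lag if the two coders' conventions differ (stub).
    return round(src_lag * dst_rate / src_rate)

def transcode_frame(src: CelpParams) -> CelpParams:
    lsp = convert_lsp(src.lsp)
    lag = convert_pitch(src.pitch_lag)
    # In the paper's scheme, the fast adaptive- and fixed-codebook searches
    # would refine these converted parameters rather than search from scratch,
    # which is where the complexity saving comes from.
    return CelpParams(lsp=lsp, pitch_lag=lag,
                      adaptive_gain=src.adaptive_gain,
                      fixed_pulses=src.fixed_pulses)
```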