• Title/Summary/Keyword: Intelligibility

Effects of Listener's Experience, Severity of Speaker's Articulation, and Linguistic Cues on Speech Intelligibility in Congenitally Deafened Adults with Cochlear Implants (청자의 경험, 화자의 조음 중증도, 단서 유형이 인공와우이식 선천성 농 성인의 말명료도에 미치는 영향)

  • Lee, Young-Mee;Sung, Jee-Eun;Park, Jeong-Mi;Sim, Hyun-Sub
    • Phonetics and Speech Sciences, v.3 no.1, pp.125-134, 2011
  • The current study investigated the effects of listeners' experience with deaf speech, the severity of the speaker's articulation, and linguistic cues on the speech intelligibility of congenitally deafened adults with cochlear implants. Speech intelligibility was judged by 28 experienced and 40 inexperienced listeners using a word transcription task. A three-way (2 × 2 × 4) mixed design was used, with experience with deaf speech (experienced/inexperienced listener) as a between-subjects factor and the severity of the speaker's articulation (mild-to-moderate/moderate-to-severe) and linguistic cues (no/phonetic/semantic/combined) as within-subjects factors. The dependent measure was the number of correctly transcribed words. All three main effects were statistically significant. Experienced listeners transcribed better than inexperienced listeners, and listeners transcribed mild-to-moderate speakers better than moderate-to-severe speakers. There were significant differences in speech intelligibility among the four cue types, with the combined cues providing the greatest enhancement of the intelligibility scores (combined > semantic > phonetic > no cue). Three two-way interactions were statistically significant, indicating that cue type and speaker severity differentiated experienced from inexperienced listeners. The results suggest that combining linguistic cues increases the speech intelligibility of congenitally deafened adults with cochlear implants, and that experience with deaf speech is especially critical when evaluating the speech intelligibility of severe speakers as compared with mild speakers.

A Study on the Intelligibility of Esophageal Speech (식도발성 발화의 명료도에 대한 연구)

  • Pyo, Hwa-Young
    • The Journal of the Acoustical Society of Korea, v.26 no.5, pp.182-187, 2007
  • The present study investigated the intelligibility of esophageal speech, the method by which laryngectomized people, who have lost their voices through total laryngectomy, phonate using airstream driven into the esophagus rather than the trachea. Three normal listeners transcribed CVV and VCV syllables produced by 10 esophageal speakers. Overall intelligibility of esophageal speech was 27%. Affricates showed the highest intelligibility and fricatives the lowest. With respect to place of articulation, palatals were the most intelligible and alveolars the least. Most aspirated consonants showed low intelligibility, and consonants in VCV syllables were more intelligible than those in CVV syllables. The low intelligibility of esophageal speakers is due to insufficient airflow intake into the esophagus; therefore, training to increase airflow intake, together with correct articulation training, should improve their intelligibility.

The Lombard effect on the speech of children with intellectual disability (지적장애 아동의 롬바드 효과에 따른 말산출 특성)

  • Lee, Hyunju;Lee, Jiyun;Kim, Yukyung
    • Phonetics and Speech Sciences, v.8 no.4, pp.115-122, 2016
  • This study investigates the acoustic-phonetic features and speech intelligibility of Lombard speech in children with intellectual disability by examining speech produced at three noise levels: no noise, 55 dB, and 65 dB. Eight children with intellectual disability read sentences and played speaking games, and their speech was analyzed in terms of intensity, pitch, vowel space of /a/, /i/, and /u/, VAI(3), articulation rate, and speech intelligibility. Results showed, first, that intensity and pitch increased as the noise level increased; second, that VAI(3) increased as the noise level increased; third, that articulation rate decreased as noise intensity increased; and finally, that speech intelligibility increased as noise intensity increased. Lombard speech thus changed the vowel space, VAI(3), articulation rate, and speech intelligibility of children with intellectual disability. The study suggests that Lombard speech will be clinically useful for persons who have intellectual disability and difficulty with self-control.
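
The VAI(3) measure referenced in this abstract is commonly computed from the first two formants of the corner vowels /a/, /i/, and /u/. A minimal sketch of that computation follows; the formant values are illustrative placeholders, not data from the study.

```python
# Triangular Vowel Articulation Index, VAI(3), from F1/F2 (Hz) of the
# corner vowels /a/, /i/, /u/. Formant values below are hypothetical.

def vai3(f1_a, f2_a, f1_i, f2_i, f1_u, f2_u):
    """VAI(3) = (F2/i/ + F1/a/) / (F1/i/ + F1/u/ + F2/u/ + F2/a/).
    Larger values indicate a more expanded vowel working space."""
    return (f2_i + f1_a) / (f1_i + f1_u + f2_u + f2_a)

# Hypothetical formants for quiet vs. 65 dB noise (Lombard) speech
quiet   = vai3(f1_a=750, f2_a=1300, f1_i=320, f2_i=2300, f1_u=350, f2_u=800)
lombard = vai3(f1_a=800, f2_a=1250, f1_i=300, f2_i=2450, f1_u=330, f2_u=750)
```

An increase in VAI(3) under noise, as reported above, corresponds to `lombard > quiet` in this sketch.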

Performance Enhancement of Speech Intelligibility in Communication System Using Combined Beamforming (directional microphone) and Speech Filtering Method (방향성 마이크로폰과 음성 필터링을 이용한 통신 시스템의 음성 인지도 향상)

  • Shin, Min-Cheol;Wang, Se-Myung
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference, 2005.05a, pp.334-337, 2005
  • Speech intelligibility is one of the most important factors in a communication system and is related to the speech-to-noise ratio. To enhance the speech-to-noise ratio, background noise reduction techniques are being developed. As part of a solution for noise reduction, this paper introduces a directional microphone using a beamforming method together with a speech filtering method. The directional microphone narrows the spatial range of the processed signal to the direction of the target speech signal; however, noise located in the same direction as the speech still remains. To separate this mixed signal into speech and noise, a speech filtering method, based on the characteristics of the speech signal itself, is then applied to extract only the speech from the processed signal. The combined directional microphone and speech filtering method enhances speech intelligibility in a communication system.
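
The abstract does not specify the beamformer used; a minimal delay-and-sum sketch for a uniform linear array, with illustrative geometry and signals, conveys the idea of steering toward the target direction:

```python
import numpy as np

def delay_and_sum(x, mic_positions, theta, fs, c=343.0):
    """Steer an M-microphone array toward angle theta (radians).
    x: (M, N) time signals; mic_positions: (M,) positions along the array (m)."""
    M, N = x.shape
    delays = mic_positions * np.sin(theta) / c       # per-mic steering delay (s)
    freqs = np.fft.rfftfreq(N, d=1.0 / fs)
    X = np.fft.rfft(x, axis=1)
    # Compensate each channel's delay in the frequency domain, then average
    phase = np.exp(2j * np.pi * np.outer(delays, freqs))
    return np.fft.irfft((X * phase).mean(axis=0), n=N)

fs = 16000
t = np.arange(1024) / fs
mics = np.array([0.0, 0.05, 0.10, 0.15])             # 4 mics, 5 cm spacing
theta = 0.0                                          # broadside target
clean = np.sin(2 * np.pi * 500 * t)                  # arrives at all mics at once
rng = np.random.default_rng(1)
x = np.tile(clean, (4, 1)) + 0.3 * rng.standard_normal((4, len(t)))
y = delay_and_sum(x, mics, theta, fs)
```

Averaging the aligned channels attenuates spatially uncorrelated noise while preserving the target, which is the intelligibility gain the paper's directional stage aims for; noise from the target direction survives and motivates the follow-up filtering step.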

Speech Intelligibility Analysis on the Vibration Sound of the Window Glass of a Conference Room (회의실 유리창 진동음의 명료도 분석)

  • Kim, Yoon-Ho;Kim, Hee-Dong;Kim, Seock-Hyun
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference, 2006.11a, pp.150-155, 2006
  • Speech intelligibility is investigated for a coupled conference room-window glass system. Using an MLS (Maximum Length Sequence) signal as the sound source, acceleration and velocity responses of the window glass are measured with an accelerometer and a laser Doppler vibrometer. The MTF (Modulation Transfer Function) is used to identify the speech transmission characteristics of the room and window system, and the STI (Speech Transmission Index) is calculated from the MTF to estimate the speech intelligibility of the room and the window glass. Speech intelligibilities obtained from the acceleration and velocity signals are compared, and the possibility of wiretapping is investigated. Finally, the intelligibility of the conversation sound is examined by a subjective test.
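
The MTF-to-STI step mentioned here follows a standard scheme (apparent SNR per band and modulation frequency, clipped to ±15 dB, then weighted across octave bands). A sketch with one classic band weighting, not necessarily the weights this paper used:

```python
import numpy as np

# Octave-band weights (125 Hz..8 kHz), one classic Steeneken & Houtgast set
BAND_WEIGHTS = np.array([0.13, 0.14, 0.11, 0.12, 0.19, 0.17, 0.14])

def sti_from_mtf(m):
    """m: (7, 14) modulation transfer values in (0, 1):
    7 octave bands x 14 modulation frequencies (0.63..12.5 Hz)."""
    m = np.clip(m, 1e-6, 1 - 1e-6)
    snr_app = 10 * np.log10(m / (1 - m))      # apparent SNR per cell (dB)
    snr_app = np.clip(snr_app, -15.0, 15.0)   # limit to +/-15 dB
    ti = (snr_app + 15.0) / 30.0              # transmission index, 0..1
    mti = ti.mean(axis=1)                     # modulation transfer index per band
    return float(BAND_WEIGHTS @ mti)          # weighted average -> STI

perfect = sti_from_mtf(np.full((7, 14), 0.999))   # near-ideal transmission
degraded = sti_from_mtf(np.full((7, 14), 0.5))    # strong modulation loss
```

An undistorted channel (m close to 1) yields an STI near 1.0, while m = 0.5 everywhere gives 0.5, so the index directly reflects how much speech envelope modulation the room-window path preserves.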

The Effect of the Disturbing Wave on the Speech Intelligibility of the Eavesdropping Sound of a Window Glass (교란파가 유리창 진동음의 음성명료도에 미치는 영향)

  • Kim, Seock-Hyun;Kim, Hee-Dong;Heo, Wook
    • Transactions of the Korean Society for Noise and Vibration Engineering, v.17 no.9, pp.888-894, 2007
  • Speech sound can be detected by measuring the vibration of a window glass. In this study, we investigate the effect of disturbing waves, produced by background noise and window shaker excitation, on the speech intelligibility of the detected sound. Based on the Modulation Transfer Function (MTF), the speech intelligibility of the sound is objectively estimated by the Speech Transmission Index (STI). As the level of the disturbing wave varies, the variation of the speech intelligibility is examined. The experimental results reveal how the STI is influenced by the level and frequency characteristics of the disturbing wave. Using a customized window shaker as the disturbing sound source, we evaluate the efficiency and frequency characteristics of the anti-eavesdropping system. The purpose of the study is to provide useful information for preventing eavesdropping through window glass.

Binary Mask Criteria Based on Distortion Constraints Induced by a Gain Function for Speech Enhancement

  • Kim, Gibak
    • IEIE Transactions on Smart Processing and Computing, v.2 no.4, pp.197-202, 2013
  • Large gains in speech intelligibility can be obtained using the SNR-based binary mask approach. This approach retains the time-frequency (T-F) units of the mixture signal where the target signal is stronger than the interfering noise (masker) (e.g., SNR > 0 dB) and removes the T-F units where the interfering noise is dominant. This paper introduces two alternative binary masks based on distortion constraints to improve speech intelligibility. The distortion constraints are induced by a gain function for estimating the short-time spectral amplitude. One binary mask is designed to retain the speech-underestimated T-F units while removing the speech-overestimated T-F units; the other is designed to retain the noise-overestimated T-F units while removing the noise-underestimated T-F units. Listening tests with oracle binary masks were conducted to assess the potential of the two binary masks for improving intelligibility. The results suggest that the two distortion-constraint-based binary masks can provide large gains in intelligibility when applied to noise-corrupted speech.
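
The baseline SNR-based binary mask that this paper builds on can be sketched as follows; the spectrogram shapes and random "signals" are illustrative only, and the paper's distortion-constraint masks are not reproduced here.

```python
import numpy as np

def ideal_binary_mask(target_stft, masker_stft, threshold_db=0.0):
    """Return a 0/1 mask over T-F units: keep units where the target
    dominates the masker (local SNR above threshold_db), drop the rest."""
    eps = 1e-12
    snr_db = 10 * np.log10((np.abs(target_stft) ** 2 + eps) /
                           (np.abs(masker_stft) ** 2 + eps))
    return (snr_db > threshold_db).astype(float)

rng = np.random.default_rng(0)
target = rng.standard_normal((257, 100))   # toy spectrogram, 257 bins x 100 frames
masker = rng.standard_normal((257, 100))
mixture = target + masker
mask = ideal_binary_mask(target, masker)
masked = mask * mixture                    # retain target-dominant units only
```

This is an oracle mask (it needs the clean target and masker separately), which is exactly the setting of the listening tests described in the abstract.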

Articulation Characteristics of Preschool Children in the Bilingual Environment (학령전 이중언어 환경 아동의 조음특성)

  • Kwon, Mi-Ji;Park, Sang-Hee;Seok, Dong-Il
    • Speech Sciences, v.14 no.2, pp.73-87, 2007
  • The aim of this study was to examine the articulation characteristics of preschool children in bilingual and monolingual environments. Subjects were 23 children aged 4 to 6 in a bilingual environment and 19 children in a monolingual environment. Their speech was evaluated for articulation correctness and intelligibility by the author and a speech therapist. Results were as follows. First, there were significant differences between the bilingual and monolingual children in the percentage of consonants correctly articulated, but no significant difference across language environments or ages in the percentage of vowels correctly articulated. Second, there were significant differences between the bilingual and monolingual children in word intelligibility, as well as in sentence intelligibility, and there was a high positive correlation between word and sentence intelligibility.

Non-Intrusive Speech Intelligibility Estimation Using Autoencoder Features with Background Noise Information

  • Jeong, Yue Ri;Choi, Seung Ho
    • International Journal of Internet, Broadcasting and Communication, v.12 no.3, pp.220-225, 2020
  • This paper investigates a non-intrusive speech intelligibility estimation method for noisy environments in which the bottleneck feature of an autoencoder is used as the input to a neural network. The bottleneck-feature-based method suffers severe performance degradation when the noise environment changes. To overcome this problem, we propose a novel non-intrusive estimation method that adds noise environment information, along with the bottleneck feature, to the input of a long short-term memory (LSTM) neural network whose output is a short-time objective intelligibility (STOI) score, a standard tool for measuring intrusive speech intelligibility against reference speech signals. In experiments across various noise environments, the proposed method showed improved performance when the noise environment was the same, and the improvement over conventional methods was particularly significant in differing environments. We therefore conclude that the proposed method can be successfully used for non-intrusive speech intelligibility estimation in various noise environments.
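
The input construction the abstract describes, bottleneck features augmented with noise environment information per frame, can be sketched minimally. The feature dimensions and the one-hot encoding of the noise environment are assumptions, not details given in the abstract.

```python
import numpy as np

def build_lstm_input(bottleneck, noise_id, n_noise_types):
    """bottleneck: (T, D) per-frame autoencoder features; noise_id: index
    of the noise environment. Returns (T, D + n_noise_types) LSTM input."""
    T = bottleneck.shape[0]
    noise_vec = np.zeros((T, n_noise_types))
    noise_vec[:, noise_id] = 1.0          # same one-hot code repeated per frame
    return np.concatenate([bottleneck, noise_vec], axis=1)

frames = np.random.randn(120, 32)         # 120 frames of 32-dim bottleneck features
x = build_lstm_input(frames, noise_id=2, n_noise_types=5)
```

The augmented sequence `x` would then be fed to an LSTM trained to regress the STOI score of the noisy utterance.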

Effects of Phonetic Complexity and Articulatory Severity on Percentage of Correct Consonant and Speech Intelligibility in Adults with Dysarthria (조음복잡성 및 조음중증도에 따른 마비말장애인의 자음정확도와 말명료도)

  • Song, HanNae;Lee, Youngmee;Sim, HyunSub;Sung, JeeEun
    • Phonetics and Speech Sciences, v.5 no.1, pp.39-46, 2013
  • This study examined the effects of phonetic complexity and articulatory severity on the percentage of correct consonants (PCC) and speech intelligibility in adults with dysarthria. Speech samples of thirty-two words from the APAC (Assessment of Phonology and Articulation for Children) were collected from 38 dysarthric speakers at one of two levels of articulatory severity (mild or mild-moderate). PCC and speech intelligibility scores were calculated at each of the four levels of phonetic complexity. Two-way mixed ANOVA revealed that: (1) the mild-severity group showed significantly higher PCC and speech intelligibility scores than the mild-moderate group; (2) PCC at phonetic complexity level 4 was significantly lower than at the other levels; and (3) an interaction effect of articulatory severity and phonetic complexity was observed only for the PCC. Pearson correlation analysis demonstrated that the degree of correlation between PCC and speech intelligibility varied depending on the level of articulatory severity and phonetic complexity. The clinical implications of the findings are discussed.