• Title/Abstract/Keyword: speech aid

Search results: 104 (processing time: 0.023 s)

Using a Speech Aid for Treatment of Velopharyngeal Incompetency in Incomplete Cleft Palate: A Case Report (From Placement to Removal)

  • 임대호;윤보근;백진아;신효근
    • Maxillofacial Plastic and Reconstructive Surgery / Vol. 28 No. 5 / pp. 483-488 / 2006
  • Velopharyngeal function refers to the combined activity of the soft palate and pharynx in closing and opening the velopharyngeal port to the required degree. In normal speech, the muscles of the palate and pharynx act as a sphincter, sealing the oropharynx off from the nasopharynx during the production of oral consonant sounds. Inadequate velopharyngeal function, caused by neurologic disorders (cerebral apoplexy), degenerative diseases (disseminated sclerosis, Parkinson's disease), or congenital deformities (cleft palate, cerebral palsy, etc.), may result in abnormal speech characterized by hypernasality, nasal emission, and decreased intelligibility due to weak consonant production. In this study, we constructed a speech-aid prosthesis (speech bulb) for an incomplete cleft palate patient with velopharyngeal incompetence and hypernasality, and assessed velopharyngeal function with a nasometer, which evaluates speech characteristics objectively.
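
The nasometer assessment mentioned above conventionally reports nasalance, the ratio of nasal to total (nasal plus oral) acoustic energy. A minimal sketch, assuming two already-separated microphone channels; the function name and sample values are illustrative:

```python
def nasalance_percent(nasal, oral):
    """Nasalance (%) = nasal energy / (nasal + oral energy) * 100.

    `nasal` and `oral` are sample sequences from the nasal and oral
    microphones of a nasometer-style headset.
    """
    nasal_energy = sum(s * s for s in nasal)
    oral_energy = sum(s * s for s in oral)
    total = nasal_energy + oral_energy
    if total == 0:
        return 0.0
    return 100.0 * nasal_energy / total

# Hypernasal speech shows elevated nasalance on oral-consonant passages.
print(nasalance_percent([0.5, -0.5], [0.5, -0.5]))  # balanced energy -> 50.0
```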

An evaluation of Korean students' pronunciation of an English passage by a speech recognition application and two human raters

  • Yang, Byunggon
    • 말소리와 음성과학 / Vol. 12 No. 4 / pp. 19-25 / 2020
  • This study examined thirty-one Korean students' pronunciation of an English passage using a speech recognition application, Speechnotes, and two Canadian raters' evaluations of their speech according to the International English Language Testing System (IELTS) band criteria, to assess the application's potential as a teaching aid for pronunciation education. The results showed that the grand average percentage of correctly recognized words was 77.7%. From this moderate recognition rate, the participants' pronunciation level was construed as intermediate or higher. The recognition rate varied depending on the composition of content words and function words in each given sentence. Frequency counts of unrecognized words by group level and word type revealed the participants' typical pronunciation problems, including fricatives and nasals. The IELTS bands chosen by the two native raters for the rainbow passage had a moderately high correlation with each other. A moderate correlation was found between the number of correctly recognized content words and the raters' bands, while an almost negligible correlation was found between the function words and the raters' bands. From these results, the author concludes that the speech recognition application could serve as a partial aid for diagnosing an individual's or a group's pronunciation problems, though further studies are needed to match human raters.
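
The recognition rate and correlations reported above reduce to elementary statistics. A hedged sketch with invented word lists and scores (not the study's data):

```python
def recognition_rate(reference_words, recognized_words):
    """Percentage of reference words found in the ASR transcript."""
    recognized = set(recognized_words)
    hits = sum(1 for w in reference_words if w in recognized)
    return 100.0 * hits / len(reference_words)

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

ref = "when the sunlight strikes raindrops in the air".split()
hyp = "when the sunlight strikes rain drops in the air".split()
print(recognition_rate(ref, hyp))  # "raindrops" missed -> 87.5
# Illustrative rater bands vs. recognized-word counts:
print(pearson_r([5, 6, 7, 8], [5.5, 6, 7.5, 7.5]))
```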

A Train Ticket Reservation Aid System Using Automated Call Routing Based on Speech Recognition

  • 심유진;김재인;구명완
    • 대한음성학회지:말소리 / No. 52 / pp. 161-169 / 2004
  • This paper describes an automated call-routing system for train ticket reservation based on speech recognition. We focus on routing telephone calls automatically from the user's fluently spoken response, instead of touch-tone menus, in an interactive voice response system. A vector-based call-routing algorithm is investigated, and a mapping table for key terms is proposed. The Korail database collected by KT is used for the call-routing experiments, and call-classification performance is evaluated on transcribed text from that database. With small training data, an average call-routing error reduction of 14% is observed when the mapping table is used.
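
A vector-based router of the kind described scores each destination by the similarity between the caller's terms and per-route keyword vectors, with a mapping table normalizing surface terms to key terms. A minimal sketch; the routes, keywords, and weights below are hypothetical, not Korail's:

```python
# Hypothetical mapping table: surface terms -> canonical key terms.
MAPPING = {"book": "reserve", "booking": "reserve", "cancel": "refund"}

# Hypothetical route keyword profiles (term -> weight).
ROUTES = {
    "reservation": {"reserve": 1.0, "ticket": 0.5, "seat": 0.5},
    "refund": {"refund": 1.0, "ticket": 0.3},
    "schedule": {"train": 0.6, "time": 1.0, "depart": 0.8},
}

def route_call(utterance):
    """Return the route whose keyword vector is most similar to the utterance."""
    terms = [MAPPING.get(w, w) for w in utterance.lower().split()]
    query = {}
    for t in terms:
        query[t] = query.get(t, 0.0) + 1.0

    def cosine(a, b):
        dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in a)
        na = sum(v * v for v in a.values()) ** 0.5
        nb = sum(v * v for v in b.values()) ** 0.5
        return dot / (na * nb) if na and nb else 0.0

    return max(ROUTES, key=lambda r: cosine(query, ROUTES[r]))

print(route_call("I want to book a ticket"))  # -> reservation
```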


Novel Speech Web Architecture Based on Information Selection Agent

  • Kwon, Hyeong-Joon;Kinoshita, Tetsuo
    • International Journal of Advanced Culture Technology / Vol. 1 No. 1 / pp. 11-14 / 2013
  • In this paper, we propose a prototype SpeechWeb application that uses an information selection agent. We describe its design and implementation and illustrate the processing results with screenshots. The proposed SpeechWeb application presents associated content to the user through dynamic voice-anchors. This content is selected with the Apriori algorithm, one of the data mining techniques. The application improves on the existing user-initiative structure in terms of inducing the user's interest. Moreover, we believe the proposed application is effective for information retrieval over wired and wireless telephone networks.
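
The Apriori algorithm named above mines frequently co-occurring items, which is how content for the dynamic voice-anchors can be chosen. A minimal frequent-itemset sketch; the browsing sessions and support threshold are illustrative:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return frequent itemsets (frozenset -> support count)."""
    items = {frozenset([i]) for t in transactions for i in t}
    frequent, current = {}, items
    while current:
        counts = {c: sum(1 for t in transactions if c <= t) for c in current}
        survivors = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(survivors)
        # Candidate generation: join k-itemsets into (k+1)-itemsets.
        keys = list(survivors)
        current = {a | b for a, b in combinations(keys, 2)
                   if len(a | b) == len(a) + 1}
    return frequent

sessions = [{"news", "weather"}, {"news", "sports"}, {"news", "weather", "sports"}]
freq = apriori([frozenset(s) for s in sessions], min_support=2)
print(freq[frozenset({"news", "weather"})])  # co-visited twice -> anchor candidate
```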


Speech Intelligibility and Sonagraphic Evaluation of an Experimental Obturator-type Electrolarynx

  • 김기령;홍원표;김광문;심윤주;이승철;김경수;이문재
    • 대한후두음성언어의학회지 / Vol. 3 No. 1 / pp. 6-12 / 1989
  • Methods of voice rehabilitation in laryngectomees include training in esophageal speech, use of an electrolarynx or a pneumatic speech aid, and surgical methods. In this paper, we introduce an experimental obturator-type electrolarynx with several practical advantages: it is easy to learn, has no disagreeable appearance, and leaves both hands free. We compared it with normal voice and with other voice rehabilitation methods (esophageal voice, a Japanese pneumatic speech aid, and a cervical electrolarynx) in intelligibility and sonagraphic evaluation. The results were as follows: 1) The obturator-type electrolarynx exhibited the lowest intelligibility. 2) In the sonagraphic evaluation, the spectrogram produced by the obturator-type electrolarynx differed most from that of normal voice.


Investigating the Effects of Hearing Loss and Hearing Aid Digital Delay on Sound-Induced Flash Illusion

  • Moradi, Vahid;Kheirkhah, Kiana;Farahani, Saeid;Kavianpour, Iman
    • Journal of Audiology & Otology / Vol. 24 No. 4 / pp. 174-179 / 2020
  • Background and Objectives: The integration of auditory and visual speech information improves speech perception; however, if the auditory input is disrupted by hearing loss, auditory and visual inputs cannot be fully integrated. Temporal coincidence of auditory and visual input is also a significantly important factor in integrating the two senses, and the acoustic pathway of a hearing aid is time-delayed because the signal passes through digital signal processing. Therefore, this study aimed to investigate the effects of hearing loss and the hearing aid's digital delay circuit on the sound-induced flash illusion. Subjects and Methods: A total of 13 adults with normal hearing, 13 with mild-to-moderate hearing loss, and 13 with moderate-to-severe hearing loss were enrolled in this study. The sound-induced flash illusion test was then conducted, and the results were analyzed. Results: Hearing aid digital delay and hearing loss had no detrimental effect on the sound-induced flash illusion. Conclusions: The transmission velocity and neural transduction rate of auditory input decrease in patients with hearing loss, so the auditory and visual senses cannot be integrated completely; however, transmission of the auditory input was approximately normal when a hearing aid was prescribed. Thus, it can be concluded that the processing delay in the hearing aid circuit is insufficient to disrupt the integration of auditory and visual information.

A Speech Enhancement Algorithm Based on a Human Psychoacoustic Property

  • 전유용;이상민
    • 전기학회논문지 / Vol. 59 No. 6 / pp. 1120-1125 / 2010
  • In speech systems such as hearing aids and speech communication, speech quality is degraded by environmental noise. In this study, we propose an algorithm that reduces noise and reinforces speech in order to enhance quality degraded by environmental noise. The minima-controlled recursive averaging (MCRA) algorithm is used to estimate the noise spectrum, and a spectral weighting factor is used to reduce the noise. The partial masking effect, one of the properties of human hearing, is introduced to reinforce the speech. We then compared the waveform, spectrogram, Perceptual Evaluation of Speech Quality (PESQ), and segmental signal-to-noise ratio (segSNR) of the original speech, noisy speech, noise-reduced speech, and speech enhanced by the proposed method. The enhanced speech is reinforced in the high frequencies degraded by noise, and both PESQ and segSNR improve, indicating enhanced speech quality.
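
The noise estimation and spectral weighting above can be illustrated in heavily simplified form. The sketch below shows only the recursive-averaging and gain-weighting idea per frequency bin, not the full minima-controlled estimator of MCRA; parameter values are illustrative:

```python
def enhance_frame(noisy_mag, noise_est, alpha=0.9, floor=0.1):
    """One frame of simplified noise reduction.

    noisy_mag : magnitude spectrum of the current frame (list of bins)
    noise_est : running noise-magnitude estimate, updated in place
    Returns the gain-weighted (enhanced) magnitude spectrum.
    """
    enhanced = []
    for k, mag in enumerate(noisy_mag):
        # Recursive averaging of the noise estimate (MCRA-style smoothing).
        noise_est[k] = alpha * noise_est[k] + (1 - alpha) * mag
        # Spectral weighting: attenuate bins dominated by the noise estimate,
        # with a gain floor to limit musical-noise artifacts.
        gain = max(floor, 1.0 - noise_est[k] / mag) if mag > 0 else floor
        enhanced.append(gain * mag)
    return enhanced

noise = [1.0, 1.0]
frame = [5.0, 1.0]  # bin 0 carries speech energy, bin 1 is noise-like
print(enhance_frame(frame, noise))  # bin 0 mostly kept, bin 1 floored
```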

Auditory Recognition of Digit-in-Noise under Unaided and Aided Conditions in Moderate and Severe Sensorineural Hearing Loss

  • Aghasoleimani, Mina;Jalilvand, Hamid;Mahdavi, Mohammad Ebrahim;Ahmadi, Roghayeh
    • Journal of Audiology & Otology / Vol. 25 No. 2 / pp. 72-79 / 2021
  • Background and Objectives: The speech-in-noise test is typically performed using an audiometer. The results of the digit-in-noise recognition (DIN) test may be influenced by the flat frequency response of free-field audiometry and by the frequency response of a hearing aid fitted according to a fitting rationale. This study investigates the DIN test in unaided and aided conditions. Subjects and Methods: Thirty-four adults with moderate and severe sensorineural hearing loss (SNHL) participated in the study. The signal-to-noise ratio (SNR) for 50% correct on the DIN test was obtained in two conditions: 1) the unaided condition, performed using an audiometer in a free field; and 2) the aided condition, performed using a hearing aid with an unvented individual earmold fitted based on NAL-NL2. Results: The mean SNR for the severe SNHL group was significantly higher than that of the moderate SNHL group in both test conditions. In both groups, the SNR in the aided condition was significantly lower than in the unaided condition. Conclusions: Speech recognition in hearing-impaired patients is better assessed by fitting hearing aids according to an evidence-based fitting rationale than by the free-field audiometry measurement used in routine clinical setups.
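
The SNR for 50% correct is commonly tracked with a one-up one-down adaptive staircase, which converges on the 50% point of the psychometric function. A sketch with a simulated, deterministic listener; the threshold, step size, and trial count are illustrative, not this study's procedure:

```python
def din_srt(respond, start_snr=0.0, step=2.0, n_trials=20):
    """One-up one-down staircase; returns the mean SNR of the last 10 trials.

    respond(snr) -> True if the listener repeats the digit triplet correctly.
    The 1-up/1-down rule converges on the 50%-correct SNR.
    """
    snr, track = start_snr, []
    for _ in range(n_trials):
        track.append(snr)
        snr += -step if respond(snr) else step  # harder after correct, easier after error
    return sum(track[-10:]) / 10

# Simulated listener: always correct at or above a -8 dB "true" threshold,
# so the track settles into oscillation between -8 and -10 dB.
listener = lambda snr: snr >= -8.0
print(din_srt(listener))  # -> -9.0
```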

Digital enhancement of pronunciation assessment: Automated speech recognition and human raters

  • Miran Kim
    • 말소리와 음성과학 / Vol. 15 No. 2 / pp. 13-20 / 2023
  • This study explores the potential of automated speech recognition (ASR) in assessing English learners' pronunciation. We employed ASR technology, acknowledged for its impartiality and consistent results, to analyze speech audio files, including synthesized speech, both native-like English and Korean-accented English, and speech recordings from a native English speaker. Through this analysis, we establish baseline values for the word error rate (WER). These were then compared with those obtained for human raters in perception experiments that assessed the speech productions of 30 first-year college students before and after taking a pronunciation course. Our sub-group analyses revealed positive training effects for Whisper, an ASR tool, and human raters, and identified distinct human rater strategies in different assessment aspects, such as proficiency, intelligibility, accuracy, and comprehensibility, that were not observed in ASR. Despite such challenges as recognizing accented speech traits, our findings suggest that digital tools such as ASR can streamline the pronunciation assessment process. With ongoing advancements in ASR technology, its potential as not only an assessment aid but also a self-directed learning tool for pronunciation feedback merits further exploration.
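
The word error rate used as a baseline above is the word-level Levenshtein distance between reference and hypothesis, normalized by reference length. A standard dynamic-programming sketch (the example sentences are illustrative):

```python
def wer(reference, hypothesis):
    """Word error rate: (substitutions + deletions + insertions) / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j].
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution or match
    return dp[-1][-1] / len(ref)

print(wer("the rainbow is a division of white light",
          "the rain bow is division of white light"))  # -> 0.375
```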

A Real-Time Environmental Classification Algorithm Using a Neural Network for Hearing Aids

  • 서상완;육순현;남경원;한종희;권세윤;홍성화;김동욱;이상민;장동표;김인영
    • 대한의용생체공학회:의공학회지 / Vol. 34 No. 1 / pp. 8-13 / 2013
  • Persons with sensorineural hearing impairment have trouble hearing in noisy environments because of their deteriorated hearing levels and the low spectral resolution of the auditory system; they therefore use hearing aids to compensate for weakened hearing. Various algorithms for hearing loss compensation and environmental noise reduction have been implemented in hearing aids; however, their performance varies with the external sound situation, so it is important to tune the hearing aid's operation appropriately across a wide variety of situations. In this study, a sound classification algorithm applicable to hearing aids is proposed. The algorithm classifies sound situations into four categories: 1) speech-only, 2) noise-only, 3) speech-in-noise, and 4) music-only. It consists of two sub-parts: a feature extractor and a situation classifier. The former extracts seven characteristic features (short-time energy and zero-crossing rate in the time domain; spectral centroid, spectral flux, and spectral roll-off in the frequency domain; and mel-frequency cepstral coefficients and mel-band power values) from the recent input signals of two microphones, and the latter classifies the current sound situation. Experimental results showed that the proposed algorithm classified sound situations with an accuracy of over 94.4%. Based on these results, we believe the proposed algorithm can be applied to hearing aids to improve speech intelligibility in noisy environments.
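
Two of the time-domain features listed above, short-time energy and zero-crossing rate, can be computed directly per analysis frame. A minimal sketch; the example frame is illustrative:

```python
def short_time_energy(frame):
    """Mean squared amplitude of one analysis frame."""
    return sum(s * s for s in frame) / len(frame)

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ.

    Voiced speech tends toward low ZCR and fricatives toward high ZCR,
    which helps separate speech-like frames from steady noise.
    """
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if (a < 0) != (b < 0))
    return crossings / (len(frame) - 1)

frame = [0.5, -0.5, 0.5, -0.5, 0.5]  # alternating signs: maximal ZCR
print(short_time_energy(frame))      # -> 0.25
print(zero_crossing_rate(frame))     # -> 1.0
```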