• Title/Summary/Keyword: auditory word recognition

Search results: 21

The Korean Word Length Effect on Auditory Word Recognition (청각 단어 재인에서 나타난 한국어 단어길이 효과)

  • Choi Wonil;Nam Kichun
    • Proceedings of the KSPS conference
    • /
    • 2002.11a
    • /
    • pp.137-140
    • /
    • 2002
  • This study was conducted to examine the Korean word length effect on auditory word recognition. Linguistically, word length can be defined by several sublexical units, such as letters, phonemes, and syllables. To investigate which units are used in auditory word recognition, a lexical decision task was employed. Experiments 1 and 2 showed that syllable length affected response time and that syllable length interacted with word frequency. In sum, syllable length is an important variable in recognizing auditory words.

  • PDF

The Korean Word Length Effect on Auditory Word Recognition (청각단어 재인에서 나타난 한국어 단어 길이 효과)

  • Choi Wonil;Nam Kichun
    • MALSORI
    • /
    • no.44
    • /
    • pp.33-46
    • /
    • 2002
  • This study was conducted to examine the effect of word length on auditory word recognition. Word length can be defined by several sublexical units, such as letters, phonemes, and syllables. To find out which sublexical units are influential in auditory word recognition, an auditory lexical decision task was used. In Experiment 1, we examined the partial correlation between reaction time and the number of sublexical units, and in Experiment 2, we performed an ANOVA to determine which sublexical length variable was influential. From these two experiments, we concluded that syllable length is the most important variable in auditory word recognition.

  • PDF

Implementation of the Auditory Sense for the Smart Robot: Speaker/Speech Recognition (로봇 시스템에의 적용을 위한 음성 및 화자인식 알고리즘)

  • Jo, Hyun;Kim, Gyeong-Ho;Park, Young-Jin
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference
    • /
    • 2007.05a
    • /
    • pp.1074-1079
    • /
    • 2007
  • We introduce a speech/speaker recognition algorithm for isolated words. In the general case of speaker verification, a Gaussian Mixture Model (GMM) is used to model the feature vectors of reference speech signals, while the Dynamic Time Warping (DTW)-based template matching technique was proposed some years ago for isolated word recognition. We combine these two concepts in a single method and implement it in a real-time speaker/speech recognition system. With the proposed method, a small number of reference utterances (5 or 6 training repetitions) is guaranteed to be enough to build a reference model that satisfies 90% recognition performance.

  • PDF
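The DTW template matching mentioned in the abstract above aligns a test utterance against a stored reference, tolerating differences in speaking rate. A minimal generic sketch (not the paper's implementation; scalar features stand in for real cepstral vectors):

```python
# Dynamic time warping (DTW) distance between two feature sequences, the
# template-matching core for isolated-word recognition. Generic sketch;
# scalars stand in for per-frame cepstral vectors.

def dtw_distance(a, b):
    """Cumulative alignment cost between sequences a and b."""
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])              # local distance
            cost[i][j] = d + min(cost[i - 1][j],      # a advances
                                 cost[i][j - 1],      # b advances
                                 cost[i - 1][j - 1])  # both advance
    return cost[n][m]

# The same contour spoken at different rates still aligns perfectly:
ref  = [0.0, 1.0, 2.0, 1.0, 0.0]
test = [0.0, 0.0, 1.0, 2.0, 2.0, 1.0, 0.0]
print(dtw_distance(ref, test))  # 0.0
```

In a recognizer of this kind, each vocabulary word keeps a few reference templates, and the word whose template gives the lowest DTW cost wins.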

Isolated-Word Speech Recognition in Telephone Environment Using Perceptual Auditory Characteristic (인지적 청각 특성을 이용한 고립 단어 전화 음성 인식)

  • Choi, Hyung-Ki;Park, Ki-Young;Kim, Chong-Kyo
    • Journal of the Institute of Electronics Engineers of Korea TE
    • /
    • v.39 no.2
    • /
    • pp.60-65
    • /
    • 2002
  • In this paper, we propose the GFCC (gammatone filter frequency cepstrum coefficient) parameter, which is based on auditory characteristics, to achieve a better speech recognition rate. Speech recognition experiments were performed on isolated words acquired over the telephone network. To compare the GFCC parameter with other parameters, recognition experiments were also carried out using the MFCC and LPCC parameters. For each parameter, CMS (cepstral mean subtraction) was either applied or not, to compensate for channel distortion in the telephone network. The experimental results show that the recognition rate with the GFCC parameter is better than with the other parameters.
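The CMS step named in this abstract exploits the fact that a stationary telephone channel multiplies the spectrum, so in the cepstral (log) domain it becomes an additive offset that per-coefficient mean subtraction removes. A minimal sketch with made-up frame values:

```python
# Cepstral mean subtraction (CMS): subtract the per-coefficient mean
# over all frames of an utterance, cancelling any constant channel
# offset. Frame values below are invented for illustration.

def cepstral_mean_subtraction(frames):
    """frames: list of equal-length cepstral vectors (lists of floats)."""
    n = len(frames)
    dim = len(frames[0])
    mean = [sum(f[d] for f in frames) / n for d in range(dim)]
    return [[f[d] - mean[d] for d in range(dim)] for f in frames]

clean   = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
channel = [0.5, -0.25]                  # constant channel offset
noisy   = [[c + o for c, o in zip(f, channel)] for f in clean]
print(cepstral_mean_subtraction(noisy) == cepstral_mean_subtraction(clean))  # True
```

Because the offset is identical in every frame, it lands entirely in the mean and the normalized features are unchanged, which is why CMS compensates for channel distortion regardless of whether the underlying features are GFCC, MFCC, or LPCC.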

Phonological awareness skills in terms of visual and auditory stimulus and syllable position in typically developing children (청각적, 시각적 자극제시 방법과 음절위치에 따른 일반아동의 음운인식 능력)

  • Choi, Yu Mi;Ha, Seunghee
    • Phonetics and Speech Sciences
    • /
    • v.9 no.4
    • /
    • pp.123-128
    • /
    • 2017
  • This study compares performance on a syllable identification task according to the stimulus presentation method (auditory vs. visual) and syllable position. Twenty-two typically developing children (ages 4-6) participated. Three-syllable words were used to identify the first and final syllable of each word under auditory and visual stimulus conditions. For auditory presentation, the researcher presented the test word with oral speech only. For visual presentation, the test words were shown as pictures, and each child was asked to choose the appropriate pictures for the task. The results showed that phonological awareness performance was significantly higher when tasks were presented visually than when they were presented with auditory stimuli. Performance on first-syllable identification was also significantly higher than on final-syllable identification. When a phonological awareness task is presented with auditory stimuli, all steps of the speech production process must be engaged, so performance may be lowered by weakness at other stages of that process. When tasks are presented with visual picture stimuli, they can be performed directly at the phonological representation stage without going through peripheral auditory processing, phonological recognition, and motor programming. This study suggests that phonological awareness skills can differ depending on the stimulus presentation method and the syllable position of the task. Comparing performance between visual and auditory stimulus tasks can help identify where children show weakness and vulnerability in the speech production process.

The Design of Keyword Spotting System based on Auditory Phonetical Knowledge-Based Phonetic Value Classification (청음 음성학적 지식에 기반한 음가분류에 의한 핵심어 검출 시스템 구현)

  • Kim, Hack-Jin;Kim, Soon-Hyub
    • The KIPS Transactions:PartB
    • /
    • v.10B no.2
    • /
    • pp.169-178
    • /
    • 2003
  • This study addresses two issues: the classification of phone-likely units (PLUs), which is the foundation of Korean large-vocabulary speech recognition, and the effectiveness of Chiljongseong (7 final consonants) and Paljongseong (8 final consonants) in the Korean language. Phone-likely units classify phonemes phonetically according to the place and manner of articulation, and about 50 phone-likely units are used in Korean speech recognition. In this study, auditory phonetic knowledge was applied to the classification of phone-likely units, yielding 45 units: the vowels 'ㅔ, ㅐ' were classified as the phone-likely unit [ee]; 'ㅒ, ㅖ' as [ye]; and 'ㅚ, ㅙ, ㅞ' as [we]. Secondly, the Chiljongseong system of the draft for the unified spelling system, which is currently in use, and the Paljongseonggajokyong of the Korean script Haerye were examined. Whether the phonetic values of 'ㄷ' and 'ㅅ', among the phonemes used as final consonants of the Korean language, are the same has long been debated in the academic world. In this study, the transition stages of Korean consonants were investigated, and Chiljongseong and Paljongseonggajokyong were applied to speech recognition to verify their effectiveness. The experiments covered isolated word recognition and continuous speech recognition. For isolated word recognition, the PBW452 word set was used; about 50 men and women, divided into 5 groups, vocalized 50 words each. For the continuous speech recognition experiment, intended for the implemented stock exchange system, a sentence corpus of 71 stock exchange sentences and a speech corpus of those sentences were collected; 5 men and women each vocalized each sentence twice.
As a result, when Paljongseonggajokyong was used for the final consonants, recognition performance improved by an average of about 1.45%; when phone-likely units with both Paljongseonggajokyong and auditory phonetics were applied, the recognition rate increased by an average of 1.5% to 2.02%. In the continuous speech recognition experiment, recognition performance improved by an average of about 1% to 2% over the existing 49 or 56 phone-likely units.

The text-to-speech system assessment based on word frequency and word regularity effects (단어빈도와 단어규칙성 효과에 기초한 합성음 평가)

  • Nam Kichun;Choi Wonil;Lee Donghoon;Koo Minmo;Kim Jongjin
    • Proceedings of the KSPS conference
    • /
    • 2002.11a
    • /
    • pp.105-108
    • /
    • 2002
  • In the present study, the intelligibility of synthesized speech sounds was evaluated using psycholinguistic and fMRI techniques. To see the difference in recognizing words between natural and synthesized speech sounds, word regularity and word frequency were varied. The results of Experiments 1 and 2 showed that the intelligibility difference of the synthesized speech comes from word regularity: there was smaller activation of the auditory areas in the brain and slower recognition times for regular words.

  • PDF

Glottal Weighted Cepstrum for Robust Speech Recognition (잡음에 강한 음성 인식을 위한 성문 가중 켑스트럼에 관한 연구)

  • 전선도;강철호
    • The Journal of the Acoustical Society of Korea
    • /
    • v.18 no.5
    • /
    • pp.78-82
    • /
    • 1999
  • This paper studies the weighted cepstrum, which is broadly used for robust speech recognition. In particular, we propose a weighting function with an asymmetric glottal pulse shape, applied to the cepstrum extracted by PLP (Perceptual Linear Prediction) based on an auditory model. We also analyze this glottal weighted cepstrum in connection with the glottal pulse of a glottal model, obtaining speech features informed by both the glottal model and the auditory model. The isolated-word recognition rate is used to test the proposed method in car noise and street environments, and the performance of the glottal weighted cepstrum is compared with that of weighted cepstra extracted by LP (Linear Prediction) and by PLP. Computer simulation results show that the recognition rate of the proposed glottal weighted cepstrum is better than those of the other weighted cepstra.

  • PDF
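Cepstral weighting of the kind this abstract describes multiplies each cepstral coefficient c_k by a window w_k. The paper's asymmetric glottal-pulse-shaped window is not reproduced here; as a stand-in, the common raised-sine lifter illustrates the mechanics:

```python
import math

# Cepstral weighting ("liftering"): scale coefficient c_k by a window
# w_k that emphasises the mid-range coefficients. The raised-sine lifter
# below is a generic stand-in, NOT the paper's glottal-shaped window.

def lifter(cepstrum, L=22):
    """Raised-sine lifter: c_k -> (1 + (L/2) * sin(pi * k / L)) * c_k."""
    return [(1.0 + (L / 2.0) * math.sin(math.pi * k / L)) * c
            for k, c in enumerate(cepstrum)]

c = [1.0, 0.8, 0.6, 0.4, 0.2]   # toy cepstral vector
weighted = lifter(c)
print(weighted[0] == c[0])       # k=0 (sin 0) is left unchanged -> True
```

Any weighting scheme of this form, including the glottal-pulse-shaped one proposed in the paper, only changes the window values, not the structure of the computation.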

A Study on Development of a Hearing Impairment Simulator considering Frequency Selectivity and Asymmetrical Auditory Filter of the Hearing Impaired (난청인의 주파수 선택도와 비대칭적 청각 필터를 고려한 난청 시뮬레이터 개발에 관한 연구)

  • Joo, Sang-Ick;Kang, Hyun-Deok;Song, Young-Rok;Lee, Sang-Min
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.59 no.4
    • /
    • pp.831-840
    • /
    • 2010
  • In this paper, we propose a hearing impairment simulator that takes into account the reduced frequency selectivity and asymmetrical auditory filters of the hearing impaired, and we verify through experiments how reduced frequency selectivity and asymmetrical auditory filters affect speech perception. Reduced frequency selectivity was implemented by spectral smearing using LPC (linear predictive coding). The shapes of the auditory filters are asymmetrical and differ with center frequency; the auditory filters of hearing-impaired listeners differ from those of normal-hearing people, yielding different speech quality for speech passed through the filter. The experiments comprised subjective and objective tests. The subjective tests consisted of four kinds: a pure-tone test, an SRT (speech reception threshold) test, a WRS (word recognition score) test without spectral smearing, and a WRS test with spectral smearing. The hearing impairment simulator was tested on 9 subjects with normal hearing, and the amount of spectral smearing was controlled by the LPC order. The asymmetrical auditory filter of the proposed simulator was then simulated, and objective tests of the filter's performance were carried out using PESQ (perceptual evaluation of speech quality) and LLR (log likelihood ratio) on speech passed through the filter; the processed speech was evaluated for objective speech quality and distortion via the PESQ and LLR values. When hearing loss was simulated, the PESQ and LLR values differed greatly according to the asymmetrical auditory filter in the hearing impairment simulator.
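LPC-based spectral smearing starts from an all-pole model of the speech frame. The paper controls smearing via the LPC order; as a simpler stand-in under that caveat, the sketch below estimates LPC coefficients with Levinson-Durbin and then applies bandwidth expansion (a_k → γ^k·a_k), which broadens the spectral peaks and so mimics reduced frequency selectivity:

```python
# LPC spectral-smearing sketch. Assumption: bandwidth expansion
# (a_k -> gamma**k * a_k) stands in for the paper's smearing, which is
# actually controlled via the LPC order. Pure Python, no dependencies.

def autocorr(x, order):
    """Autocorrelation lags r[0..order] of signal x."""
    return [sum(x[i] * x[i + k] for i in range(len(x) - k))
            for k in range(order + 1)]

def levinson(r, order):
    """Levinson-Durbin recursion: LPC coefficients [1, a1, ..., ap]."""
    a = [1.0] + [0.0] * order
    err = r[0]
    for i in range(1, order + 1):
        acc = sum(a[j] * r[i - j] for j in range(i))
        k = -acc / err                      # reflection coefficient
        a_prev = a[:]
        for j in range(1, i):
            a[j] = a_prev[j] + k * a_prev[i - j]
        a[i] = k
        err *= (1.0 - k * k)
    return a

def smear(a, gamma=0.9):
    """Bandwidth expansion: widens formant peaks, smearing the spectrum."""
    return [c * (gamma ** k) for k, c in enumerate(a)]

# A decaying exponential obeys x[n] = 0.5*x[n-1], so LPC recovers a1 close
# to -0.5; smearing then shrinks the higher-order coefficients.
x = [0.5 ** n for n in range(32)]
a = levinson(autocorr(x, 2), 2)
print(round(a[1], 3))            # close to -0.5
print(smear(a, 0.5))
```

Smaller γ pulls the model's poles toward the origin, flattening the spectral envelope more strongly, which is the direction in which frequency selectivity is degraded.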

Case Study of Auditory Training for the Acquired Hearing loss Adult with Cochlear Implant (후천성 인공와우 이식 성인의 청능훈련 사례 연구)

  • Hong, Ha Na
    • 재활복지
    • /
    • v.17 no.4
    • /
    • pp.371-382
    • /
    • 2013
  • Recently, the number of people receiving cochlear implants has increased as health insurance coverage has expanded. Between 2005 and 2009, about 3,300 patients underwent cochlear implant surgery, and the number of adult recipients among them has been growing. Young children actively participate in auditory training programs after cochlear implant surgery, and many studies address auditory training in children, but studies of auditory training in adults are scarce. In this study, we conducted auditory training for a female adult (age 54) who received a cochlear implant after language acquisition, using the Ling 6 sounds test, standardized consonant, vowel, and sentence listening tests, and word recognition and confirmation tests. After 10 weeks of auditory training, she identified all phonemes in the Ling 6 sounds test and scored close to 100% on the standardized consonant, vowel, and sentence listening tests. She also improved her identification of real-world environmental sounds and real-world words by 57-95%. The results of this study show the need for auditory training programs for adults that are systematically and effectively planned and that consider the characteristics of the individual.