• Title/Abstract/Keyword: Speech Training

Search results: 579 items

Development of Reading Training Software Offering Visual-Auditory Cues for Patients with Motor Speech Disorder

  • 방동혁;전유용;양동권;길세기;권미선;이상민
    • Journal of Biomedical Engineering Research (의공학회지), Vol. 29, No. 4, pp. 307-315, 2008
  • In this paper, we developed visual-auditory cue software for the reading training of patients with motor speech disorders. Patients can use the visual and/or auditory cues during reading training to improve their symptoms. The software presents sentences together with visual-auditory cues; the sentences used for reading training were composed for modulation training following the advice of a professional in the speech therapy field. To support the training we developed two algorithms: one automatically detects the starting time of the speech spoken by the patient, and the other removes the auditory cue from speech recorded at the same time. Speech start-time detection was tested with 10 sentences from each of 6 subjects in four noisy environments, yielding a detection error of 7.042 ± 8.99 ms. The auditory-cue cancellation algorithm was tested on single-syllable speech from 6 subjects; cancellation improved the speech recognition rate by 25 ± 9.547% compared with the unprocessed recordings. The user satisfaction index of the developed program was rated as good.
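The abstract does not describe the start-time detection algorithm itself. As an illustration only, a minimal short-time-energy onset detector (a common baseline for this task) can be sketched as follows; the frame length, threshold ratio, and synthetic test signal are all assumptions, not the paper's method:

```python
import numpy as np

def detect_speech_start(signal, fs, frame_ms=10.0, threshold_ratio=0.1):
    """Return the estimated speech onset time in seconds.

    Splits the signal into short frames, computes each frame's energy,
    and reports the first frame whose energy exceeds a fixed fraction
    of the peak frame energy.
    """
    frame_len = int(fs * frame_ms / 1000.0)
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    energy = (frames ** 2).sum(axis=1)
    threshold = threshold_ratio * energy.max()
    onset_frame = int(np.argmax(energy >= threshold))
    return onset_frame * frame_len / fs

# Synthetic example: 0.5 s of low-level noise followed by a tone.
fs = 8000
rng = np.random.default_rng(0)
silence = 0.001 * rng.standard_normal(fs // 2)
tone = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)
onset = detect_speech_start(np.concatenate([silence, tone]), fs)
```

A real detector for noisy rooms would need an adaptive noise-floor estimate rather than a fixed ratio of the peak energy.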

Effect of Music Training on Categorical Perception of Speech and Music

  • L., Yashaswini;Maruthy, Sandeep
    • Journal of Audiology & Otology, Vol. 24, No. 3, pp. 140-148, 2020
  • Background and Objectives: The aim of this study was to evaluate the effect of music training on the characteristics of auditory perception of speech and music. Perception of speech and music stimuli was assessed across their respective stimulus continua, and the resultant plots were compared between musicians and non-musicians. Subjects and Methods: Thirty musicians with formal music training and twenty-seven non-musicians participated in the study (age: 20 to 30 years). They were assessed on identification of consonant-vowel syllables (/da/ to /ga/), vowels (/u/ to /a/), a vocal music note (/ri/ to /ga/), and an instrumental music note (/ri/ to /ga/) across the respective stimulus continua. Each continuum contained 15 tokens with equal step size between adjacent tokens. The resultant identification scores were plotted against each token and analyzed for the presence of a categorical boundary. Where a categorical boundary was found, the plots were analyzed using six parameters of categorical perception: the 50% crossover point, the lower edge of the categorical boundary, the upper edge of the categorical boundary, the phoneme boundary width, the slope, and the intercepts. Results: Overall, speech and music were perceived differently by musicians and non-musicians. In musicians, both speech and music were perceived categorically, while in non-musicians only speech was perceived categorically. Conclusions: The findings indicate that music is perceived categorically by musicians even when the stimulus is devoid of vocal tract features. They support the view that categorical perception is strongly influenced by training, and the results are discussed in light of the motor theory of speech perception.
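Several of the boundary parameters named above (crossover point, boundary edges, width) can be read off an identification curve by interpolation. The sketch below does this on an idealised logistic curve; the curve, the 75%/25% edge criteria, and the token values are illustrative assumptions, not the study's data:

```python
import numpy as np

def crossover_point(tokens, prob, level=0.5):
    """Linearly interpolate the token value at which a
    monotone-decreasing identification curve crosses `level`."""
    for i in range(len(tokens) - 1):
        p0, p1 = prob[i], prob[i + 1]
        if (p0 - level) * (p1 - level) <= 0 and p0 != p1:
            t = (p0 - level) / (p0 - p1)
            return tokens[i] + t * (tokens[i + 1] - tokens[i])
    return None

tokens = np.arange(1, 16)                  # 15-token continuum
# Idealised identification curve for the first category (e.g. /da/)
prob = 1.0 / (1.0 + np.exp(tokens - 8.0))  # logistic, midpoint at token 8

lower = crossover_point(tokens, prob, 0.75)  # lower edge of boundary
upper = crossover_point(tokens, prob, 0.25)  # upper edge of boundary
mid = crossover_point(tokens, prob, 0.50)    # 50% crossover point
width = upper - lower                        # phoneme boundary width
```

Slope and intercepts, the remaining two parameters, would come from fitting a sigmoid to the same curve rather than from interpolation.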


The Effect of the Number of Training Data on Speech Recognition

  • Lee, Chang-Young
    • The Journal of the Acoustical Society of Korea, Vol. 28, No. 2E, pp. 66-71, 2009
  • In practical applications of speech recognition, one of the fundamental questions is how much training data should be provided for a specific task. Although plenty of training data would undoubtedly enhance system performance, collecting it is costly. It is therefore of crucial importance to determine the smallest amount of training data that affords a given level of accuracy. For this purpose, we investigate the effect of the number of training tokens on speaker-independent speech recognition of isolated words using FVQ/HMM. The results show that the error rate is roughly inversely proportional to the number of training tokens and grows linearly with the vocabulary size.
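The reported relationship (error rate roughly proportional to 1/N in the training count) can be illustrated by fitting err ≈ a/N + b with ordinary least squares. The measurement pairs below are invented for illustration and are not the paper's results:

```python
import numpy as np

# Hypothetical measurements: (training tokens per word, error rate in %)
n_train = np.array([5, 10, 20, 40, 80])
err = np.array([12.1, 6.4, 3.5, 2.0, 1.2])

# Fit err ≈ a / n + b by least squares in the variable x = 1/n.
x = 1.0 / n_train
A = np.vstack([x, np.ones_like(x)]).T
(a, b), *_ = np.linalg.lstsq(A, err, rcond=None)

# Extrapolate: expected error with 160 training tokens per word.
predicted_160 = a / 160 + b
```

Under this model, b is the floor the error rate approaches no matter how much data is added, which is the practical quantity for deciding when collecting more data stops paying off.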

Dynamically Weighted Loss Based Domain Adversarial Training for Children's Speech Recognition

  • 마승희
    • The Journal of the Acoustical Society of Korea (한국음향학회지), Vol. 41, No. 6, pp. 647-654, 2022
  • Although applications of children's speech recognition are increasing, the shortage of high-quality data remains an obstacle to improving its performance. This paper proposes a new method that uses additional adult speech data to improve children's speech recognition. The proposed method performs Transformer-based domain adversarial training with a dynamically weighted loss, in order to handle the age-group data imbalance that grows as the amount of adult training data increases. Specifically, the degree of class imbalance within each mini-batch is quantified during training, and the loss function is defined so that the class with less data receives a larger weight. The experiments verify the effectiveness of the proposed domain adversarial training under varying asymmetry between the adult and child training data. The results confirm that the proposed method achieves higher children's speech recognition performance than conventional domain adversarial training under all conditions in which age-group asymmetry exists in the training data.
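The abstract states only that the smaller class in each mini-batch receives a larger loss weight; the exact weighting formula is not given. A minimal sketch of one such scheme, inverse-frequency weights computed per mini-batch and applied to a domain-classifier cross-entropy, is shown below. The normalisation choice is an assumption:

```python
import numpy as np

def dynamic_class_weights(labels, n_classes=2):
    """Per-class weights for one mini-batch: classes with fewer
    examples get proportionally larger weights, normalised so a
    perfectly balanced batch yields weight 1.0 for every class."""
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    present = counts > 0
    weights = np.zeros(n_classes)
    weights[present] = len(labels) / (present.sum() * counts[present])
    return weights

def weighted_domain_loss(probs, labels, weights):
    """Weighted cross-entropy for the domain (adult vs. child) head.
    `probs` holds per-example class probabilities, one row each."""
    eps = 1e-12
    per_example = -np.log(probs[np.arange(len(labels)), labels] + eps)
    return float(np.mean(weights[labels] * per_example))

# Imbalanced batch: 6 adult frames (label 0), 2 child frames (label 1).
batch = np.array([0, 0, 0, 0, 0, 0, 1, 1])
w = dynamic_class_weights(batch)
```

In an adversarial setup this loss would be back-propagated through a gradient-reversal layer so the encoder learns age-invariant features; that part is omitted here.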

The Effects of Pitch Increasing Training (PIT) on Voice and Speech of a Patient with Parkinson's Disease: A Pilot Study

  • Lee, Ok-Bun;Jeong, Ok-Ran;Shim, Hong-Im;Jeong, Han-Jin
    • Speech Sciences (음성과학), Vol. 13, No. 1, pp. 95-105, 2006
  • The primary goal of therapeutic intervention for dysarthric speakers is to increase speech intelligibility, so deciding which features are critical to intelligibility is very important in speech therapy. The purpose of this study was to determine the effects of pitch increasing training (PIT) on the speech of a subject with Parkinson's disease (PD). The PIT program focuses on increasing pitch while a vowel is sustained at constant loudness, set somewhat higher than the habitual loudness. A 67-year-old female with PD participated in the study. Speech therapy was conducted in 4 sessions (200 minutes) over one week. Acoustic, perceptual, and speech naturalness evaluations were performed before and after treatment, and a speech and voice satisfaction index (SVSI) was obtained after treatment. The results showed improvements in voice quality and speech naturalness. In addition, the SVSI ratings indicated a positive relationship between improved speech production and the satisfaction of the patient and caregivers.


Robust Histogram Equalization Using Compensated Probability Distribution

  • Kim, Sung-Tak;Kim, Hoi-Rin
    • Malsori (대한음성학회지: 말소리), Vol. 55, pp. 131-142, 2005
  • A mismatch between training and test conditions often causes a drastic decrease in the performance of speech recognition systems. In this paper, non-linear transformation techniques based on histogram equalization in the acoustic feature space are studied for reducing this mismatch. The purpose of histogram equalization (HEQ) is to convert the probability distribution of the test speech into the probability distribution of the training speech. Conventional histogram equalization methods consider only the probability distribution of the test speech; for noise-corrupted test speech, however, that distribution is itself distorted. A transformation function obtained from the distorted distribution may bring about mis-transformation of feature vectors, degrading the performance of histogram equalization. This paper therefore proposes a new method of calculating a noise-removed probability distribution, under the assumption that the CDF of noisy speech feature vectors consists of a speech component and a noise component, and uses this compensated probability distribution in the HEQ process. In the AURORA 2 framework, the proposed method reduced the error rate by over 44% in the clean training condition compared to the baseline system. Under the multi-condition training setup, the proposed method also outperformed the baseline system.
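The conventional HEQ step the paper builds on, mapping each test value through the test data's empirical CDF and then through the inverse CDF of the training data, can be sketched as below. The noise-compensated distribution that is the paper's actual contribution is not reproduced here, and the Gaussian channel mismatch is a synthetic stand-in:

```python
import numpy as np

def histogram_equalize(test_feat, train_ref):
    """Map test feature values onto the training distribution by
    CDF matching: rank-based empirical CDF of the test data, then
    the inverse empirical CDF (quantiles) of the training data."""
    ranks = np.argsort(np.argsort(test_feat))       # 0..n-1 ranks
    cdf = (ranks + 0.5) / len(test_feat)            # empirical CDF values
    return np.quantile(train_ref, cdf)              # inverse training CDF

# Synthetic mismatch: test features shifted and scaled vs. training.
rng = np.random.default_rng(1)
train = rng.standard_normal(5000)                   # "clean training" feature
test = 2.0 * rng.standard_normal(2000) + 3.0        # mismatched channel
eq = histogram_equalize(test, train)
```

In an actual front end this transformation is applied per feature dimension (e.g. per cepstral coefficient), typically with the training CDF tabulated once offline.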


A Study on the Recognition Performance of Connected-Digit Telephone Speech Using MFCC Feature Parameters Obtained from a Filter Bank Adapted to the Training Speech Database

  • 정성윤;김민성;손종목;배건성;강점자
    • Proceedings of the KSPS Conference (대한음성학회 2003년 5월 학술대회), pp. 119-122, 2003
  • In general, triangular filters are used in the filter bank when MFCCs are computed from the spectrum of a speech signal. In [1], a new feature extraction approach was proposed that instead uses filter shapes obtained from the spectrum of training speech data: principal component analysis is applied to the spectra of the training data to obtain the filter coefficients. In this paper, we carry out speech recognition experiments using the approach of [1] on a large amount of telephone speech, namely the Korean connected-digit telephone speech database released by SITEC. Experimental results are discussed together with our findings.
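The PCA step described above can be sketched as follows: the leading principal components of the training spectra serve as data-driven filter shapes in place of triangular filters. The random stand-in spectra, filter count, and the direct use of raw components as filters are assumptions; the exact filter construction in [1] may differ:

```python
import numpy as np

def pca_filterbank(train_spectra, n_filters=23):
    """Derive data-driven filter shapes as the leading principal
    components of the training spectra (rows = frames, cols = bins)."""
    centered = train_spectra - train_spectra.mean(axis=0, keepdims=True)
    # SVD of the centred data: rows of vt are the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:n_filters]                 # shape: (n_filters, n_bins)

def apply_filterbank(spectrum, filters):
    """Filter-bank outputs for one spectral frame."""
    return filters @ spectrum

rng = np.random.default_rng(2)
train_spectra = rng.random((200, 129))    # stand-in magnitude spectra
fb = pca_filterbank(train_spectra, n_filters=23)
outputs = apply_filterbank(rng.random(129), fb)
```

In an MFCC pipeline these outputs would then be log-compressed and passed through a DCT, exactly as with a triangular filter bank.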


Development of an Integrated Speech Training Aid for the Hearing Impaired

  • 박상희;김동준
    • Journal of Biomedical Engineering Research (의공학회지), Vol. 13, No. 4, pp. 275-284, 1992
  • In this study, a speech training aid that can display the vocal tract shape and other speech parameters together in real time in a single system is implemented, and a self-training program for the system is developed. To estimate the vocal tract shape, the speech production process is assumed to follow an AR model. Through LPC analysis, the vocal tract shape, intensity, and log spectrum are calculated, and the fundamental frequency and nasality are measured using vibration sensors.
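The AR-model route from speech to vocal tract shape can be sketched as: autocorrelation, Levinson-Durbin recursion for the LPC and reflection (PARCOR) coefficients, then area ratios from the reflection coefficients. The area-function sign convention and the synthetic AR(2) test signal are assumptions; the paper's exact analysis parameters are not given:

```python
import numpy as np

def levinson_durbin(r, order):
    """Levinson-Durbin recursion: LPC polynomial coefficients `a`
    (a[0] = 1) and reflection coefficients `k` from autocorrelations."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    e = r[0]
    k = np.zeros(order)
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        ki = -acc / e
        k[i - 1] = ki
        a_prev = a.copy()
        for j in range(1, i):
            a[j] = a_prev[j] + ki * a_prev[i - j]
        a[i] = ki
        e *= (1.0 - ki * ki)
    return a, k

def vocal_tract_areas(k, lips_area=1.0):
    """Lossless-tube area function from reflection coefficients,
    using the convention A_next = A * (1 - k) / (1 + k)."""
    areas = [lips_area]
    for ki in k:
        areas.append(areas[-1] * (1.0 - ki) / (1.0 + ki))
    return np.array(areas)

# Synthetic AR(2) signal: x[t] = 0.9 x[t-1] - 0.5 x[t-2] + noise.
rng = np.random.default_rng(3)
n = 20000
x = np.zeros(n)
noise = rng.standard_normal(n)
for t in range(2, n):
    x[t] = 0.9 * x[t - 1] - 0.5 * x[t - 2] + noise[t]

order = 2
r = np.array([np.dot(x[:n - lag], x[lag:]) / n for lag in range(order + 1)])
a, k = levinson_durbin(r, order)
areas = vocal_tract_areas(k)
```

For real speech an order of 8-12 at 8 kHz is typical, giving a tube model with one section per coefficient; the resulting area profile is what such an aid displays in real time.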


A VQ Codebook Design Based on Phonetic Distribution for Distributed Speech Recognition

  • 오유리;윤재삼;이길호;김홍국;류창선;구명완
    • Proceedings of the KSPS Conference (대한음성학회 2006년 춘계 학술대회), pp. 37-40, 2006
  • In this paper, we propose a VQ codebook design for speech recognition feature parameters in order to improve the performance of a distributed speech recognition system. For context-dependent HMMs, the VQ codebook should be correlated with the phonetic distribution of the HMM training data. Thus, instead of using all the training data, we focus on a method of selecting training data based on phonetic distribution for an efficient VQ codebook design. In speech recognition experiments on the Aurora 4 database, the distributed speech recognition system employing a VQ codebook designed by the proposed method reduced the word error rate (WER) by 10% compared with a VQ codebook trained on the whole training data.
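The two-stage idea, select VQ training frames according to a phonetic distribution, then train the codebook on the subset, can be sketched as below. Selecting an equal number of frames per phone is one simple instantiation chosen here for illustration; the paper's actual target distribution, and all data in the driver, are assumptions:

```python
import numpy as np

def select_balanced(features, phone_labels, per_phone=50, seed=0):
    """Select an equal number of frames per phone class so the VQ
    training set follows a controlled phonetic distribution."""
    rng = np.random.default_rng(seed)
    chosen = []
    for phone in np.unique(phone_labels):
        idx = np.flatnonzero(phone_labels == phone)
        take = min(per_phone, len(idx))
        chosen.append(rng.choice(idx, size=take, replace=False))
    return features[np.concatenate(chosen)]

def train_codebook(data, n_codewords=8, n_iter=20, seed=0):
    """Plain k-means VQ codebook training on the selected frames."""
    rng = np.random.default_rng(seed)
    codebook = data[rng.choice(len(data), n_codewords, replace=False)]
    for _ in range(n_iter):
        dist = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        assign = dist.argmin(axis=1)
        for c in range(n_codewords):
            members = data[assign == c]
            if len(members):
                codebook[c] = members.mean(axis=0)
    return codebook

rng = np.random.default_rng(4)
feats = rng.standard_normal((1000, 13))      # stand-in MFCC frames
phones = rng.integers(0, 10, size=1000)      # stand-in phone labels
subset = select_balanced(feats, phones, per_phone=30)
codebook = train_codebook(subset, n_codewords=8)
```

Production VQ training would typically use the LBG splitting algorithm rather than random initial codewords, but the selection step is independent of that choice.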
