• Title/Abstract/Keyword: Simulated speech

Search results: 70 items (processing time: 0.023 s)

모의 음성 모델을 이용한 효과적인 구개인두부전증 환자 음성 인식 (Effective Recognition of Velopharyngeal Insufficiency (VPI) Patient's Speech Using Simulated Speech Model)

  • 성미영;권택균;성명훈;김우일
    • 한국정보통신학회논문지 / Vol. 19, No. 5 / pp.1243-1250 / 2015
  • This paper introduces an effective VPI speech recognition technique as one step toward restoring VPI patients' speech to normal speech. To make effective use of a small amount of VPI patient speech for model adaptation, we propose using simulated speech from normal speakers as the prior model for speaker adaptation. Speaker adaptation with the MLLR technique yields an average recognition accuracy of 83.60%, and using the simulated speech model as the prior model for speaker adaptation brings an average improvement of 6.38% in recognition accuracy. The phone recognition results demonstrate that the proposed speaker adaptation scheme produces a substantial improvement in speech recognition performance. These results show that the proposed speaker adaptation technique based on a simulated speech model is effective for building a higher-performing VPI speech recognizer under conditions where it is difficult to collect a large amount of VPI patient speech.
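
The adaptation step described above (a prior model built from simulated normal-speaker speech, then adapted to a VPI speaker with MLLR) can be illustrated with a minimal, mean-only global MLLR update. The sketch below is a hedged simplification assuming diagonal covariances and pre-computed frame-to-Gaussian posteriors (e.g. from a forced alignment); the function and variable names are hypothetical and this is not the paper's implementation.

```python
import numpy as np

def estimate_global_mllr(means, variances, posteriors, frames):
    """Estimate a global mean-only MLLR transform W = [b A].

    means      : (M, D) Gaussian means of the prior (simulated-speech) model
    variances  : (M, D) diagonal covariances of those Gaussians
    posteriors : (T, M) frame-level Gaussian occupation probabilities from the
                 adaptation data (assumed to come from a forced alignment)
    frames     : (T, D) adaptation feature vectors (e.g. MFCCs of VPI speech)
    Returns W of shape (D, D + 1) such that the adapted mean is W @ [1, mu].
    """
    M, D = means.shape
    xi = np.hstack([np.ones((M, 1)), means])             # extended means, (M, D+1)
    gamma_m = posteriors.sum(axis=0)                      # occupancy per Gaussian, (M,)
    W = np.zeros((D, D + 1))
    for i in range(D):                                    # one row of W per feature dim
        inv_var = 1.0 / variances[:, i]
        G = (xi * (gamma_m * inv_var)[:, None]).T @ xi    # accumulator G_i
        k = xi.T @ (inv_var * (posteriors.T @ frames[:, i]))  # accumulator k_i
        W[i] = np.linalg.solve(G, k)
    return W

def adapt_means(means, W):
    """Apply the estimated transform to every Gaussian mean of the prior model."""
    xi = np.hstack([np.ones((means.shape[0], 1)), means])
    return xi @ W.T
```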

The role of prosody in dialect authentication: Simulating Masan dialect with Seoul speech segments

  • Yoon, Kyu-Chul
    • 대한음성학회:학술대회논문집 / 대한음성학회 2007년도 한국음성과학회 공동학술대회 발표논문집 / pp.234-239 / 2007
  • The purpose of this paper is to examine the viability of simulating one dialect with the speech segments of another dialect through prosody cloning. The hypothesis is that, among Korean regional dialects, it is not the segmental differences but the prosodic differences that play the major role in authentic dialect perception. This work supports the hypothesis by simulating Masan dialect with speech segments from Seoul dialect. The dialect simulation was performed by transplanting the prosodic features of Masan utterances onto the same utterances produced by a Seoul speaker. Thus, the simulated Masan utterances were composed of Seoul speech segments, but their prosody came from the original Masan utterances. The prosodic features involved were the fundamental frequency contour, the segmental durations, and the intensity contour. The simulated Masan utterances were evaluated by four native Masan speakers, and the role of prosody in dialect authentication and speech synthesis is discussed.
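
The transplantation procedure described above can be sketched for the F0 contour alone using Praat Manipulation objects through the parselmouth Python bridge. This is a hedged illustration under several assumptions: the file names are placeholders, the two recordings are taken to be time-aligned, and the duration and intensity transplantation steps used in the paper are omitted.

```python
import parselmouth
from parselmouth.praat import call

# Placeholder files: a Masan utterance and the same sentence read by a Seoul speaker.
masan = parselmouth.Sound("masan_utterance.wav")
seoul = parselmouth.Sound("seoul_utterance.wav")

# Build Praat Manipulation objects (10 ms time step, 75-600 Hz pitch range).
masan_manip = call(masan, "To Manipulation", 0.01, 75, 600)
seoul_manip = call(seoul, "To Manipulation", 0.01, 75, 600)

# Take the F0 contour of the Masan utterance ...
masan_pitch_tier = call(masan_manip, "Extract pitch tier")

# ... graft it onto the Seoul segments, and resynthesize with overlap-add (PSOLA).
call([masan_pitch_tier, seoul_manip], "Replace pitch tier")
simulated = call(seoul_manip, "Get resynthesis (overlap-add)")
simulated.save("seoul_segments_with_masan_f0.wav", "WAV")
```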

마이크로폰 배열에서 독립벡터분석 기법을 이용한 잡음음성의 음질 개선 (Microphone Array Based Speech Enhancement Using Independent Vector Analysis)

  • 왕씽양;전성일;배건성
    • 말소리와 음성과학 / Vol. 4, No. 4 / pp.87-92 / 2012
  • Speech enhancement aims to improve speech quality by removing background noise from noisy speech. Independent vector analysis is a frequency-domain independent component analysis method known to be free from the frequency-bin permutation problem in blind source separation from multi-channel inputs. This paper proposes a new method of microphone array based speech enhancement that combines independent vector analysis and beamforming techniques. Independent vector analysis is used to separate speech and noise components from multi-channel noisy speech, and delay-and-sum beamforming is used to determine the enhanced speech among the separated signals. To verify the effectiveness of the proposed method, experiments were carried out on computer-simulated multi-channel noisy speech at various signal-to-noise ratios, with PESQ and output signal-to-noise ratio used as objective speech quality measures. Experimental results show that the proposed method is superior to conventional microphone array based noise removal approaches such as GSC beamforming.
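
The selection step mentioned above (delay-and-sum beamforming used to decide which separated signal is the enhanced speech) can be sketched as follows. The IVA separation itself is treated as given, and the correlation-based pick is one plausible selection rule rather than necessarily the paper's exact criterion; the array geometry and look direction are assumptions.

```python
import numpy as np

def delay_and_sum(mics, fs, mic_positions, direction_deg, c=343.0):
    """Frequency-domain delay-and-sum beamformer for a linear microphone array.

    mics          : (n_mics, n_samples) multi-channel noisy recording
    fs            : sampling rate in Hz
    mic_positions : (n_mics,) microphone coordinates along the array axis in metres
    direction_deg : assumed look direction of the target speaker
    """
    n_mics, n_samples = mics.shape
    # Relative plane-wave delays (sign depends on the chosen geometry convention).
    delays = mic_positions * np.cos(np.deg2rad(direction_deg)) / c
    spectra = np.fft.rfft(mics, axis=1)
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    # Compensate each channel's delay with a phase shift, then average the channels.
    aligned = spectra * np.exp(2j * np.pi * freqs[None, :] * delays[:, None])
    return np.fft.irfft(aligned.mean(axis=0), n=n_samples)

def pick_speech_source(separated, beamformed):
    """Choose the IVA output most correlated with the beamformer output.

    separated  : (n_sources, n_samples) signals returned by an IVA routine
    beamformed : (n_samples,) delay-and-sum reference signal
    """
    scores = [abs(np.corrcoef(s, beamformed)[0, 1]) for s in separated]
    return separated[int(np.argmax(scores))]
```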

Real-time implementation and performance evaluation of speech classifiers in speech analysis-synthesis

  • Kumar, Sandeep
    • ETRI Journal / Vol. 43, No. 1 / pp.82-94 / 2021
  • In this work, six voiced/unvoiced speech classifiers based on the autocorrelation function (ACF), average magnitude difference function (AMDF), cepstrum, weighted ACF (WACF), zero crossing rate and energy of the signal (ZCR-E), and neural networks (NNs) have been simulated and implemented in real time using the TMS320C6713 DSP starter kit. These speech classifiers have been integrated into a linear-predictive-coding-based speech analysis-synthesis system, and their performance has been compared in terms of voiced/unvoiced classification accuracy, speech quality, and computation time. The classification accuracy and speech quality results show that the NN-based speech classifier performs better than the ACF-, AMDF-, cepstrum-, WACF- and ZCR-E-based speech classifiers in both clean and noisy environments. The computation time results show that the AMDF-based speech classifier is computationally the simplest and requires less computation time than the other classifiers, whereas the NN-based speech classifier requires the most.
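
Two of the simpler classifiers compared above, the autocorrelation-based decision and the ZCR-plus-energy decision, can be combined in a per-frame rule like the sketch below. The thresholds are placeholder values, not the ones tuned in the paper, and the 8 kHz pitch-lag range is an assumption.

```python
import numpy as np

def frame_features(frame):
    """Zero-crossing rate, short-time energy, and peak normalized autocorrelation."""
    zcr = np.mean(np.abs(np.diff(np.signbit(frame).astype(int))))
    energy = np.mean(frame.astype(float) ** 2)
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    ac = ac / (ac[0] + 1e-12)
    # Search lags corresponding to plausible pitch (roughly 50-400 Hz at 8 kHz).
    peak = ac[20:160].max() if len(ac) > 160 else ac[1:].max()
    return zcr, energy, peak

def is_voiced(frame, zcr_thr=0.15, energy_thr=1e-4, acf_thr=0.3):
    """Voiced/unvoiced decision: periodic, energetic frames with low ZCR are voiced."""
    zcr, energy, peak = frame_features(frame)
    return (peak > acf_thr) and (energy > energy_thr) and (zcr < zcr_thr)
```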

DNN-HMM 기반 시스템을 이용한 효과적인 구개인두부전증 환자 음성 인식 (Effective Recognition of Velopharyngeal Insufficiency (VPI) Patient's Speech Using DNN-HMM-based System)

  • 윤기무;김우일
    • 한국정보통신학회논문지 / Vol. 23, No. 1 / pp.33-38 / 2019
  • In this paper, to recognize VPI patients' speech effectively, we build a speech recognition system with a DNN-HMM hybrid structure and compare its performance with a conventional GMM-HMM-based speech recognition system. An initial model is trained on a clean speech database of normal speakers, and simulated VPI speech from normal speakers is used to build a base model for speaker adaptation to VPI patients' speech. For speaker adaptation to a VPI patient, the weight matrices of individual DNN layers are partially retrained, and the resulting system outperforms the GMM-HMM recognizer. To improve performance further, DNN model adaptation is applied; the LIN-based DNN model adaptation yields an average improvement of 2.35% in recognition accuracy. In addition, when only a small amount of data is available, the DNN-HMM-based speech recognition approach shows better VPI speech recognition performance than the GMM-HMM-based approach.
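
LIN-based adaptation, as used above, prepends a trainable linear input layer to an otherwise frozen speaker-independent DNN and trains only that layer on the small amount of adaptation data. The PyTorch sketch below is a generic illustration of that idea; the base acoustic model, the data loader, and the hyperparameters are placeholders, not the authors' configuration.

```python
import torch
import torch.nn as nn

class LinAdaptedModel(nn.Module):
    """Linear Input Network (LIN) adaptation: a speaker-specific linear layer
    in front of a frozen speaker-independent DNN acoustic model."""

    def __init__(self, base_dnn: nn.Module, feat_dim: int):
        super().__init__()
        self.lin = nn.Linear(feat_dim, feat_dim)
        nn.init.eye_(self.lin.weight)     # start from the identity transform
        nn.init.zeros_(self.lin.bias)     # so adaptation begins at the unadapted model
        self.base = base_dnn
        for p in self.base.parameters():  # freeze the speaker-independent DNN
            p.requires_grad = False

    def forward(self, x):
        return self.base(self.lin(x))

def adapt(model, loader, epochs=5, lr=1e-3):
    """Train only the LIN layer on a small set of (features, senone-label) pairs."""
    opt = torch.optim.SGD(model.lin.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for feats, labels in loader:      # placeholder adaptation-data loader
            opt.zero_grad()
            loss = loss_fn(model(feats), labels)
            loss.backward()
            opt.step()
    return model
```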

구개인두부전증 환자의 한국어 음성 코퍼스 구축 방안 연구 (Research on Construction of the Korean Speech Corpus in Patient with Velopharyngeal Insufficiency)

  • 이지은;김욱은;김광현;성명훈;권택균
    • Korean Journal of Otorhinolaryngology-Head and Neck Surgery / Vol. 55, No. 8 / pp.498-507 / 2012
  • Background and Objectives: We aimed to develop a Korean version of the velopharyngeal insufficiency (VPI) speech corpus system. Subjects and Method: After developing a 3-channel simultaneous speech recording device capable of recording nasal/oral and normal compound speech separately, voice data were collected from VPI patients aged 10 years or older, with or without a history of surgery or prior speech therapy. These were compared with a control group in which VPI was simulated by inserting a French-3 Nelaton tube through both nostrils into the nasopharynx and pulling the soft palate anteriorly to varying degrees. Three transcribers were involved: a speech therapist transcribed the voice files into text, a second transcriber graded speech intelligibility and severity, and a third tagged the types and onset times of misarticulations. The database was composed of three main tables covering (1) the speaker's demographics, (2) the condition of the recording system, and (3) the transcripts. All of these were interfaced with the Praat voice analysis program, which enables the user to extract exact transcribed phrases for analysis. Results: In the simulated VPI group, the higher the severity of VPI, the higher the nasalance score obtained. In addition, we could identify the vocal energy patterns that characterize hypernasality and compensation in the nasal, oral, and compound sounds spoken by VPI patients, as opposed to those of the normal control group. Conclusion: With the Korean version of the VPI speech corpus system, patients' common difficulties and articulation tendencies can be evaluated objectively. By comparing these data with those of normal voices, the mispronunciations and dysarticulations of patients with VPI can be corrected.
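
Because the corpus records nasal and oral channels separately, a nasalance score of the kind reported in the results can be computed from the channel energies. The sketch below is a generic frame-wise formulation (nasal energy over nasal plus oral energy, as a percentage) with placeholder array names; it is not the exact Praat-interfaced analysis pipeline of the paper.

```python
import numpy as np

def nasalance(nasal, oral, fs, frame_ms=20):
    """Frame-wise nasalance in percent: nasal energy / (nasal + oral energy) * 100.

    nasal, oral : time-aligned single-channel recordings from the nasal and oral mics
    fs          : sampling rate in Hz
    """
    n = int(fs * frame_ms / 1000)
    n_frames = min(len(nasal), len(oral)) // n
    scores = []
    for i in range(n_frames):
        seg_n = nasal[i * n:(i + 1) * n].astype(float)
        seg_o = oral[i * n:(i + 1) * n].astype(float)
        e_n, e_o = np.sum(seg_n ** 2), np.sum(seg_o ** 2)
        if e_n + e_o > 0:
            scores.append(100.0 * e_n / (e_n + e_o))
    return np.array(scores)   # the mean gives an utterance-level nasalance score
```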

인공 청각 장치의 음성신호 처리와 자극방법의 시뮬레이션 (Simulation of speech processing and coding strategy for cochlear implants)

  • 김영훈;박광석
    • 대한의용생체공학회:학술대회논문집 / 대한의용생체공학회 1991년도 추계학술대회 / pp.30-33 / 1991
  • The objective of the speech processor for cochlear implants is to deliver speech information to the central nervous system. In this study, we present a method for simulating the speech processing and coding strategies of cochlear implants, and we evaluated two different processing methods in 12 adults with normal hearing. Formant sinusoidal coding performed better than formant pulse coding in the consonant perception test and in learning effects (p < 0.05).

Speech Perception and Gap Detection Performance of Single-Sided Deafness under Noisy Conditions

  • Kwak, Chanbeom;Kim, Saea;Lee, Jihyeon;Seo, Youngjoon;Kong, Taehoon;Han, Woojae
    • Journal of Audiology & Otology / Vol. 23, No. 4 / pp.197-203 / 2019
  • Background and Objectives: Many studies have reported no benefit in sound localization but improved speech understanding in noise after treating patients with single-sided deafness (SSD); furthermore, their performance showed large individual differences. The present study aimed to measure speech perception and gap detection in noise in SSD patients to better understand the nature of their hearing. Subjects and Methods: Nine SSD patients with different onsets and periods of hearing deprivation, and 20 young adults with normal hearing and with simulated conductive hearing loss serving as control groups, completed speech perception in noise (SPIN) and Gap-In-Noise (GIN) tests. The SPIN test measured how many presented sentences were understood at +5 and -5 dB signal-to-noise ratios. The GIN test asked listeners to detect the shortest gap in white noise containing gaps of different lengths. Results: Compared with the normal-hearing and simulated-hearing-loss groups, the SSD group performed much more poorly on both the SPIN and GIN tests, supporting central auditory plasticity in SSD patients. Rather than the length of deafness, the large individual variance indicated that congenital SSD patients performed better than acquired SSD patients on both measurements. Conclusions: The results suggest that comprehensive assessment should be implemented before any treatment of SSD patients, considering their onset time and etiology, although these findings need to be generalized with a larger sample size.
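
The +5 and -5 dB conditions of the SPIN test amount to mixing the target sentences with noise at a fixed signal-to-noise ratio. A minimal sketch of that mixing step, with placeholder signals, is shown below.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise power ratio equals `snr_db`, then mix."""
    noise = noise[:len(speech)]                   # assume the noise is at least as long
    p_speech = np.mean(speech.astype(float) ** 2)
    p_noise = np.mean(noise.astype(float) ** 2)
    target_p_noise = p_speech / (10.0 ** (snr_db / 10.0))
    scaled_noise = noise * np.sqrt(target_p_noise / (p_noise + 1e-12))
    return speech + scaled_noise

# Example (placeholder arrays): the two SNR conditions used in the SPIN test.
# noisy_plus5  = mix_at_snr(sentence, babble, +5)
# noisy_minus5 = mix_at_snr(sentence, babble, -5)
```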

모의 지능로봇에서 음성신호에 의한 감정인식 (Speech Emotion Recognition by Speech Signals on a Simulated Intelligent Robot)

  • 장광동;권오욱
    • 대한음성학회:학술대회논문집 / 대한음성학회 2005년도 추계 학술대회 발표논문집 / pp.163-166 / 2005
  • We propose a speech emotion recognition method for a natural human-robot interface. In the proposed method, emotion is classified into six classes: angry, bored, happy, neutral, sad, and surprised. Features for an input utterance are extracted from statistics of phonetic and prosodic information. Phonetic information includes log energy, shimmer, formant frequencies, and Teager energy; prosodic information includes pitch, jitter, duration, and rate of speech. Finally, a pattern classifier based on Gaussian support vector machines decides the emotion class of the utterance. We recorded speech commands and dialogs uttered 2 m away from the microphones in 5 different directions. Experimental results show that the proposed method yields 59% classification accuracy, while human classifiers give about 50% accuracy, which confirms that the proposed method achieves performance comparable to that of humans.
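
The classifier described above (utterance-level statistics of phonetic and prosodic features fed to a Gaussian-kernel SVM) can be sketched with scikit-learn. Feature extraction is reduced here to log energy, zero-crossing rate, and a crude autocorrelation pitch estimate; the paper's full feature set (shimmer, jitter, formants, Teager energy, speech rate) is omitted, so treat this purely as an illustrative skeleton with assumed frame sizes.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def utterance_features(signal, fs, frame_len=400, hop=200):
    """Mean and std of log energy, ZCR, and an ACF pitch estimate over an utterance."""
    feats = []
    for start in range(0, len(signal) - frame_len, hop):
        frame = signal[start:start + frame_len].astype(float)
        energy = np.log(np.sum(frame ** 2) + 1e-10)
        zcr = np.mean(np.abs(np.diff(np.signbit(frame).astype(int))))
        ac = np.correlate(frame, frame, mode="full")[frame_len - 1:]
        lag = np.argmax(ac[int(fs / 400):int(fs / 50)]) + int(fs / 400)  # 50-400 Hz
        feats.append([energy, zcr, fs / lag])
    feats = np.array(feats)
    return np.concatenate([feats.mean(axis=0), feats.std(axis=0)])

# X: one feature vector per recorded utterance, y: one of the six emotion labels.
# clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
# clf.fit(X_train, y_train); accuracy = clf.score(X_test, y_test)
```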
