• Title/Abstract/Keywords: Speech Synthesize

Search results: 41 items

청각 모델에 기초한 음성 특징 추출에 관한 연구 (A study on the speech feature extraction based on the hearing model)

  • 김바울;윤석현;홍광석;박병철
    • 전자공학회논문지B / Vol. 33B, No. 4 / pp.131-140 / 1996
  • In this paper, we propose a method that extracts speech features using a hearing model through signal processing techniques. The proposed method consists of the following procedures: normalization of the short-time speech block by its maximum value, multi-resolution analysis using the discrete wavelet transform and re-synthesis using the discrete inverse wavelet transform, differentiation after analysis and synthesis, and full-wave rectification and integration. In order to verify the performance of the proposed speech feature in speech recognition tasks, Korean digit recognition experiments were carried out using both DTW and VQ-HMM. The results showed that, with DTW, the recognition rates were 99.79% and 90.33% for the speaker-dependent and speaker-independent tasks respectively, and with VQ-HMM, the rates were 96.5% and 81.5% respectively. This indicates that the proposed speech feature has the potential to serve as a simple and efficient feature for recognition tasks.
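The six-step pipeline in this abstract can be sketched as follows. This is a minimal Python illustration using a single-level Haar wavelet; the paper's actual wavelet, decomposition depth, block length, and integration window are not given here, so those choices are assumptions:

```python
import numpy as np

def haar_analysis(x):
    """One level of Haar wavelet analysis: approximation and detail bands."""
    x = x[:len(x) // 2 * 2]
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def haar_synthesis(a, d):
    """Inverse of haar_analysis; reconstruction is exact."""
    y = np.empty(2 * len(a))
    y[0::2] = (a + d) / np.sqrt(2)
    y[1::2] = (a - d) / np.sqrt(2)
    return y

def hearing_feature(block, win=8):
    """Apply the abstract's six steps to one short-time speech block."""
    block = block / np.max(np.abs(block))        # 1. normalize by maximum value
    a, d = haar_analysis(block)                  # 2. multi-resolution analysis
    y = haar_synthesis(a, d)                     # 3. re-synthesis (inverse DWT)
    y = np.diff(y, prepend=y[0])                 # 4. differentiation
    y = np.abs(y)                                # 5. full-wave rectification
    return np.convolve(y, np.ones(win) / win, mode='same')  # 6. integration
```

The resulting envelope-like trace per block would then be sampled or pooled into the feature vector handed to DTW or VQ-HMM.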


한국어 음성합성기의 성능 향상을 위한 합성 단위의 유무성음 분리 (Separation of Voiced Sounds and Unvoiced Sounds for Corpus-based Korean Text-To-Speech)

  • 홍문기;신지영;강선미
    • 음성과학 / Vol. 10, No. 2 / pp.7-25 / 2003
  • Predicting the right prosodic elements is a key factor in improving the quality of synthesized speech. Prosodic elements include break, pitch, duration, and loudness. Pitch, which is realized as fundamental frequency (F0), is the element most closely related to the quality of the synthesized speech. However, previous methods for predicting F0 reveal some problems. If voiced and unvoiced sounds are not correctly classified, the result is wrong pitch prediction, the wrong triphone unit when synthesizing voiced and unvoiced sounds, and audible clicks or buzzing. These errors typically occur at transitions from a voiced sound to an unvoiced sound or from an unvoiced sound to a voiced sound. The problem cannot be resolved by grammatical rules, and it strongly affects the synthesized sound. Therefore, to reliably obtain correct pitch values, in this paper we propose a new model for predicting and classifying voiced and unvoiced sounds using the CART tool.
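The voiced/unvoiced decision that the paper learns with CART can be illustrated with a hand-written two-feature rule. The short-time energy and zero-crossing-rate features and the thresholds below are illustrative stand-ins, not the paper's trained tree:

```python
import numpy as np

def frame_features(frame):
    """Short-time energy and zero-crossing rate for one speech frame."""
    energy = np.mean(frame ** 2)
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0
    return energy, zcr

def classify_vuv(frames, energy_thr=0.01, zcr_thr=0.25):
    """Label each frame 'V' (voiced) or 'U' (unvoiced).

    Voiced speech tends to have high energy and a low zero-crossing
    rate; a learned CART model would replace this fixed rule.
    """
    labels = []
    for frame in frames:
        energy, zcr = frame_features(frame)
        labels.append('V' if energy > energy_thr and zcr < zcr_thr else 'U')
    return labels
```

A real system would train the tree on labeled frames so that the split points adapt to the corpus instead of being fixed by hand.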


음성압축을 위한 전처리기법의 비교 분석에 관한 연구 (A Study on an Analysis and Comparison of Preprocessing Techniques for Speech Compression)

  • 장경아;민소연;배명진
    • 음성과학 / Vol. 10, No. 4 / pp.125-136 / 2003
  • Speech coding techniques have been studied not only to reduce complexity and bit rate but also to improve sound quality. CELP-type vocoders, used in several standards, deliver good sound quality even at low bit rates. In this paper, unlike conventional vocoders, the input speech is preprocessed to reduce the bit rate. Different kinds of parameters can be used for this preprocessing, so this paper compares them to find the parameter most appropriate for the vocoder. The parameters are used to synthesize the speech rather than to encode or decode it, so we propose a simple algorithm that does not noticeably affect processing or computation time. The parameters used in the preprocessing step are speaking rate, duration, and the PSOLA technique.
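As a rough illustration of the duration-side preprocessing mentioned here, the sketch below time-stretches speech with plain overlap-add. It is a simplified, non-pitch-synchronous stand-in for PSOLA, and the frame and hop sizes are arbitrary assumptions:

```python
import numpy as np

def ola_time_stretch(x, rate, frame=256, hop=64):
    """Change speech duration by overlap-add (OLA).

    Analysis frames are read every `hop * rate` input samples and
    overlap-added at a fixed `hop` on the output, so `rate` > 1 shortens
    the signal and `rate` < 1 lengthens it. Dividing by the summed
    window removes the windowing gain.
    """
    window = np.hanning(frame)
    out_len = int(len(x) / rate) + frame
    out = np.zeros(out_len)
    norm = np.zeros(out_len)
    pos, out_pos = 0.0, 0
    while int(pos) + frame <= len(x) and out_pos + frame <= out_len:
        out[out_pos:out_pos + frame] += x[int(pos):int(pos) + frame] * window
        norm[out_pos:out_pos + frame] += window
        pos += hop * rate
        out_pos += hop
    norm[norm == 0] = 1.0
    return out / norm
```

True PSOLA would place the synthesis frames pitch-synchronously on epoch marks, which avoids the phasiness that fixed-hop OLA can introduce in voiced speech.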


음성합성을 이용한 병적 음성의 치료 결과에 대한 예측 (Prediction of Post-Treatment Outcome of Pathologic Voice Using Voice Synthesis)

  • 이주환;최홍식;김영호;김한수;최현승;김광문
    • 대한후두음성언어의학회지 / Vol. 14, No. 1 / pp.30-39 / 2003
  • Background and Objectives: Patients with pathologic voice are often concerned about the recovery of their voice after surgery. In this investigation, we controlled the values of three parameters of the Dr. Speech Science voice synthesis program, namely jitter, shimmer, and NNE (normalized noise energy), which distinguish one person's voice from another's, and devised a method to synthesize the predicted post-operative voice. Subjects and Method: Values of vocal jitter, vocal shimmer, and glottal noise were measured from the voices of 10 vocal cord paralysis and 10 vocal polyp patients 1 week prior to and 1 month after surgery. With the Dr. Speech Science voice synthesis program, we synthesized an 'ae' vowel closely matching each patient's preoperative and postoperative voice by controlling the values of jitter, shimmer, and glottal noise. We then analyzed the synthesized voices and compared them with the pre- and postoperative voices. Results: 1) After inputting the preoperative and corrected values of jitter, shimmer, and glottal noise into the voice synthesis program, voices identical to the vocal polyp patients' pre- and postoperative voices within statistical significance were synthesized. 2) After eliminating synergistic effects among the three parameters, we were able to synthesize voices identical to the vocal cord paralysis patients' preoperative voices. 3) After inputting only slightly increased jitter and shimmer into the synthesis program, we were able to synthesize voices identical to the vocal cord paralysis patients' postoperative voices. Conclusion: Voices synthesized with the Dr. Speech Science program were identical to the patients' actual pre- and postoperative voices, so clinicians will be able to give patients more information, and better patient cooperation can therefore be expected.
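Jitter and shimmer, two of the three controlled parameters, have standard "local" definitions computable from per-cycle measurements. The sketch below uses those textbook formulas on already-extracted cycle data; it is not Dr. Speech Science's internal implementation, and NNE is omitted:

```python
import numpy as np

def jitter_percent(periods):
    """Local jitter: mean absolute difference between consecutive pitch
    periods, as a percentage of the mean period."""
    periods = np.asarray(periods, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(periods))) / np.mean(periods)

def shimmer_percent(peak_amplitudes):
    """Local shimmer: mean absolute difference between consecutive cycle
    peak amplitudes, as a percentage of the mean amplitude."""
    amps = np.asarray(peak_amplitudes, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(amps))) / np.mean(amps)
```

In the study's setup, these measured values (plus glottal noise) are the knobs dialed into the synthesizer to produce the predicted postoperative 'ae' vowel.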


LPC Vocoder 의 Excitation Source 개선에 관한 연구 (An Enhanced Excitation Source in LPC Vocoder)

  • 전지하;이근영
    • 대한전기학회:학술대회논문집 / 대한전기학회 1987년도 전기.전자공학 학술대회 논문집(II) / pp.881-883 / 1987
  • This paper describes a new technique for the generation of excitation sources in an LPC system. We synthesize a speech signal using several excitation sources, selected according to residual signal energy and ZCR (zero-crossing rate). One of the excitation sources mixes a double-differentiated glottal waveform source with a noise source. As a result, we obtained a better speech signal than that produced by a conventional LPC system.
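A mixed excitation of the kind described can be sketched as below. The glottal hump shape, the blending through a single `voiced_ratio` weight, and all parameter values are illustrative assumptions; the paper derives its source decision from residual energy and ZCR rather than from a fixed mixing weight:

```python
import numpy as np

def excitation(n_samples, period, voiced_ratio, rng):
    """Mixed LPC excitation: a twice-differentiated glottal pulse train
    blended with white noise. `voiced_ratio` in [0, 1] sets the mix."""
    t = np.arange(n_samples)
    hump = np.sin(np.pi * (t % period) / period) ** 2   # glottal-like humps
    pulse_train = np.diff(hump, n=2, prepend=hump[:2])  # double differentiation
    noise = rng.standard_normal(n_samples)
    return voiced_ratio * pulse_train + (1.0 - voiced_ratio) * noise
```

This excitation would then be filtered by the LPC synthesis filter to produce speech; a conventional LPC vocoder instead switches hard between a pulse train and noise.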


'ArtSim'을 이용한 모음의 조음점 추정에 관한 연구 (Estimation of Articulatory Characteristics of Vowels Using 'ArtSim')

  • 김대현;조철우
    • 대한음성학회지:말소리 / No. 35-36 / pp.121-129 / 1998
  • In this paper, the articulatory simulator 'ArtSim' is used as a tool in experiments examining the articulatory characteristics of 6 different vowels. Each vowel is defined by articulatory points taken from its vocal tract area function and tongue shape. Each point is varied systematically to synthesize vowels, and the synthesized sounds are evaluated by human listeners. Finally, the distribution of each vowel within the vowel space is obtained. The experimental results verify that our articulatory simulator can be used effectively to investigate the articulatory characteristics of speech.


연속 음성으로부터 추출한 CVC 음성세그먼트 기반의 음성합성 (Speech Synthesis Based on CVC Speech Segments Extracted from Continuous Speech)

  • 김재홍;조관선;이철희
    • 한국음향학회지 / Vol. 18, No. 7 / pp.10-16 / 1999
  • In this paper, we propose a concatenation-based speech synthesizer that uses CVC speech segments extracted from an unscripted (not specifically designed) continuous-speech corpus. Continuous speech reflects coarticulation effects between phonemes relatively well and contains natural intonation variation, so choosing a synthesis unit that exploits these properties makes natural speech synthesis possible. Among the various synthesis units, the CVC unit has the advantages that concatenation occurs in the stable region of a consonant, so quality degradation at the joins is small, and that it captures the coarticulation between the vowel and its surrounding consonants. In this paper, we classify the combinations of sentence segments that arise when CVC units are used into four types, analyze the statistical properties of each and the quality of the synthesized speech, and propose a method that uses new composite synthesis units based on CVC. Using the proposed method, CVC speech segments were extracted from an unscripted continuous-speech corpus and a variety of example sentences were synthesized. When a required CVC segment did not exist in the corpus, a demisyllable segment was substituted. Experimental results showed that fairly natural speech synthesis is possible with a continuous-speech corpus of about 100 Mbytes.
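The CVC-first selection with demisyllable fallback can be sketched as a greedy lookup. The corpus representation as a set of unit names and the one-phoneme-per-character convention are simplifying assumptions for illustration:

```python
def select_units(phones, cvc_corpus):
    """Greedy CVC-first unit selection with demisyllable fallback.

    `phones` is a phoneme string (one phoneme per character, a
    simplifying convention); `cvc_corpus` is the set of CVC units
    available. Consecutive CVC units overlap on their shared consonant,
    which is where the waveforms would be joined; a missing CVC unit is
    replaced by a demisyllable (two-phoneme) unit.
    """
    units, i = [], 0
    while i < len(phones) - 2:
        cvc = phones[i:i + 3]
        if cvc in cvc_corpus:
            units.append(('CVC', cvc))
            i += 2                  # advance to the shared consonant
        else:
            units.append(('DEMI', phones[i:i + 2]))
            i += 1
    return units
```

A full system would also score candidate instances of each unit by prosodic and spectral match before concatenation, which this sketch leaves out.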


가변 운율 모델링을 이용한 고음질 감정 음성합성기 구현에 관한 연구 (A Study on Implementation of Emotional Speech Synthesis System using Variable Prosody Model)

  • 민소연;나덕수
    • 한국산학기술학회논문지 / Vol. 14, No. 8 / pp.3992-3998 / 2013
  • This paper concerns a method for generating more varied synthesized speech by adding an emotional speech corpus to a high-quality, large-corpus-based speech synthesizer. We built the emotional speech corpus in a form usable by a waveform-concatenation synthesizer, so that synthesized speech is produced through the same unit-selection process as the existing neutral speech corpus. For emotional speech synthesis, text is entered with tags; when matching data exist at the intonation-phrase level, the phrase is synthesized as emotional speech, and otherwise as neutral speech. Among the elements that make up prosody is the break, and breaks in emotional speech are more irregular than in neutral speech, which makes it difficult to use the break information generated by the synthesizer directly for emotional speech synthesis. To solve this problem, we applied variable break modeling [3]. Experiments used a Japanese synthesizer, and the results show that natural emotional synthesized speech can be obtained while using the neutral speech break-prediction module unchanged.
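The tag-driven routing between the emotional and neutral corpora can be sketched as below. The `<emo>` tag name and the exact-match phrase lookup are illustrative assumptions, since the listing does not specify the paper's tag syntax:

```python
import re

def route_phrases(tagged_text, emotional_corpus):
    """Pick a corpus for each phrase of tagged input.

    Text wrapped in <emo>...</emo> is synthesized from the emotional
    corpus when a matching phrase exists there; otherwise (including all
    untagged text) the neutral corpus is used, mirroring the
    intonation-phrase-level fallback described in the abstract.
    """
    plan = []
    for emo, plain in re.findall(r'<emo>(.*?)</emo>|([^<]+)', tagged_text):
        if emo:
            plan.append((emo, 'emotional' if emo in emotional_corpus else 'neutral'))
        elif plain.strip():
            plan.append((plain.strip(), 'neutral'))
    return plan
```

The variable-break step would then adjust the neutral break predictions for phrases routed to the emotional corpus.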

파형보간 코더에서 파라미터간 거리차를 이용한 가변비트율 기법 (A New Variable Bit Rate Scheme for Waveform Interpolative Coders)

  • 양희식;정상배;한민수
    • 대한음성학회지:말소리 / No. 65 / pp.81-91 / 2008
  • In this paper, we propose a new variable-bit-rate speech coder based on the waveform interpolation concept. After the coder extracts all parameters, the distortion between each current parameter and the value predicted for it by extrapolation from the past two parameters is measured. A parameter is not transmitted unless its distortion exceeds a preset threshold. At the decoder side, each non-transmitted parameter is reconstructed by the same extrapolation from the past two parameters and used to synthesize the signal. In this way, we can reduce the total bit rate by 26% while keeping the speech quality degradation below 0.1 PESQ score.
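The send/skip rule can be sketched for a single parameter track. The linear two-point extrapolation follows the abstract's description, while the threshold value and the scalar-parameter simplification are assumptions:

```python
def vbr_encode(params, threshold):
    """Send/skip decision for one parameter track of a WI coder.

    Each current value is predicted by linear extrapolation from the two
    most recent *decoded* values; the value is transmitted only when the
    prediction error exceeds `threshold`, and the decoder reuses the
    prediction otherwise. Tracking the decoded values rather than the
    originals keeps encoder and decoder in sync.
    """
    decoded = [params[0], params[1]]          # first two frames always sent
    sent = [(0, params[0]), (1, params[1])]
    for i in range(2, len(params)):
        predicted = 2 * decoded[-1] - decoded[-2]   # linear extrapolation
        if abs(params[i] - predicted) > threshold:
            sent.append((i, params[i]))
            decoded.append(params[i])
        else:
            decoded.append(predicted)
    return sent, decoded
```

Slowly varying tracks cost almost no bits under this rule, which is where the reported 26% saving would come from.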


Discrimination of Synthesized English Vowels by American and Korean Listeners

  • Yang, Byung-Gon
    • 음성과학 / Vol. 13, No. 1 / pp.7-27 / 2006
  • This study explored the discrimination of synthesized English vowel pairs by twenty-seven American and Korean male and female listeners. The average formant values of nine monophthongs produced by ten American English male speakers were used to synthesize the vowels. Subjects were then explicitly instructed to respond in AX discrimination tasks in which the standard vowel was followed by another with an increment or decrement of the original formant values. The highest and lowest formant values judged to be of the same vowel quality were collected and compared to examine patterns of vowel discrimination. Results showed that the American and Korean groups discriminated the vowel pairs almost identically, and the center formant frequency values of their high and low boundaries fell almost exactly on those of the standards. In addition, the acceptable range of the same vowel quality was similar across the language and gender groups. The acceptable thresholds of each vowel formed ovals that maintain perceptual contrast with adjacent vowels. The results suggest that nonnative speakers with high English proficiency can match native speakers' performance in discriminating vowel pairs with a shorter inter-stimulus interval. Pedagogical implications of these findings are discussed.
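The stimuli described, vowels synthesized from average formant values and then shifted up or down, can be sketched with a simple cascade formant synthesizer. The resonator structure, bandwidth, F0, and sampling rate below are illustrative choices, not the study's synthesis settings:

```python
import numpy as np

def resonator(x, freq, bw, fs):
    """Second-order IIR resonator (one formant) with unity gain at DC."""
    r = np.exp(-np.pi * bw / fs)
    b = 2 * r * np.cos(2 * np.pi * freq / fs)
    c = -r * r
    a = 1 - b - c
    y = np.zeros_like(x)
    y1 = y2 = 0.0
    for n, xn in enumerate(x):
        y[n] = a * xn + b * y1 + c * y2
        y2, y1 = y1, y[n]
    return y

def synthesize_vowel(formants, f0=120, fs=16000, dur=0.1, bw=80):
    """Drive cascaded formant resonators with an impulse train at F0.

    Raising or lowering entries of `formants` yields the shifted
    comparison stimuli of an AX discrimination pair.
    """
    n = int(fs * dur)
    source = np.zeros(n)
    source[::int(fs / f0)] = 1.0      # impulse train at (approximately) F0
    out = source
    for freq in formants:
        out = resonator(out, freq, bw, fs)
    return out
```

An AX trial would play `synthesize_vowel(standard)` followed by the same call with one formant stepped up or down, and record whether the listener hears them as the same vowel.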
