• Title/Abstract/Keyword: Speech Synthesis

Search results: 381 items (processing time: 0.03 s)

Analysis-By-Synthesis/OverLap-Add (ABS/OLA) Sinusoidal Model을 이용한 음성변환과 연결음성합성 (Speech Modification and Concatenative Speech Synthesis by using the Analysis-By-Synthesis/OverLap-Add (ABS/OLA) Sinusoidal Model)

  • 구자형
    • 한국음향학회: 학술대회논문집 / 한국음향학회 1998년도 제15회 음성통신 및 신호처리 워크샵 (KSCSP 98, 15권 1호) / pp. 339-343 / 1998
  • The sinusoidal model is applied across a wide range of speech signal processing tasks; it can produce high-quality synthetic speech and is easy to manipulate. In this paper, time-scale and pitch-scale modification were performed using the Analysis-by-Synthesis/Overlap-Add (ABS/OLA) sinusoidal model. To improve speech quality, time-scale modification applied different stretching ratios to steady-state and transient regions, and pitch modification used the Improved Cepstrum, which estimates the spectral envelope better than the conventional LPC method. In addition, to overcome the phase mismatch that arises when concatenating speech units taken from different contexts, the units were shifted along the time axis before synthesis so that their fundamental-frequency components were aligned. Experimental results show that the proposed methods yield better speech quality than conventional approaches. (An illustrative overlap-add sketch follows this entry.)

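The entry above centers on time-scale modification with overlap-add synthesis. As a rough illustration of the overlap-add idea only (not the paper's ABS/OLA sinusoidal analysis, and without its separate handling of steady and transient regions), here is a minimal NumPy sketch that re-spaces windowed frames to stretch or compress a signal in time; the frame length, hop sizes, and test tone are illustrative assumptions.

```python
import numpy as np

def ola_time_stretch(x, rate, frame_len=1024, analysis_hop=256):
    """Naive overlap-add time-scale modification.
    rate > 1 shortens the signal (faster), rate < 1 lengthens it."""
    window = np.hanning(frame_len)
    synthesis_hop = max(1, int(round(analysis_hop / rate)))
    n_frames = max(1, (len(x) - frame_len) // analysis_hop + 1)
    out = np.zeros((n_frames - 1) * synthesis_hop + frame_len)
    norm = np.zeros_like(out)                 # accumulated window gain
    for i in range(n_frames):
        frame = x[i * analysis_hop : i * analysis_hop + frame_len]
        s = i * synthesis_hop
        out[s : s + frame_len] += window * frame
        norm[s : s + frame_len] += window
    return out / np.maximum(norm, 1e-8)       # undo the window overlap gain

# Example: stretch a 1 kHz test tone to 125% of its length (rate = 0.8).
sr = 16000
tone = np.sin(2 * np.pi * 1000 * np.arange(sr) / sr)
stretched = ola_time_stretch(tone, rate=0.8)
```

Plain OLA like this introduces phase discontinuities between overlapped frames; sinusoidal models such as ABS/OLA avoid that by re-synthesizing each frame from estimated sinusoids.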

정서음성 합성을 위한 예비연구 (Preliminary Study on Synthesis of Emotional Speech)

  • 한영호;이서배;이정철;김형순
    • 대한음성학회: 학술대회논문집 / 대한음성학회 2003년도 10월 학술대회지 / pp. 181-184 / 2003
  • This paper explores the perceptual relevance of acoustic correlates of emotional speech by using a formant synthesizer. The focus is on the role of mean pitch, pitch range, speech rate, and phonation type in synthesizing emotional speech. The results back up traditional impressionistic observations, but they suggest that some phonation types need further refinement before they can be synthesized convincingly. (An illustrative sketch of these prosodic controls follows this entry.)

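The perceptual factors examined above (mean pitch, pitch range, speech rate) can be pictured as simple transforms of an F0 contour before it drives a formant synthesizer. The sketch below is only an illustration under that assumption; it does not reproduce the paper's synthesizer settings, and phonation type is left out because it concerns the voice source rather than the contour. All function and parameter names here are invented for the example.

```python
import numpy as np

def adjust_prosody(f0_hz, mean_shift=1.0, range_scale=1.0, rate=1.0):
    """Return a modified F0 contour (0 marks unvoiced frames).
    mean_shift  - multiplies the mean pitch (e.g. 1.2 = 20% higher)
    range_scale - expands/compresses excursions around the mean
    rate        - speech-rate factor (>1 = faster, shorter contour)"""
    voiced = f0_hz > 0
    mean_f0 = f0_hz[voiced].mean()
    out = f0_hz.astype(float).copy()
    out[voiced] = mean_f0 * mean_shift + (f0_hz[voiced] - mean_f0) * range_scale
    # Speech-rate change: resample the contour along the time axis.
    n_out = max(2, int(round(len(out) / rate)))
    idx = np.linspace(0, len(out) - 1, n_out)
    return np.interp(idx, np.arange(len(out)), out)

# Example: a "happier-sounding" setting -- higher mean, wider range, faster rate.
f0 = np.r_[np.zeros(10), 120 + 20 * np.sin(np.linspace(0, np.pi, 80)), np.zeros(10)]
happy_f0 = adjust_prosody(f0, mean_shift=1.15, range_scale=1.4, rate=1.1)
```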

한-일 호텔예약 음성번역 시스템 - 한국 프론트데스크 측 - (Korean-Japanese Speech Translation System for Hotel Reservation - Korean front desk side -)

  • 이영직;김영섬;김회린;류준형;이정철;한남용;안영목;최운천
    • 한국음향학회: 학술대회논문집 / 한국음향학회 1995년도 제12회 음성통신 및 신호처리 워크샵 논문집 (SCAS 12권 1호) / pp. 204-207 / 1995
  • Recently, ETRI developed a Korean-Japanese speech translation system for the Korean front desk side of a hotel reservation task. The system consists of three sub-systems, responsible for speech recognition, machine translation, and speech synthesis respectively. This paper introduces the background of the system development and describes the functions of the sub-systems. (A structural sketch of this cascade follows this entry.)

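The system described above is a classic cascade: speech recognition feeding machine translation feeding speech synthesis. The sketch below shows that structure only, with placeholder component interfaces; none of the class or method names come from the ETRI system, and the stubs must be replaced with real engines before use.

```python
from dataclasses import dataclass

@dataclass
class Utterance:
    audio: bytes          # raw input speech
    language: str         # e.g. "ko"

class Recognizer:
    """Speech recognition sub-system (placeholder)."""
    def transcribe(self, utt: Utterance) -> str:
        raise NotImplementedError

class Translator:
    """Machine translation sub-system (placeholder)."""
    def translate(self, text: str, src: str, tgt: str) -> str:
        raise NotImplementedError

class Synthesizer:
    """Speech synthesis sub-system (placeholder)."""
    def speak(self, text: str, language: str) -> bytes:
        raise NotImplementedError

def translate_speech(utt: Utterance, asr: Recognizer, mt: Translator,
                     tts: Synthesizer, target_lang: str = "ja") -> bytes:
    """Cascade: recognize the source speech, translate the text, synthesize speech."""
    text = asr.transcribe(utt)
    translated = mt.translate(text, utt.language, target_lang)
    return tts.speak(translated, target_lang)
```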

Chasing ideas in phonetics

  • Ladefoged, Peter
    • 음성과학 / Vol. 5, No. 2 / pp. 7-16 / 1999
  • Starting as a poet, I learned about the sounds of words with David Abercrombie. Then, remembering my background in physics, I moved to studying acoustic phonetics and speech synthesis. From there I learned about psychology and how to test perceptual theories. A meeting with a physiologist led to work on the use of the respiratory muscles in speech. Later I landed in Africa, teaching English phonetics and learning about African languages. When I went to UCLA to set up a lab I was able to find bright students who helped make computer models of the vocal tract and taught me linguistic theory. And I was able to continue wandering around the world, describing the sounds of a wide range of languages.


스펙트럼의 변동계수를 이용한 잡음에 강인한 음성 구간 검출 (Noise-Robust Speech Detection Using The Coefficient of Variation of Spectrum)

  • 김영민;한민수
    • 대한음성학회지: 말소리 / No. 48 / pp. 107-116 / 2003
  • This paper deals with a new parameter for voice detection, which is used in many areas of speech engineering such as speech synthesis, speech recognition, and speech coding. The CV (coefficient of variation) of the speech spectrum, together with other feature parameters, is used for speech detection; the CV is calculated only over a specific range of the spectrum. Average magnitude and spectral magnitude are also employed to improve the detector's performance. Experimental results show that the proposed voice detector outperforms the conventional energy-based detector in terms of error measurements. (An illustrative sketch of the CV cue follows this entry.)

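As a rough illustration of the cue described above, the coefficient of variation (standard deviation over mean) of the magnitude spectrum in a restricted band tends to be high for peaky speech spectra and low for flatter noise spectra. The sketch below computes a per-frame CV decision; the band limits, frame sizes, and threshold are illustrative assumptions rather than the paper's settings, and a real detector would combine the CV with energy features and temporal smoothing as the abstract describes.

```python
import numpy as np

def cv_vad(x, sr, frame_len=400, hop=160, band=(300.0, 3400.0), cv_thresh=1.0):
    """Return a boolean speech/non-speech flag per frame using the spectral CV."""
    window = np.hamming(frame_len)
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / sr)
    band_idx = (freqs >= band[0]) & (freqs <= band[1])
    decisions = []
    for start in range(0, len(x) - frame_len + 1, hop):
        frame = x[start:start + frame_len] * window
        mag = np.abs(np.fft.rfft(frame))[band_idx]
        cv = mag.std() / (mag.mean() + 1e-12)   # spectral coefficient of variation
        decisions.append(cv > cv_thresh)        # peaky (harmonic) spectra score high
    return np.array(decisions)

# Example on a noisy test signal: a tone burst over white noise.
rng = np.random.default_rng(0)
sr = 16000
sig = 0.05 * rng.standard_normal(sr)
sig[4000:12000] += np.sin(2 * np.pi * 440 * np.arange(8000) / sr)
flags = cv_vad(sig, sr)
```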

MPEG-4 TTS 현황 및 전망 (Status and Prospects of MPEG-4 TTS)

  • 한민수
    • 전자공학회지 / Vol. 24, No. 9 / pp. 91-98 / 1997
  • Text-to-Speech (TTS) technology has been attracting a lot of interest among speech engineers because of its benefits: possible application areas include talking computers, spoken emergency alarm systems, speech output devices for the speech-impaired, and so on. Many researchers have made significant progress in speech synthesis techniques for their own languages, and as a result the quality of current speech synthesizers is believed to be acceptable to normal users. This is partly why the MPEG group decided to include TTS technology as one of its MPEG-4 functionalities. ETRI has made major contributions to the current MPEG-4 TTS specification appearing in various MPEG-4 documents, with relatively minor contributions from AT&T and NTT. The main MPEG-4 functionalities presently available are: 1) use of the original prosody for synthesized speech output, 2) trick-mode functions for general users without breaking the prosody of synthesized speech, 3) interoperability with Facial Animation (FA) tools, and 4) dubbing a moving/animated picture with lip-shape pattern information.


음성기반 멀티모달 사용자 인터페이스의 사용성 평가 방법론 (Usability Test Guidelines for Speech-Oriented Multimodal User Interface)

  • 홍기형
    • 대한음성학회지: 말소리 / No. 67 / pp. 103-120 / 2008
  • Basic components of multimodal interfaces, such as speech recognition, speech synthesis, gesture recognition, and multimodal fusion, have their own technological limitations. For example, the accuracy of speech recognition decreases for large vocabularies and in noisy environments. In spite of these limitations, there are many applications in which speech-oriented multimodal user interfaces are very helpful to users. However, in order to expand the application areas of speech-oriented multimodal interfaces, we have to design such interfaces with a focus on usability. In this paper, we introduce usability and user-centered design methodology in general. There has been much work on evaluating spoken dialogue systems; we summarize PARADISE (PARAdigm for DIalogue System Evaluation) and PROMISE (PROcedure for Multimodal Interactive System Evaluation), which are generalized evaluation frameworks for voice and multimodal user interfaces. We then present usability components for speech-oriented multimodal user interfaces and usability testing guidelines that can be used in a user-centered multimodal interface design process. (A sketch of a PARADISE-style score follows this entry.)

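For the evaluation frameworks summarized above, a PARADISE-style assessment rewards a task-success term and penalizes weighted dialogue-cost terms after normalizing each measure. The sketch below is a hedged, generic rendering of that idea; the weights, cost names, and example numbers are illustrative and are not taken from PARADISE or PROMISE themselves.

```python
import numpy as np

def zscore(values):
    v = np.asarray(values, dtype=float)
    return (v - v.mean()) / (v.std() + 1e-12)

def paradise_performance(task_success, costs, alpha=1.0, cost_weights=None):
    """task_success: per-dialogue success measure (e.g. kappa values).
    costs: dict mapping a cost name to a per-dialogue array (turns, errors, ...)."""
    if cost_weights is None:
        cost_weights = {name: 1.0 for name in costs}   # equal weights by default
    score = alpha * zscore(task_success)
    for name, values in costs.items():
        score = score - cost_weights[name] * zscore(values)
    return score

# Example with three hypothetical dialogues:
perf = paradise_performance([0.9, 0.6, 0.8],
                            {"turns": [12, 20, 15], "asr_errors": [1, 4, 2]})
```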

음질 및 속도 향상을 위한 선형 스펙트로그램 활용 Text-to-speech (Text-to-speech with linear spectrogram prediction for quality and speed improvement)

  • 윤혜빈
    • 말소리와 음성과학 / Vol. 13, No. 3 / pp. 71-78 / 2021
  • Most neural-network-based speech synthesis models use a vocoder model to generate natural, high-quality speech: the vocoder is combined with a mel-spectrogram prediction model and converts the mel spectrogram into a waveform. However, vocoder models require large amounts of memory and training time, and synthesis is slow in real service environments where no GPU is available. Existing linear-spectrogram prediction models avoid this problem because they do not use a vocoder, but they fail to produce high-quality speech. This paper presents a Tacotron 2 and Transformer based linear-spectrogram prediction model that produces good-quality speech without a neural vocoder. Experiments measuring quality and speed show that the proposed model is slightly better than vocoder-based models in both respects, and it is therefore expected to serve as a stepping stone for research on speech synthesis models that generate high-quality speech quickly. (A hedged waveform-reconstruction sketch follows this entry.)
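The entry above argues for predicting linear spectrograms so that no neural vocoder is needed at inference time. A common way to turn a predicted linear-magnitude spectrogram into a waveform is Griffin-Lim phase reconstruction; the abstract does not name the exact reconstruction method, so this is only a hedged sketch using librosa, with illustrative STFT settings and a synthetic chirp standing in for model output.

```python
import numpy as np
import librosa

def spectrogram_to_audio(linear_mag, n_fft=1024, hop_length=256, n_iter=60):
    """Reconstruct a waveform from a linear-magnitude spectrogram via Griffin-Lim."""
    return librosa.griffinlim(linear_mag, n_iter=n_iter,
                              hop_length=hop_length, win_length=n_fft)

# Round-trip demo: a synthetic chirp stands in for a predicted spectrogram.
sr = 22050
y = librosa.chirp(fmin=200, fmax=2000, sr=sr, duration=1.0)
mag = np.abs(librosa.stft(y, n_fft=1024, hop_length=256))
y_hat = spectrogram_to_audio(mag)
```

More Griffin-Lim iterations improve phase consistency at the cost of reconstruction time, which is the usual quality/speed trade-off when no neural vocoder is used.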

Sums-of-Products Models for Korean Segment Duration Prediction

  • Chung, Hyun-Song
    • 음성과학 / Vol. 10, No. 4 / pp. 7-21 / 2003
  • Sums-of-Products models were built for segment duration prediction of spoken Korean. An experiment on the modelling was carried out so that the results could be applied to Korean text-to-speech synthesis systems. 670 read sentences were analyzed and used to train and test the duration models. Traditional sequential rule systems were extended to simple additive, multiplicative, and additive-multiplicative models based on Sums-of-Products modelling. The parameters used in the modelling include the properties of the target segment and its neighbors and the target segment's position in the prosodic structure. Two optimisation strategies were used: the downhill simplex method and the simulated annealing method. The performance of the models was measured by the correlation coefficient and the root mean squared prediction error (RMSE) between actual and predicted durations in the test data. The best performance was obtained with additive-multiplicative models: the correlation for vowel duration prediction was 0.69 with an RMSE of 31.80 ms, while the correlation for consonant duration prediction was 0.54 with an RMSE of 29.02 ms. The results are not yet good enough for real-time text-to-speech systems; further investigation of feature interactions is required to improve the performance of the Sums-of-Products models. (An illustrative sums-of-products sketch follows this entry.)

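In a sums-of-products duration model, the predicted segment duration is a sum of terms, each term being a product of per-factor parameters looked up from the segment's features, and the paper reports correlation and RMSE against held-out durations. The sketch below illustrates that model form and those two metrics only; the factor names, parameter tables, and numbers are invented for illustration and are not the paper's.

```python
import numpy as np

def sop_duration(features, terms):
    """Sums-of-products prediction: sum over terms of the product of each
    term's per-factor parameters, indexed by the segment's feature values."""
    total = 0.0
    for term in terms:
        prod = 1.0
        for factor, table in term.items():
            prod *= table[features[factor]]
        total += prod
    return total

def correlation_and_rmse(predicted_ms, actual_ms):
    p, a = np.asarray(predicted_ms, float), np.asarray(actual_ms, float)
    rmse = np.sqrt(np.mean((p - a) ** 2))
    return np.corrcoef(p, a)[0, 1], rmse

# Toy model: an intrinsic-duration term plus a phrase-final lengthening term
# that affects vowels more than consonants.
terms = [
    {"segment": {"a": 80.0, "i": 70.0, "k": 60.0}},
    {"segment": {"a": 1.0, "i": 1.0, "k": 0.5},
     "phrase_final": {True: 40.0, False: 0.0}},
]
pred = [sop_duration({"segment": s, "phrase_final": f}, terms)
        for s, f in [("a", True), ("i", False), ("k", True)]]
corr, rmse = correlation_and_rmse(pred, [125.0, 68.0, 85.0])
```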

음성합성을 이용한 병적 음성의 치료 결과에 대한 예측 (Prediction of Post-Treatment Outcome of Pathologic Voice Using Voice Synthesis)

  • 이주환;최홍식;김영호;김한수;최현승;김광문
    • 대한후두음성언어의학회지 / Vol. 14, No. 1 / pp. 30-39 / 2003
  • Background and Objectives: Patients with pathologic voice are often concerned about how their voice will recover after surgery. In this investigation, we gave controlled values to three parameters of the voice synthesis program of Dr. Speech Science, namely jitter, shimmer, and NNE (normalized noise energy), which distinguish one person's voice from another, and devised a method to synthesize the voice predicted after surgery. Subjects and Method: Vocal jitter, vocal shimmer, and glottal noise were measured from the voices of 10 vocal cord paralysis and 10 vocal polyp patients 1 week before and 1 month after surgery. With the Dr. Speech Science voice synthesis program we synthesized an /ae/ vowel closely matching each patient's pre-operative and post-operative voice by controlling the values of jitter, shimmer, and glottal noise; we then analyzed the synthesized voices and compared them with the pre- and post-operative voices. Results: 1) After inputting the pre-operative and corrected values of jitter, shimmer, and glottal noise into the voice synthesis program, voices identical to the vocal polyp patients' pre- and post-operative voices within statistical significance were synthesized. 2) After eliminating synergistic effects among the three parameters, we were able to synthesize voices identical to the vocal cord paralysis patients' pre-operative voices. 3) After inputting only slightly increased jitter and shimmer into the synthesis program, we were able to synthesize voices identical to the vocal cord paralysis patients' post-operative voices. Conclusion: Voices synthesized with the Dr. Speech Science program were identical to the patients' actual pre- and post-operative voices, so clinicians will be able to give patients more information, and increased patient cooperation can therefore be expected. (A toy jitter/shimmer/noise synthesis sketch follows this entry.)

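The study above controls jitter, shimmer, and NNE in the Dr. Speech Science synthesizer to predict post-operative voices. The sketch below is only a toy source model that perturbs cycle length (jitter), cycle amplitude (shimmer), and adds broadband noise to a sustained tone; it is not the Dr. Speech Science algorithm, and all parameter values shown are illustrative.

```python
import numpy as np

def synth_vowel(f0=120.0, seconds=1.0, sr=16000,
                jitter=0.01, shimmer=0.05, noise_level=0.01, seed=0):
    """Sustained vowel-like tone with cycle-level perturbations."""
    rng = np.random.default_rng(seed)
    cycles = []
    while sum(len(c) for c in cycles) < int(seconds * sr):
        period = (1.0 / f0) * (1.0 + rng.normal(0.0, jitter))   # perturbed cycle length
        n = max(8, int(round(period * sr)))
        amp = 1.0 + rng.normal(0.0, shimmer)                    # perturbed cycle amplitude
        cycles.append(amp * np.sin(2 * np.pi * np.arange(n) / n))
    voice = np.concatenate(cycles)[: int(seconds * sr)]
    voice += noise_level * rng.standard_normal(len(voice))      # additive noise (NNE stand-in)
    return voice / np.max(np.abs(voice))

# Illustrative settings: a rougher "pre-operative" voice vs. a cleaner one.
pathological = synth_vowel(jitter=0.03, shimmer=0.15, noise_level=0.05)
post_op      = synth_vowel(jitter=0.005, shimmer=0.03, noise_level=0.01)
```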