• Title/Abstract/Keywords: Speech Synthesis


A User friendly Remote Speech Input Unit in Spontaneous Speech Translation System

  • 이광석;김흥준;송진국;추연규
    • 한국정보통신학회:학술대회논문집
    • /
    • 한국해양정보통신학회 2008년도 춘계종합학술대회 A
    • /
    • pp.784-788
    • /
    • 2008
  • In this research, we propose a remote speech input unit, a new method of user-friendly speech input for speech recognition systems. We focus on user-friendliness in the sense of hands-free operation and microphone independence in speech recognition applications. Our module adopts two algorithms: automatic speech detection, and speech enhancement based on microphone-array beamforming. In the evaluation of speech detection, the within-200 ms accuracy with respect to manually detected positions is about 97% under noise environments with an SNR of 25 dB. The microphone-array speech enhancement using the delay-and-sum beamforming algorithm shows about 6 dB of maximum SNR gain over a single microphone and more than a 12% error reduction rate in speech recognition.
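The delay-and-sum beamforming described above can be sketched as follows — a minimal illustration, assuming the per-channel steering delays are already known in whole samples (real arrays estimate them from geometry or cross-correlation and need fractional-delay interpolation):

```python
import numpy as np

def delay_and_sum(mics, delays):
    """Delay-and-sum beamformer: advance each channel by its known
    steering delay (in whole samples) and average across the array,
    so the target signal adds coherently while noise averages down."""
    out = np.zeros(mics.shape[1])
    for x, d in zip(mics, delays):
        out += np.roll(x, -d)  # integer-sample steering only
    return out / len(mics)
```

Averaging M channels of independent noise reduces the noise power by roughly a factor of M, which is where a multi-microphone SNR gain over a single microphone comes from.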

MPEG-4 TTS (Text-to-Speech)

  • 한민수
    • 대한전자공학회:학술대회논문집
    • /
    • 대한전자공학회 1999년도 하계종합학술대회 논문집
    • /
    • pp.699-707
    • /
    • 1999
  • It cannot be denied that speech is the most natural interfacing tool between humans and machines. In order to realize acceptable speech interfaces, highly advanced speech recognizers and synthesizers are indispensable. Text-to-Speech (TTS) technology has been attracting a lot of interest among speech engineers because of its benefits, namely its possible application areas: talking computers, emergency alarm systems with speech output, speech output devices for the speech-impaired, and so on. Hence, many researchers have made significant progress in speech synthesis techniques for their own languages, and as a result the quality of currently available speech synthesizers is believed to be acceptable to normal users. This is partly why the MPEG group decided to include TTS technology as one of the MPEG-4 functionalities. ETRI has made major contributions to the current MPEG-4 TTS among the various MPEG-4 functionalities. They are: 1) use of original prosody for synthesized speech output, 2) trick-mode functions for general users without breaking synthesized speech prosody, 3) interoperability with Facial Animation (FA) tools, and 4) dubbing a moving/animated picture with lip-shape pattern information.

HMM 기반 TTS와 MusicXML을 이용한 노래음 합성 (Singing Voice Synthesis Using HMM Based TTS and MusicXML)

  • 칸 나지브 울라;이정철
    • 한국컴퓨터정보학회논문지
    • /
    • 제20권5호
    • /
    • pp.53-63
    • /
    • 2015
  • Singing voice synthesis is the generation of song on a computer from given lyrics and a musical score. HMM-based synthesizers, widely used for text-to-speech conversion, have recently been applied to singing voice synthesis as well. However, conventional implementations require collecting and training on a large singing-voice database, which makes them difficult to build. In addition, existing commercial singing synthesis systems use a piano-roll score representation that is unfamiliar to ordinary users, so there is a need to improve the convenience of song learning by supporting a user interface based on easy-to-read standard musical notation. To solve these problems, this paper proposes a method that generates singing voice by reusing the HMM models of a conventional read-speech synthesizer and varying the HMM model parameters with pitch and duration control methods suited to singing. It also proposes implementing a singing synthesis system with a MusicXML-based score editor as the front end, for entering notes and lyrics, and an HMM-based text-to-speech synthesizer as the back end. Singing voice synthesized with the proposed method was evaluated, and the results confirmed its applicability.
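The pitch and duration control applied to the HMM parameters starts from the note information in the MusicXML score; a minimal sketch of that mapping (hypothetical helper names, not the paper's code) might be:

```python
def note_to_f0(midi_note):
    """MIDI note number to fundamental frequency in Hz (A4 = 69 = 440 Hz)."""
    return 440.0 * 2.0 ** ((midi_note - 69) / 12.0)

def note_to_seconds(duration, divisions, tempo_bpm):
    """MusicXML note length to seconds: <duration> is counted in
    <divisions> per quarter note, scaled by the tempo."""
    return (duration / divisions) * (60.0 / tempo_bpm)
```

These target F0 and duration values would then replace the pitch and state-duration parameters predicted by the read-speech HMMs.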

한국어 공통 음성 DB구축 및 오류 검증 (Common Speech Database Collection and Validation for Communications)

  • 이수종;김상훈;이영직
    • 대한음성학회지:말소리
    • /
    • 제46호
    • /
    • pp.145-157
    • /
    • 2003
  • In this paper, we briefly introduce the Korean common speech database project, which has been constructing a large-scale speech database since 2002. The project aims at supporting the R&D environment of speech technology for industry, encouraging domestic speech industries, and activating the domestic speech-technology market. In the first year, the resulting common speech database consisted of 25 kinds of databases covering various recording conditions such as telephone, PC, and VoIP. The speech database will be widely used for speech recognition, speech synthesis, and speaker identification. On the other hand, although the database was originally corrected by hand, it still retains unknown errors and human errors. So, in order to minimize the errors in the database, we tried to find them based on recognition errors and to classify several kinds of errors. To be more effective than the typical recognition technique, we will develop an automatic error detection method. In the future, we will construct new databases reflecting the needs of companies and universities.

고조파 복원에 의한 CELP 음성 부호화기의 저대역 확장 (Low-band Extension of CELP Speech Coder by Recovery of Harmonics)

  • 박진수;최무열;김형순
    • 대한음성학회지:말소리
    • /
    • 제49호
    • /
    • pp.63-75
    • /
    • 2004
  • Most telephone speech transmitted over current public networks is band-limited to 0.3-3.4 kHz. Compared with wideband speech (0-8 kHz), narrowband speech lacks the low-band (0-0.3 kHz) and high-band (3.4-8 kHz) components of the sound. As a result, the speech is characterized by reduced intelligibility, a muffled quality, and degraded speaker identification. Bandwidth extension is a technique for providing wideband speech quality, i.e., reconstruction of the low-band and high-band components without any additional transmitted information. Our new approach exploits a harmonic synthesis method to reconstruct the low band of CELP-coded speech. A spectral distortion measure and a listening test are introduced to assess the proposed method, and the improvement in synthesized speech quality was verified.
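Harmonic synthesis of a missing low band regenerates sinusoids at multiples of the fundamental; a toy sketch (zero phases and externally supplied amplitudes are simplifying assumptions — the paper derives these from the CELP-coded speech):

```python
import numpy as np

def synthesize_low_band(f0, amps, fs, n_samples):
    """Rebuild low-band content as a sum of harmonics of f0.
    amps[k-1] is the amplitude of the k-th harmonic; phases are
    taken as zero for simplicity."""
    t = np.arange(n_samples) / fs
    out = np.zeros(n_samples)
    for k, a in enumerate(amps, start=1):
        f = k * f0
        if f < fs / 2:  # keep below Nyquist
            out += a * np.sin(2 * np.pi * f * t)
    return out
```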

가변 대역폭 필터를 이용한 음성신호의 AM-FM 성분 분리에 관한 연구 (Decomposition of Speech Signal into AM-FM Components Using Variable Bandwidth Filter)

  • 송민;이희영
    • 음성과학
    • /
    • 제8권4호
    • /
    • pp.45-58
    • /
    • 2001
  • The modulated components of a speech signal are frequently used for speech coding, speech recognition, and speech synthesis. A time-frequency representation (TFR) reveals information about the instantaneous frequency, instantaneous bandwidth, and boundary of each component of the speech signal under consideration. In many cases, extracting the AM-FM components corresponding to the instantaneous frequencies is difficult, since the Fourier spectra of components with time-varying instantaneous frequencies overlap each other in the Fourier frequency domain. In this paper, an efficient method for decomposing a speech signal into AM-FM components is proposed. A variable bandwidth filter is developed for the decomposition of speech signals with time-varying instantaneous frequencies. The variable bandwidth filter can extract the AM-FM components of a speech signal whose TFRs are not overlapped in the time-frequency domain. The amplitude and instantaneous frequency of the decomposed components are then estimated using the Hilbert transform.
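The amplitude-envelope and instantaneous-frequency estimation via the Hilbert transform, mentioned in the last sentence, can be sketched with an FFT-based analytic signal (this applies to one already-isolated component; the variable bandwidth filter doing the isolation is the paper's contribution and is not reproduced here):

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT (the discrete Hilbert-transform trick:
    zero the negative frequencies, double the positive ones)."""
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(np.fft.fft(x) * h)

def am_fm(x, fs):
    """Instantaneous amplitude (AM) and frequency (FM) of one component."""
    z = analytic_signal(x)
    amp = np.abs(z)                                 # AM envelope
    phase = np.unwrap(np.angle(z))
    inst_freq = np.diff(phase) * fs / (2 * np.pi)   # FM in Hz
    return amp, inst_freq
```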

SPEECH SYNTHESIS IN THE TIME DOMAIN BY PITCH CONTROL USING LAGRANGE INTERPOLATION(TD-PCULI)

  • Kang, Chan-Hee;Shin, Yong-Jo;Kim, Yun-Seok;Kang, Dae-Soo;Lee, Jong-Heon;Kwon, Ki-Hyung;An, Jeong-Keun;Sea, Sung-Tae;Chin, Yong-Ohk
    • 한국음향학회:학술대회논문집
    • /
    • 한국음향학회 1994년도 FIFTH WESTERN PACIFIC REGIONAL ACOUSTICS CONFERENCE SEOUL KOREA
    • /
    • pp.984-990
    • /
    • 1994
  • In this paper, a new speech synthesis method in the time domain using mono-syllables is proposed. Its aims are to overcome the degradation of synthetic speech quality caused by frequency-domain synthesis methods and to develop a time-domain algorithm for prosodic control. In particular, when a time-domain method uses the mono-syllable as its synthesis unit, the main issues are controlling the pitch period and smoothing the energy pattern. As a solution to the pitch control problem, a method using Lagrange interpolation is suggested. As a solution to the other problem, an algorithm that can control the amplitude envelope shape of a mono-syllable is proposed. Experimental results show that unlimited Korean speech, including prosody control, could be synthesized. According to an MOS evaluation, the quality and naturalness of the synthesized speech were improved to a good level.
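Lagrange-interpolation pitch control can be illustrated by resampling one pitch period to a new length — a sketch under the assumption of a local cubic interpolator; the paper's exact formulation may differ:

```python
import numpy as np

def lagrange_interp(xs, ys, x):
    """Evaluate the Lagrange polynomial through the points (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def stretch_period(period, new_len, order=3):
    """Resample one pitch period to new_len samples, interpolating each
    output point from order+1 neighbouring input samples."""
    n = len(period)
    out = np.empty(new_len)
    for m in range(new_len):
        x = m * (n - 1) / (new_len - 1)
        i0 = min(max(int(x) - order // 2, 0), n - 1 - order)
        xs = range(i0, i0 + order + 1)
        out[m] = lagrange_interp(xs, [period[k] for k in xs], x)
    return out
```

Lengthening each period lowers the pitch and shortening it raises the pitch, without leaving the time domain.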

멀티미디어 환경을 위한 정서음성의 모델링 및 합성에 관한 연구 (Modelling and Synthesis of Emotional Speech on Multimedia Environment)

  • 조철우;김대현
    • 음성과학
    • /
    • 제5권1호
    • /
    • pp.35-47
    • /
    • 1999
  • This paper describes procedures for modelling and synthesizing emotional speech in a multimedia environment. First, procedures to model the visual representation of emotional speech are proposed. To display sequences of images synchronized with speech, the MSF (Multimedia Speech File) format is proposed and display software is implemented. Then emotional speech signals are collected and analysed to obtain the prosodic characteristics of emotional speech in a limited domain. Multi-emotional sentences were spoken by actors. From the emotional speech signals, prosodic structures are compared in terms of a pseudo-syntactic structure. Based on the analysis results, neutral speech is transformed into a specific emotional state by modifying its prosodic structures.
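As a toy illustration of "modifying the prosodic structures", one common manipulation shifts the mean and widens the range of the F0 contour; the scale factors below are arbitrary placeholders, not values derived in the paper:

```python
import numpy as np

def emotionalize_f0(f0, mean_scale=1.2, range_scale=1.5):
    """Shift a neutral F0 contour toward a more 'excited' rendition by
    raising its mean and widening its excursions around the mean."""
    mean = np.mean(f0)
    return (f0 - mean) * range_scale + mean * mean_scale
```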

한국어 반음절단위 규칙합성의 개선을 위한 포만트천이의 변경규칙 (An Alteration Rule of Formant Transition for Improvement of Korean Demisyllable Based Synthesis by Rule)

  • 이기영;최창석
    • 한국음향학회지
    • /
    • 제15권4호
    • /
    • pp.98-104
    • /
    • 1996
  • In demisyllable-based synthesis-by-rule, continuous speech synthesized by concatenating demisyllables lacking the transition regions produced by coarticulation sounds unnatural. To improve this, this study proposes an alteration rule for formant transitions that compensates the transition regions between successive vowels. Because demisyllable units alone cannot fill the regions where formant transitions occur, a database was built by adding, to the demisyllable speech data, 42 steady-state vowels segmented from the steady-state portions of vowel demisyllables, and the resonance circuit of a formant synthesizer was used to modify the formants. To validate the proposed method, the alteration rule was applied to successive-vowel regions during synthesis; the spectrograms of the resulting speech were compared with those of the original speech and of speech synthesized by demisyllable concatenation without the rule, and an MOS test confirmed that more natural synthetic speech was obtained.
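The "resonance circuit of a formant synthesizer" referred to above is conventionally a second-order digital resonator; a sketch follows, using Klatt-style coefficients normalized for unity gain at DC — an assumption, since the abstract does not give the exact formula used:

```python
import numpy as np

def resonator(x, f_res, bw, fs):
    """Second-order digital resonator (formant resonance circuit):
    a conjugate pole pair at frequency f_res with bandwidth bw."""
    r = np.exp(-np.pi * bw / fs)       # pole radius from bandwidth
    theta = 2.0 * np.pi * f_res / fs   # pole angle from frequency
    B = 2.0 * r * np.cos(theta)
    C = -r * r
    A = 1.0 - B - C                    # unity gain at DC
    y = np.zeros(len(x))
    y1 = y2 = 0.0
    for n, xn in enumerate(x):
        y[n] = A * xn + B * y1 + C * y2
        y2, y1 = y1, y[n]
    return y
```

Sweeping f_res sample by sample is what lets such a circuit impose a formant transition onto a steady-state vowel segment.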

Algorithm for Concatenating Multiple Phonemic Units for Small Size Korean TTS Using RE-PSOLA Method

  • Bak, Il-Suh;Jo, Cheol-Woo
    • 음성과학
    • /
    • 제10권1호
    • /
    • pp.85-94
    • /
    • 2003
  • In this paper, an algorithm to reduce the size of a Text-to-Speech database is proposed. The algorithm is based on the characteristics of Korean phonemic units. From the initial database, a reduced phoneme unit set is induced by the articulatory similarity of concatenating phonemes. The speech data were read by one female announcer for 1,000 phonetically balanced sentences, and all the recorded speech was then segmented by phoneticians. The total size of the original speech data is about 640 MB, including the laryngograph signal. To synthesize the waveform, RE-PSOLA (Residual-Excited Pitch-Synchronous Overlap-and-Add) was used. The voice quality of the synthesized speech was compared with the original speech in terms of spectrographic information and objective tests. The quality of the synthesized speech is not much degraded when the size of the synthesis DB is reduced from 320 MB to 82 MB.
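The overlap-add stage of (RE-)PSOLA can be sketched as follows — a simplification assuming known pitch marks and Hann-windowed two-period grains; the residual-excited variant additionally works on an LPC residual and re-applies the vocal-tract filter, which is omitted here:

```python
import numpy as np

def psola(x, marks, factor):
    """Simplified PSOLA: cut Hann-windowed two-period grains around each
    pitch mark and overlap-add them at a new hop (factor < 1 raises the
    pitch, > 1 lowers it, 1.0 approximately reconstructs the input)."""
    period = int(np.median(np.diff(marks)))
    new_hop = max(1, int(round(period * factor)))
    out = np.zeros(int(len(x) * factor) + 2 * period)
    pos = period
    for m in marks:
        if m - period < 0 or m + period > len(x):
            continue  # grain would run off the signal
        grain = x[m - period:m + period] * np.hanning(2 * period)
        out[pos - period:pos + period] += grain
        pos += new_hop
    return out
```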
