• Title/Abstract/Keyword: Speech Synthesis

Search results: 381 items (processing time: 0.028 seconds)

한국어 이중모음의 음향학적 연구 (An Acoustical Study of Korean Diphthongs)

  • 양병곤
    • 대한음성학회지:말소리 / No. 25-26 / pp. 3-26 / 1993
  • The goals of the present study were (1) to collect and analyze sets of fundamental frequency (F0) and formant frequency (F1, F2, F3) data of Korean diphthongs from ten linguistically homogeneous male speakers of Korean, and (2) to make a comparative study of Korean monophthongs and diphthongs. Various definitions, kinds, and previous studies of diphthongs were examined in the introduction. Procedures for screening subjects to form a linguistically homogeneous group, time-point selection, and formant determination were explained in the following section. The principal findings were as follows: 1. Much variation was observed in the ongliding part of diphthongs. 2. F2 values of the [j] group descended while those of the [w] group ascended. 3. The average duration of diphthongs was about 110 msec, and there was not much variation between speakers and diphthongs. 4. In a comparative study of monophthongs and diphthongs, F1 and F2 values of the same offgliding part at the third time point almost converged. 5. The gliding of diphthongs was very short, beginning from the h-noise. Perceptual studies using speech synthesis are desirable to find major parameters for diphthongs. The results of the present study will be useful in the areas of automated speech recognition and computer synthesis of speech.
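
If one wanted to reproduce this kind of measurement today, F0 and formant values at selected time points can be sampled with Praat through the parselmouth package. The sketch below is only illustrative and is not the paper's procedure; the file name and the three relative time points are assumptions.

```python
# A minimal sketch (not the paper's exact procedure) of sampling F0 and F1-F3
# at three relative time points of a vowel token, using the parselmouth
# wrapper around Praat. The file name and time points are illustrative.
import parselmouth

snd = parselmouth.Sound("diphthong_token.wav")    # hypothetical file
formants = snd.to_formant_burg()                  # Burg-method formant tracks
pitch = snd.to_pitch()

dur = snd.get_total_duration()
for rel in (0.25, 0.50, 0.75):                    # onglide, mid, offglide points
    t = rel * dur
    f0 = pitch.get_value_at_time(t)               # may be NaN in unvoiced parts
    f1, f2, f3 = (formants.get_value_at_time(i, t) for i in (1, 2, 3))
    print(f"t={t:.3f}s  F0={f0}  F1={f1}  F2={f2}  F3={f3}")
```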


규칙합성음의 객관적 품질평가에 관한 연구 (A Study on Objective Quality Assessment for Synthesized speech by Rule)

  • 홍진우;김순협
    • 전자공학회논문지B / Vol. 30B, No. 10 / pp. 42-49 / 1993
  • In this paper, we evaluate the quality of speech synthesized by rule using the LPC CD as an objective measure, and then compare the test result with the subjective one. The speech used for the test consists of 108 words, selected by a word-construction method based on Korean attributes and frequency distribution and synthesized by demisyllable rules. By evaluating the quality of speech synthesized by rule objectively, we have tried to resolve the problems of subjective measures, such as the long evaluation time, the difficulty of expanding the test scale, and the variability of analysis results. We have also demonstrated the validity of the objective test using the LPC CD by comparing intelligibility, the index for subjective quality evaluation of speech synthesized by rule, with MOS. From these results, we can provide a guide for quality assessment that would be useful in the R&D of synthesis methods and in commercial products using synthesized speech.
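
Assuming that "LPC CD" refers to an LPC-based cepstral distance, the sketch below shows one common textbook form of such a frame-level measure; the recursion, cepstrum length, and dB scaling are assumptions and may differ in detail from the measure actually used in the paper.

```python
# A rough sketch of a frame-level LPC cepstral distance (CD) measure of the
# kind used for objective quality assessment. Frames are assumed to be
# time-aligned 1-D float numpy arrays.
import numpy as np
import librosa

def lpc_cepstrum(frame, order=12, n_cep=16):
    a = librosa.lpc(frame, order=order)        # [1, a1, ..., ap] of A(z)
    c = np.zeros(n_cep)
    for n in range(1, n_cep + 1):              # standard LPC-to-cepstrum recursion
        acc = -a[n] if n <= order else 0.0
        for k in range(1, n):
            if n - k <= order:
                acc -= (k / n) * c[k - 1] * a[n - k]
        c[n - 1] = acc
    return c

def cepstral_distance_db(ref_frame, test_frame):
    c1, c2 = lpc_cepstrum(ref_frame), lpc_cepstrum(test_frame)
    # cepstral distance expressed on a dB-like scale
    return (10.0 / np.log(10.0)) * np.sqrt(2.0 * np.sum((c1 - c2) ** 2))
```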


청각 모델에 기초한 음성 특징 추출에 관한 연구 (A study on the speech feature extraction based on the hearing model)

  • 김바울;윤석현;홍광석;박병철
    • 전자공학회논문지B / Vol. 33B, No. 4 / pp. 131-140 / 1996
  • In this paper, we propose a method that extracts speech features using a hearing model through signal processing techniques. The proposed method includes the following procedures: normalization of the short-time speech block by its maximum value, multi-resolution analysis using the discrete wavelet transformation and re-synthesis using the discrete inverse wavelet transformation, differentiation after analysis and synthesis, full-wave rectification, and integration. In order to verify the performance of the proposed speech feature in speech recognition tasks, Korean digit recognition experiments were carried out using both DTW and VQ-HMM. The results showed that, in the case of DTW, the recognition rates were 99.79% and 90.33% for the speaker-dependent and speaker-independent tasks, respectively, and, in the case of VQ-HMM, the rates were 96.5% and 81.5%, respectively. This indicates that the proposed speech feature has the potential to be used as a simple and efficient feature for recognition tasks.
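
The pipeline described above (normalization, DWT analysis, per-band re-synthesis with the inverse DWT, differentiation, full-wave rectification, and integration) can be sketched with PyWavelets roughly as follows; the wavelet family, decomposition depth, and integration step are assumptions, not the paper's settings.

```python
# A simplified sketch of the described pipeline, one feature value per band.
import numpy as np
import pywt

def hearing_model_features(block, wavelet="db4", levels=4):
    x = block / (np.max(np.abs(block)) + 1e-12)        # normalize by peak value
    coeffs = pywt.wavedec(x, wavelet, level=levels)    # multi-resolution analysis
    feats = []
    for i in range(1, len(coeffs)):                    # one band per detail level
        kept = [np.zeros_like(c) for c in coeffs]
        kept[i] = coeffs[i]
        band = pywt.waverec(kept, wavelet)             # re-synthesize this band (IDWT)
        band = np.diff(band, prepend=band[0])          # differentiation
        band = np.abs(band)                            # full-wave rectification
        feats.append(np.sum(band))                     # integration over the block
    return np.array(feats)
```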


음성인식과 얼굴인식을 사용한 사용자 환경의 상호작용 (User-customized Interaction using both Speech and Face Recognition)

  • 김성일
    • 한국지능시스템학회:학술대회논문집 / 한국퍼지및지능시스템학회 2007년도 춘계학술대회 학술발표 논문집, Vol. 17, No. 1 / pp. 397-400 / 2007
  • In this paper, we discuss user-customized interaction for intelligent home environments. The interactive system is based upon integrated techniques using both speech and face recognition. As essential modules, speech recognition and synthesis were used for virtual interaction between the user and the proposed system. In the experiments, a real-time speech recognizer based on the HM-Net (Hidden Markov Network) was incorporated into the integrated system. In addition, face identification was adopted to customize home environments for a specific user. In the evaluation, the results showed that the proposed system was easy to use for intelligent home environments, even though the performance of the speech recognizer did not show satisfactory results owing to the noisy environment.


반음절단위를 이용한 한국어 음성합성에 관한 연구 (A Study on the Korean Text-to-Speech Using Demisyllable Units)

  • 윤기선;박성한
    • 대한전자공학회논문지 / Vol. 27, No. 10 / pp. 138-145 / 1990
  • This paper presents a Korean synthesis-by-rule method that uses the demisyllable as the synthesis unit, which keeps the database small while improving the naturalness of the synthesized speech. Twelfth-order linear prediction is used to analyze the demisyllable speech signals, and inter-syllable concatenation rules and vowel-part connection rules are developed for the naturalness and intelligibility of the synthesized speech. In addition, phonological variation rules and prosody rules based on a neural network model are applied.
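
As a minimal illustration of the 12th-order linear prediction analysis mentioned above, the sketch below inverse-filters a speech frame into an excitation (residual) signal and re-synthesizes it with the all-pole filter; the demisyllable concatenation rules, phonological rules, and prosody rules themselves are not shown, and the frame is assumed to be a 1-D float numpy array.

```python
# Minimal LPC analysis/re-synthesis for a single frame (order 12).
import numpy as np
import librosa
from scipy.signal import lfilter

def lpc_analyze_resynthesize(frame, order=12):
    a = librosa.lpc(frame, order=order)        # A(z) = 1 + a1 z^-1 + ... + ap z^-p
    residual = lfilter(a, [1.0], frame)        # inverse filtering -> excitation
    resynth = lfilter([1.0], a, residual)      # all-pole synthesis filter
    return residual, resynth
```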


The role of prosody in dialect authentication: Simulating Masan dialect with Seoul speech segments

  • Yoon, Kyu-Chul
    • 대한음성학회:학술대회논문집 / 대한음성학회 2007년도 한국음성과학회 공동학술대회 발표논문집 / pp. 234-239 / 2007
  • The purpose of this paper is to examine the viability of simulating one dialect with the speech segments of another dialect through prosody cloning. The hypothesis is that, among Korean regional dialects, it is not the segmental differences but the prosodic differences that play the major role in authentic dialect perception. This work intends to support the hypothesis by simulating Masan dialect with speech segments from Seoul dialect. The dialect simulation was performed by transplanting the prosodic features of Masan utterances onto the same utterances produced by a Seoul speaker. Thus, the simulated Masan utterances were composed of Seoul speech segments, but their prosody came from the original Masan utterances. The prosodic features involved were the fundamental frequency contour, the segmental durations, and the intensity contour. The simulated Masan utterances were evaluated by four native Masan speakers, and the role of prosody in dialect authentication and speech synthesis is discussed.
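
Prosody transplantation of this kind is commonly done with Praat; the sketch below, using the parselmouth wrapper, transplants only the F0 contour from a Masan utterance onto a Seoul utterance and is not the author's script. The file names are hypothetical, and matching the segmental durations and the intensity contour would require additional steps (e.g. a duration tier and intensity scaling).

```python
# Transplant the F0 contour of a (time-aligned) Masan utterance onto a Seoul
# utterance via Praat's Manipulation object, accessed through parselmouth.
import parselmouth
from parselmouth.praat import call

seoul = parselmouth.Sound("seoul_utterance.wav")   # hypothetical files
masan = parselmouth.Sound("masan_utterance.wav")

# Manipulation objects: 10 ms time step, 75-600 Hz pitch range
seoul_manip = call(seoul, "To Manipulation", 0.01, 75, 600)
masan_manip = call(masan, "To Manipulation", 0.01, 75, 600)

# Take the Masan pitch tier and install it in the Seoul manipulation
masan_pitch_tier = call(masan_manip, "Extract pitch tier")
call([masan_pitch_tier, seoul_manip], "Replace pitch tier")

# Overlap-add resynthesis: Seoul segments with Masan F0
hybrid = call(seoul_manip, "Get resynthesis (overlap-add)")
hybrid.save("seoul_segments_masan_f0.wav", "WAV")
```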


Prediction of Prosodic Boundaries Using Dependency Relation

  • Kim, Yeon-Jun;Oh, Yung-Hwan
    • The Journal of the Acoustical Society of Korea / Vol. 18, No. 4E / pp. 26-30 / 1999
  • This paper introduces a prosodic phrasing method for Korean to improve the naturalness of speech synthesis, especially in text-to-speech conversion. In prosodic phrasing, it is necessary to understand the structure of a sentence through language processing procedures such as part-of-speech (POS) tagging and parsing, since syntactic structure correlates better with the prosodic structure of speech than other factors do. In this paper, the prosodic phrasing procedure is treated from two perspectives: dependency parsing and prosodic phrasing using dependency relations. This approach is appropriate for Ural-Altaic languages, since a prosodic boundary in speech usually coincides with the governor of a dependency relation. Experimental results show that the proposed method achieved a 12% improvement in prosodic boundary prediction accuracy on a speech corpus consisting of 300 sentences uttered by 3 speakers.
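
As a toy illustration of the idea that a prosodic boundary tends to coincide with the governor of a dependency relation in a head-final language, the fragment below marks a candidate boundary after every word that governs at least one other word; the sentence structure and indices are invented and this is not the paper's prediction model.

```python
# Toy boundary-candidate rule based on dependency governors.
def candidate_boundaries(heads):
    """heads[i] = index of the governor of word i (or -1 for the root)."""
    governors = {h for h in heads if h >= 0}
    return sorted(governors)          # candidate boundaries fall after these words

# Made-up 5-word sentence: word 2 governs words 0-1, word 4 (the root)
# governs words 2-3.
heads = [2, 2, 4, 4, -1]
print(candidate_boundaries(heads))    # -> [2, 4]
```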


피치 검출을 위한 스펙트럼 평탄화 기법 (Flattening Techniques for Pitch Detection)

  • 김종국;조왕래;배명진
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2002년도 하계종합학술대회 논문집(4) / pp. 381-384 / 2002
  • In speech signal processing, it is very important to detect the pitch accurately for speech recognition, synthesis, and analysis. However, pitch detection from the speech signal is difficult because of the effects of formants and transition amplitudes. Therefore, in this paper we propose a pitch detection method using spectrum flattening techniques. Spectrum flattening eliminates the effects of formants and transition amplitudes. In the time domain, positive center clipping is performed in order to emphasize the pitch period using a glottal component from which the vocal tract characteristics have been removed. In the frequency domain, a rough formant envelope is computed by peak-fitting the spectrum of the original speech signal. As a result, we obtain a flattened harmonic waveform from the algebraic difference between the spectrum of the original speech signal and the smoothed formant envelope, that is, a residual signal from which the vocal tract component has been removed. The performance was compared with LPC, cepstrum, and ACF methods. With this algorithm, the accuracy of pitch detection is improved and the gross error rate is reduced in voiced regions and in transition regions where the phoneme changes.
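
A rough sketch of the flattening idea is given below: positive center clipping in the time domain, a smoothed spectral envelope (obtained here by cepstral liftering as a simple stand-in for the paper's peak-fitting step), subtraction of the envelope from the log spectrum, and a crude pitch estimate from the flattened spectrum. The frame length, sample rate, clipping ratio, and lifter order are illustrative.

```python
# Spectrum flattening sketch; `frame` is assumed to be a windowed-length
# (e.g. 512+ samples) 1-D float array at sample rate `sr`.
import numpy as np

def flatten_and_estimate_pitch(frame, sr, clip_ratio=0.3, lifter=30):
    # positive center clipping to emphasize the pitch period
    threshold = clip_ratio * np.max(frame)
    clipped = np.where(frame > threshold, frame - threshold, 0.0)

    # log magnitude spectrum and a smoothed (low-quefrency) envelope
    spec = np.fft.rfft(clipped * np.hanning(len(clipped)))
    log_mag = np.log(np.abs(spec) + 1e-12)
    ceps = np.fft.irfft(log_mag)
    ceps[lifter:len(ceps) - lifter] = 0.0          # keep only the envelope part
    envelope = np.fft.rfft(ceps).real

    # flattened harmonics = log spectrum minus smoothed envelope
    flattened = log_mag - envelope

    # crude pitch estimate: strongest peak of the inverse transform of the
    # flattened magnitude spectrum, searched over a 60-400 Hz lag range
    corr = np.fft.irfft(np.exp(flattened))
    min_lag, max_lag = int(sr / 400), int(sr / 60)
    lag = min_lag + np.argmax(corr[min_lag:max_lag])
    return sr / lag
```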


HMM기반 자동음소분할기의 음소분할 오류 유형 분석 (The Error Pattern Analysis of the HMM-Based Automatic Phoneme Segmentation)

  • 김민제;이정철;김종진
    • 한국음향학회지 / Vol. 25, No. 5 / pp. 213-221 / 2006
  • Accurate phoneme segmentation is very important in concatenative speech synthesis, which selects and uses synthesis units from segmented corpora in order to improve the quality of the synthesized speech. Phoneme segmentation is generally performed by hand, but this causes many problems, such as time delays due to the heavy workload and difficulty in maintaining consistency. Accordingly, HMM-based automatic phoneme segmentation, adopted from speech recognition, is widely used in both speech recognition and speech synthesis; however, compared with the manual results of speech experts, it contains errors, which are a major cause of degraded synthesized speech quality. In this paper, we compare the results of an HMM-based automatic phoneme segmenter with manual phoneme segmentation and analyze the errors by type, thereby identifying the problems that must be addressed to improve speech synthesis performance. In the experiments, the ETRI standard Korean common speech DB was used, and cases in which the error exceeded 20 ms were regarded as segmentation errors. The results show that, for the female speaker, the plosive+vowel, affricate+vowel, and vowel+liquid phoneme pairs showed high accuracies of about 99%, 99.5%, and 99%, respectively, while the stop+nasal, stop+liquid, and nasal+liquid pairs showed low accuracies of 44.89%, 50%, and 55%; the results for the male speaker showed a similar tendency.
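
The 20 ms tolerance criterion used in the experiments can be expressed directly; the sketch below is a minimal illustration, assuming the automatic and manual boundary lists are already aligned one-to-one, and the numeric values are made up.

```python
# Fraction of predicted phoneme boundaries that lie within 20 ms of the
# manually labelled boundaries (boundary times in seconds).
import numpy as np

def boundary_accuracy(auto_boundaries, manual_boundaries, tol=0.020):
    auto = np.asarray(auto_boundaries)
    manual = np.asarray(manual_boundaries)
    correct = np.abs(auto - manual) <= tol
    return correct.mean()                      # fraction within the tolerance

# Illustrative values only.
print(boundary_accuracy([0.112, 0.245, 0.480], [0.100, 0.250, 0.455]))
```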

새로운 스펙트럼 완만화에 의한 합성 음질 개선 (Improvement of Synthetic Speech Quality using a New Spectral Smoothing Technique)

  • 장효종;최형일
    • 한국정보과학회논문지:소프트웨어및응용 / Vol. 30, No. 11 / pp. 1037-1043 / 2003
  • This paper describes a method of synthesizing speech using diphones as the unit phonemes. Speech synthesis is basically performed by concatenating unit phonemes, and the biggest problem that arises is the discontinuity at the junction between two units. To solve this problem, this paper proposes a spectral smoothing method that reflects not only the formant trajectories but also the distribution characteristics of the spectrum and human auditory characteristics. That is, the proposed method determines the amount and range of smoothing in the concatenation region by considering the characteristics of the human auditory system, and then performs spectral smoothing by applying time-varying weights to the spectral distributions at the boundary of the two diphones. This method removes the discontinuity while minimizing the speech distortion that smoothing can introduce. To evaluate the performance of the proposed method, experiments were carried out on about 500 diphones extracted from samples of the ETRI speech DB and from some 20 sentences recorded by individual speakers.
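
A much simplified sketch of time-weighted smoothing across a diphone junction is given below; it mixes the two diphones' boundary spectra with a weight that changes frame by frame, but it does not model the perceptual weighting described above. The frame matrices and the smoothing span are assumptions (at least `span` frames on each side of the boundary).

```python
# Time-weighted spectral smoothing across a diphone junction.
import numpy as np

def smooth_junction(left_frames, right_frames, span=4):
    """left_frames / right_frames: (n_frames, n_bins) magnitude-spectrum arrays
    for the two diphones; frames closest to the junction are mixed most."""
    left_edge, right_edge = left_frames[-1].copy(), right_frames[0].copy()
    for i in range(span):
        w = (i + 1) / (span + 1)               # weight grows toward the boundary
        left_frames[-span + i] = (1 - w) * left_frames[-span + i] + w * right_edge
        right_frames[span - 1 - i] = (1 - w) * right_frames[span - 1 - i] + w * left_edge
    return left_frames, right_frames
```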