• Title/Summary/Keyword: 조음 합성 (articulatory synthesis)

Search results: 8

Articulatory robotics (조음 로보틱스)

  • Nam, Hosung
    • Phonetics and Speech Sciences / v.13 no.2 / pp.1-7 / 2021
  • Speech is a spatiotemporally coordinated structure of constriction actions at discrete articulators such as the lips, tongue tip, tongue body, velum, and glottis. Like other human movements (e.g., reaching), each action as a linguistic task is completed by a synergy of the basic elements involved (e.g., bone, muscle, neural system). This paper discusses how speech tasks are dynamically related to joints, one of those basic elements, from the perspective of the robotics of speech production. Further, this introduction of robotics into the speech sciences will hopefully deepen our understanding of how speech is produced and provide a solid foundation for developing a physical talking machine.
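
In the task-dynamics framework this abstract builds on, each constriction gesture is commonly modeled as a critically damped mass-spring system driven toward its target. A minimal sketch of that idea (the parameter values and the lip-aperture example are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def gesture_trajectory(x0, target, k=100.0, dt=0.001, steps=400):
    """Critically damped second-order dynamics: a constriction task
    variable x is driven toward its target without overshoot."""
    b = 2.0 * np.sqrt(k)  # critical damping for unit mass
    x, v = float(x0), 0.0
    traj = []
    for _ in range(steps):
        a = -k * (x - target) - b * v  # spring pull plus damping
        v += a * dt
        x += v * dt
        traj.append(x)
    return np.array(traj)

# Illustrative: lip aperture closing from 10 mm toward a 0 mm bilabial target
aperture = gesture_trajectory(x0=10.0, target=0.0)
```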

Speech Synthesis Based on CVC Speech Segments Extracted from Continuous Speech (연속 음성으로부터 추출한 CVC 음성세그먼트 기반의 음성합성)

  • 김재홍;조관선;이철희
    • The Journal of the Acoustical Society of Korea / v.18 no.7 / pp.10-16 / 1999
  • In this paper, we propose a concatenation-based speech synthesizer using CVC (consonant-vowel-consonant) speech segments extracted from an undesigned continuous speech corpus. Natural synthetic speech can be generated by properly modelling coarticulation effects between phonemes and using natural prosodic variations. In general, the CVC synthesis unit shows smaller acoustic degradation of speech quality, since concatenation points are located in consonant regions and it can properly model the coarticulation of vowels affected by surrounding consonants. We analyze the characteristics and the number of required synthesis units of four types of speech synthesis methods that use CVC units, compare the speech quality of the four types, and propose a new synthesis method based on the most promising type in terms of speech quality and implementability. We then implement the method using the speech corpus and synthesize various examples. CVC segments missing from the corpus are replaced with substitute segments. Experiments demonstrate that CVC speech segments extracted from a continuous speech corpus of about 100 Mbytes can produce high-quality synthetic speech.

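As a rough illustration of the extraction step described above (the phone labels and data layout here are assumptions, not the authors' implementation), CVC units can be cut from a phone-aligned corpus at the midpoints of the flanking consonants, so that later concatenation points fall inside consonant regions:

```python
VOWELS = {"a", "e", "i", "o", "u"}  # simplified phone inventory (assumption)

def extract_cvc_units(labels, samples, rate=16000):
    """labels: list of (phone, start_sec, end_sec) from a forced alignment.
    Returns {(c1, v, c2): waveform}, each unit cut at the midpoints of its
    two consonants so that joins later fall inside consonant regions."""
    units = {}
    for i in range(len(labels) - 2):
        (c1, s1, e1), (v, s2, e2), (c2, s3, e3) = labels[i:i + 3]
        if c1 not in VOWELS and v in VOWELS and c2 not in VOWELS:
            cut_from = int(((s1 + e1) / 2) * rate)  # mid of first consonant
            cut_to = int(((s3 + e3) / 2) * rate)    # mid of last consonant
            units.setdefault((c1, v, c2), samples[cut_from:cut_to])
    return units
```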

Improved Text-to-Speech Synthesis System Using Articulatory Synthesis and Concatenative Synthesis (조음 합성과 연결 합성 방식을 결합한 개선된 문서-음성 합성 시스템)

  • 이근희;김동주;홍광석
    • Proceedings of the IEEK Conference / 2002.06d / pp.369-372 / 2002
  • In this paper, we present an improved TTS system that combines articulatory synthesis and concatenative synthesis. In concatenative synthesis, segments of speech are excised from spoken utterances and connected to form the desired speech signal. We adopt LPC as the parameter representation, VQ (vector quantization) to reduce memory requirements, and TD-PSOLA to address the naturalness problem.

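TD-PSOLA, mentioned above, modifies prosody by overlap-adding pitch-synchronous frames. A minimal pitch-scaling sketch, under the assumption that glottal pulse positions (pitch marks) are already known; this is a generic textbook version, not the authors' code:

```python
import numpy as np

def td_psola_pitch_scale(x, marks, factor):
    """Scale pitch by `factor` (>1 raises it): two-period Hann-windowed
    frames taken at analysis pitch marks are overlap-added at synthesis
    marks spaced local_period / factor apart."""
    y = np.zeros(len(x))
    periods = np.diff(marks)
    t = float(marks[0])  # next synthesis mark position
    while t < marks[-1]:
        i = min(max(np.searchsorted(marks, t) - 1, 0), len(periods) - 1)
        p = int(periods[i])                 # local pitch period near t
        a, b = marks[i] - p, marks[i] + p   # two-period analysis frame
        c = int(t) - p                      # frame position in the output
        if a >= 0 and b <= len(x) and c >= 0 and c + 2 * p <= len(y):
            y[c:c + 2 * p] += x[a:b] * np.hanning(2 * p)
        t += p / factor                     # denser marks -> higher pitch
    return y
```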

An algorithm of the Non-uniform synthesis unit selection for concatenative speech synthesis system (연결형 합성시스템을 위한 문맥종속 단위 기반의 비정형 합성단위 추출 알고리즘)

  • 김영일
    • Proceedings of the Acoustical Society of Korea Conference / 1998.06e / pp.273.2-277 / 1998
  • In this paper, for phoneme-based non-uniform concatenative synthesis, we design a model that predicts the boundary strength between neighboring phonemes, together with a phoneme-level longest-match search algorithm for synthesis-unit retrieval, so that formant discontinuities at concatenation points are minimized. To minimize the signal distortion arising at unit joins, models are built for consonants that become voiced in a "_C_" environment, for vowels that become devoiced in a "_V_" environment, and for the formant-frequency difference between voiced sounds, so that points of weak articulatory strength between phonemes are chosen as unit boundaries. Once the unit boundaries are determined, candidates are selected from the corpus using only the contextual information of the given sentence. To measure the connectivity between selected candidates, the phonetic properties and formant-transition characteristics of the phonemes before and after the join boundary are considered. The experiments were based on 200 K-ToBI-labeled sentences: one sentence was taken from the corpus as the target pattern, and the optimal sequence of synthesis units was extracted by computing unit costs between the target pattern and the candidates and concatenation costs among the candidates. This paper reports on this context-dependent unit-based synthesis-unit extraction algorithm and the experimental results.

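The unit-cost / concatenation-cost search described in this abstract is, in essence, a shortest-path problem over a lattice of candidates, typically solved by dynamic programming. A schematic sketch in which `target_cost` and `join_cost` are placeholders for the paper's contextual-match and boundary-strength/formant-transition measures:

```python
def select_units(candidates, target_cost, join_cost):
    """candidates[t] is the list of corpus candidates for target slot t.
    Returns (total cost, unit sequence) minimizing the sum of target
    costs plus join costs between consecutive units, by dynamic
    programming over the candidate lattice (Viterbi search)."""
    # best[j] = (accumulated cost, path) for paths ending in candidates[t][j]
    best = [(target_cost(0, u), [u]) for u in candidates[0]]
    for t in range(1, len(candidates)):
        best = [
            min(
                ((c + join_cost(path[-1], u) + target_cost(t, u), path + [u])
                 for c, path in best),
                key=lambda cp: cp[0],
            )
            for u in candidates[t]
        ]
    return min(best, key=lambda cp: cp[0])
```

The sketch assumes every slot has at least one candidate; missing units would be handled by the substitution strategies the corpus-based papers above describe.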

Coarticulation Model of Hangul Visual Speech for Lip Animation (입술 애니메이션을 위한 한글 발음의 동시조음 모델)

  • Gong, Gwang-Sik;Kim, Chang-Heon
    • Journal of KIISE:Computer Systems and Theory / v.26 no.9 / pp.1031-1041 / 1999
  • The existing lip animation methods for Hangul define a few key mouth shapes for the phonemes and animate the lips by interpolating between them. However, since the real motion of the lips during articulation is neither linear nor a simple non-linear function, generating intermediate motion by interpolation cannot reproduce the lip movements of phonemes effectively. These methods also ignore coarticulation, so they cannot represent the lip motion that varies between phonemes. In this paper we present a lip animation method that pronounces Hangul naturally by taking coarticulation into account. We film the speaker's lips with two video cameras during articulation and extract lip-motion control parameters. Each control parameter is defined as a dominance function, following Löfqvist's speech production gesture theory; the dominance function approximates the actual lip motion of a phoneme during articulation and is used when the lip animation is generated. The dominance functions are combined through blending functions and a demi-syllable-based Hangul composition rule, yielding coarticulated Hangul pronunciation. The resulting model therefore produces motion closer to real lip movement than the existing interpolation-based methods and reflects coarticulation.
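
Dominance-and-blending models of this family (Löfqvist's gestural theory, as operationalized by Cohen and Massaro) weight each phoneme's articulatory target by a function that peaks at the phoneme's temporal center and decays in time, then take the normalized weighted average. A generic sketch with illustrative parameter values, not the paper's fitted ones:

```python
import numpy as np

def dominance(t, center, magnitude=1.0, rate=8.0):
    """Exponentially decaying dominance of a phoneme's lip target,
    peaking at the phoneme's temporal center (Cohen-Massaro style)."""
    return magnitude * np.exp(-rate * np.abs(t - center))

def blended_lip_parameter(t, segments):
    """segments: list of (target_value, center_time). The lip control
    parameter at time t is the dominance-weighted average of targets,
    so neighboring phonemes shape each other (coarticulation)."""
    w = np.array([dominance(t, c) for _, c in segments])
    targets = np.array([v for v, _ in segments])
    return float(np.sum(w * targets) / np.sum(w))

# Illustrative: lip opening across /a/ (open), /m/ (closed), /a/ (open)
track = [blended_lip_parameter(t, [(0.8, 0.1), (0.0, 0.25), (0.8, 0.4)])
         for t in np.linspace(0.0, 0.5, 100)]
```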

An Alteration Rule of Formant Transition for Improvement of Korean Demisyllable Based Synthesis by Rule (한국어 반음절단위 규칙합성의 개선을 위한 포만트천이의 변경규칙)

  • Lee, Ki-Young;Choi, Chang-Seok
    • The Journal of the Acoustical Society of Korea / v.15 no.4 / pp.98-104 / 1996
  • This paper proposes an alteration rule that compensates the formant transitions of connected vowels, to improve the unnatural synthesized continuous speech obtained when demisyllable units are concatenated without coarticulated formant transitions in demisyllable-based synthesis by rule. To fill in each formant-transition part, a database of 42 stationary vowels, segmented from the stable part of each vowel, is appended to the Korean demisyllable database, and the resonance circuit used in formant synthesis is employed to change the formant frequencies of the speech signal. To evaluate the speech synthesized by this rule, we apply the alteration rule to the connected vowels of demisyllable-based synthesized speech and compare spectrograms and MOS scores against the original speech and against demisyllable-based synthesis without the rule. The results show that the proposed rule synthesizes more natural speech.

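The "resonance circuit" used here to shift formant frequencies corresponds to the standard two-pole digital resonator that is the building block of formant synthesis. A minimal sketch using the Klatt-style coefficient formulas; the sampling rate, gain normalization, and source example are assumptions:

```python
import numpy as np

def resonator(x, freq, bw, fs=16000):
    """Two-pole resonator centered at `freq` Hz with bandwidth `bw` Hz:
    y[n] = a0*x[n] + b1*y[n-1] + b2*y[n-2]."""
    r = np.exp(-np.pi * bw / fs)      # pole radius from bandwidth
    theta = 2.0 * np.pi * freq / fs   # pole angle from center frequency
    b1, b2 = 2.0 * r * np.cos(theta), -r * r
    a0 = 1.0 - b1 - b2                # normalize gain at DC
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = a0 * x[n]
        if n >= 1:
            y[n] += b1 * y[n - 1]
        if n >= 2:
            y[n] += b2 * y[n - 2]
    return y

# Illustrative: impose a formant near 500 Hz (60 Hz bandwidth) on a pulse train
pulses = np.zeros(1600)
pulses[::160] = 1.0   # 100 Hz source at fs = 16 kHz
formant = resonator(pulses, freq=500.0, bw=60.0)
```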

Implementation of Continuous Utterance Using Buffer Rearrangement for an Articulatory Synthesizer (조음 음성 합성기에서 버퍼 재정렬을 이용한 연속음 구현)

  • Lee, Hui-Sung;Chung, Myung-Jin
    • Proceedings of the KIEE Conference / 2002.07d / pp.2454-2456 / 2002
  • Since articulatory synthesis models the human vocal organs as precisely as possible, it is potentially the most desirable method for producing various words and languages. This paper proposes a new type of articulatory synthesizer using the Mermelstein vocal tract model and a Kelly-Lochbaum digital filter. Previous research has assumed that the length of the vocal tract, and hence the number of its cross sections, does not vary during an utterance. However, continuous utterances cannot easily be implemented under this assumption. In this paper, the limitation is overcome by "Buffer Rearrangement" for a dynamic vocal tract.

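A Kelly-Lochbaum filter models the tract as concatenated tube sections whose waves scatter at each junction, with reflection coefficients computed from adjacent cross-sectional areas. A compact sketch with simplified, assumed boundary reflections; the paper's "buffer rearrangement" concerns how the `fwd`/`bwd` wave buffers below would be remapped when the number of sections changes mid-utterance:

```python
import numpy as np

def kelly_lochbaum(areas, excitation, r_glottis=0.99, r_lips=-0.99):
    """Vocal tract as len(areas) tube sections. Forward and backward
    waves advance one section per sample and scatter at each junction."""
    n = len(areas)
    # reflection coefficient at the junction between sections i and i+1
    k = [(areas[i + 1] - areas[i]) / (areas[i + 1] + areas[i])
         for i in range(n - 1)]
    fwd, bwd = np.zeros(n), np.zeros(n)
    out = []
    for e in excitation:
        nf, nb = np.empty(n), np.empty(n)
        nf[0] = e + r_glottis * bwd[0]          # glottal-end reflection
        for i in range(n - 1):                  # scattering junctions
            nf[i + 1] = (1 + k[i]) * fwd[i] - k[i] * bwd[i + 1]
            nb[i] = k[i] * fwd[i] + (1 - k[i]) * bwd[i + 1]
        nb[n - 1] = r_lips * fwd[n - 1]         # lip-end reflection
        out.append((1 + r_lips) * fwd[n - 1])   # radiated output at the lips
        fwd, bwd = nf, nb
    return np.array(out)
```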

Knowledge based Text to Facial Sequence Image System for Interaction of Lecturer and Learner in Cyber Universities (가상대학에서 교수자와 학습자간 상호작용을 위한 지식기반형 문자-얼굴동영상 변환 시스템)

  • Kim, Hyoung-Geun;Park, Chul-Ha
    • The KIPS Transactions:PartB / v.15B no.3 / pp.179-188 / 2008
  • In this paper, a knowledge-based text-to-facial-sequence-image system for interaction between lecturer and learner in cyber universities is studied. The system synthesizes facial image sequences whose lip motion is synchronized to the text, based on the grammatical characteristics of Hangul. For the implementation, we propose a method for transforming text into phoneme codes, deformation rules for mouth shapes according to those phoneme codes, and a method for synthesizing facial image sequences using the deformation rules. In the proposed method, all Hangul syllables are represented by 10 principal mouth shapes and 78 compound mouth shapes, according to the pronunciation characteristics of the basic consonants and vowels and the characteristics of the articulation rules, respectively. To synthesize facial image sequences in real time on a PC, the 88 mouth shapes stored in a database are used instead of synthesizing a mouth shape for every frame. To verify the validity of the proposed method, various facial image sequences transformed from text are synthesized, and a system that runs on a PC is implemented using the proposed method.
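
The core of such a pipeline is a lookup from phoneme codes to stored mouth shapes, applied per frame. A schematic sketch; the phoneme-to-shape table below is a hypothetical placeholder, whereas the paper's actual inventory comprises 10 principal and 78 compound mouth shapes:

```python
# Hypothetical mini viseme table (placeholder entries); the paper's real
# inventory has 10 principal and 78 compound mouth shapes for Hangul.
VISEME = {"a": 0, "i": 1, "u": 2, "m": 3, "p": 3, "s": 4}

def text_to_viseme_sequence(phonemes, frames_per_phone=4):
    """Map a phoneme-code sequence to a per-frame mouth-shape index
    sequence; frames are then served from the mouth-shape database
    rather than synthesized individually."""
    seq = []
    for ph in phonemes:
        seq.extend([VISEME.get(ph, 0)] * frames_per_phone)
    return seq

# Illustrative: a syllable like /mam/ -> frame-wise mouth-shape indices
frames = text_to_viseme_sequence(["m", "a", "m"])
```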