• Title/Summary/Keyword: 음소

Search Results: 529

A Decision Tree-based Reduction of Speech DB in a Large Corpus-based Korean TTS (대용량 한국어 TTS의 결정트리기반 음성 DB 감축 방안)

  • Lee, Jung-Chul
    • Journal of the Korea Society of Computer and Information
    • /
    • v.15 no.7
    • /
    • pp.91-98
    • /
    • 2010
  • Large corpus-based concatenative Text-to-Speech (TTS) systems can generate natural synthetic speech without additional signal processing. Because improving the naturalness, personality, speaking style, and emotional expressiveness of synthetic speech requires enlarging the speech DB, it is necessary to prune redundant speech segments from a large segmental speech DB. In this paper, we propose a new method of constructing a segmental speech DB for a Korean TTS system, based on a clustering algorithm that downsizes the DB. For the performance test, synthetic speech was generated with a Korean TTS system consisting of a language processing module, prosody processing module, segment selection module, speech concatenation module, and segmental speech DB. A MOS test was then conducted on sets of synthetic speech generated with four different segmental speech DBs, constructed by combining the CM1 (or CM2) tree clustering method with the full (or reduced) DB. Experimental results show that the proposed method reduces the size of the speech DB by 23% while achieving high MOS scores in the perception test. Therefore the proposed method can be applied to build a small-footprint TTS system.
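The abstract gives no implementation details, but the core idea of pruning a segment DB by keeping one representative per group of acoustically similar segments can be sketched roughly as follows. This is a minimal illustration, not the paper's CM1/CM2 tree method; the 2-D feature vectors and the distance threshold are made up for the example.

```python
# Hypothetical sketch: prune a segment DB by greedy clustering,
# keeping a segment only when it differs enough from all kept ones.

def prune_segment_db(segments, threshold):
    """Keep a segment only if it is farther than `threshold`
    from every representative kept so far (greedy clustering)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    kept = []
    for seg in segments:
        if all(dist(seg, rep) > threshold for rep in kept):
            kept.append(seg)
    return kept

# Toy feature vectors standing in for acoustic segment features.
db = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0), (9.0, 1.0)]
reduced = prune_segment_db(db, threshold=1.0)
print(len(reduced))  # 3 representatives remain
```

A real system would cluster per triphone context and pick centroids, but the size/quality trade-off controlled by the threshold is the same.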

Conformer with lexicon transducer for Korean end-to-end speech recognition (Lexicon transducer를 적용한 conformer 기반 한국어 end-to-end 음성인식)

  • Son, Hyunsoo;Park, Hosung;Kim, Gyujin;Cho, Eunsoo;Kim, Ji-Hwan
    • The Journal of the Acoustical Society of Korea
    • /
    • v.40 no.5
    • /
    • pp.530-536
    • /
    • 2021
  • Recently, thanks to advances in deep learning, end-to-end speech recognition, which directly maps speech signals to graphemes, shows good performance. Among end-to-end models, the conformer performs best. However, end-to-end models focus only on the probability of which grapheme will appear at each time step, and the decoding process uses a greedy search or beam search that is easily swayed by the model's final probability output. In addition, end-to-end models cannot exploit external pronunciation and language information due to structural constraints. Therefore, in this paper a conformer with a lexicon transducer is proposed. We compare a phoneme-based model with a lexicon transducer against a grapheme-based model with beam search, on a test set consisting of words that do not appear in the training data. The grapheme-based conformer with beam search achieves a CER of 3.8 %; the phoneme-based conformer with the lexicon transducer achieves 3.4 %.
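The greedy decoding that the abstract contrasts with beam search and the lexicon transducer can be illustrated in a few lines. This is a generic CTC-style sketch, not the paper's conformer pipeline; the per-frame grapheme probabilities below are invented for the example.

```python
# Illustrative greedy decoding over per-frame grapheme posteriors:
# take the argmax per frame, then collapse repeats and drop blanks.

def greedy_decode(frame_probs, blank="-"):
    """Pick the most likely symbol per frame, then collapse
    consecutive repeats and remove the blank symbol."""
    best = [max(p, key=p.get) for p in frame_probs]
    out, prev = [], None
    for sym in best:
        if sym != prev and sym != blank:
            out.append(sym)
        prev = sym
    return "".join(out)

frames = [
    {"c": 0.6, "a": 0.3, "-": 0.1},
    {"c": 0.5, "a": 0.4, "-": 0.1},
    {"-": 0.7, "a": 0.2, "c": 0.1},
    {"a": 0.8, "t": 0.1, "-": 0.1},
    {"t": 0.9, "a": 0.05, "-": 0.05},
]
print(greedy_decode(frames))  # "cat"
```

Because each frame's decision is final, a single misleading posterior changes the output; beam search and lexicon constraints exist precisely to soften this.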

Automatic severity classification of dysarthria using voice quality, prosody, and pronunciation features (음질, 운율, 발음 특징을 이용한 마비말장애 중증도 자동 분류)

  • Yeo, Eun Jung;Kim, Sunhee;Chung, Minhwa
    • Phonetics and Speech Sciences
    • /
    • v.13 no.2
    • /
    • pp.57-66
    • /
    • 2021
  • This study addresses automatic severity classification of dysarthric speakers based on speech intelligibility. Speech intelligibility is a complex measure affected by features from multiple speech dimensions; however, most previous studies are restricted to features from a single dimension. To effectively capture the characteristics of the speech disorder, we extracted features from multiple speech dimensions: voice quality, prosody, and pronunciation. Voice quality comprises jitter, shimmer, Harmonics-to-Noise Ratio (HNR), number of voice breaks, and degree of voice breaks. Prosody includes speech rate (total duration, speech duration, speaking rate, articulation rate), pitch (F0 mean/std/min/max/median/25th quartile/75th quartile), and rhythm (%V, deltas, Varcos, rPVIs, nPVIs). Pronunciation covers the Percentage of Correct Phonemes (Percentage of Correct Consonants/Vowels/Total Phonemes) and the degree of vowel distortion (Vowel Space Area, Formant Centralization Ratio, Vowel Articulatory Index, F2-Ratio). Experiments were conducted with various feature combinations. The results indicate that using features from all three speech dimensions gives the best result, an 80.15 F1-score, compared with features from only one or two dimensions. This implies that voice quality, prosody, and pronunciation features should all be considered in automatic severity classification of dysarthria.
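Two of the voice-quality measures the study lists, local jitter and local shimmer, have standard formulas that are easy to show. This is a hedged sketch using the common relative-local definitions (mean absolute difference of consecutive cycles over the mean); the period and amplitude values are synthetic, not the study's data.

```python
# Local jitter/shimmer: mean absolute cycle-to-cycle difference,
# normalized by the mean value over all cycles.

def local_jitter(periods):
    """Relative local jitter over a sequence of pitch periods (s)."""
    diffs = [abs(a - b) for a, b in zip(periods, periods[1:])]
    return (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

def local_shimmer(amps):
    """Same formula applied to per-cycle peak amplitudes."""
    diffs = [abs(a - b) for a, b in zip(amps, amps[1:])]
    return (sum(diffs) / len(diffs)) / (sum(amps) / len(amps))

periods = [0.010, 0.0102, 0.0099, 0.0101]  # seconds per cycle
amps = [0.80, 0.78, 0.82, 0.79]            # peak amplitude per cycle
print(round(local_jitter(periods), 4))     # ~0.0232
print(round(local_shimmer(amps), 4))       # ~0.0376
```

Higher values indicate less stable phonation, which is why these measures track dysarthria severity.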

The perception and production of Korean vowels by Egyptian learners (이집트인 학습자의 한국어 모음 지각과 산출)

  • Benjamin, Sarah;Lee, Ho-Young
    • Phonetics and Speech Sciences
    • /
    • v.13 no.4
    • /
    • pp.23-34
    • /
    • 2021
  • This study aims to discuss how Egyptian learners of Korean perceive and categorize Korean vowels, how Koreans perceive the Korean vowels these learners pronounce, and how the learners' Korean vowel categorization affects their perception and production of Korean vowels. In Experiment 1, 53 Egyptian learners were asked to listen to Korean test words pronounced by Koreans and choose the word they had heard among 4 confusable words. In Experiment 2, 117 sound files (13 test words × 9 Egyptian learners) recorded by Egyptian learners were played to Koreans, who were asked to select the word they had heard among 4 confusable words. The results show that "new" Korean vowels, which have no categorizable counterpart in Egyptian Arabic, easily formed new categories and were therefore well identified in perception and relatively well pronounced, although some were poorly produced. Conversely, Egyptian learners distinguished "similar" Korean vowels poorly in perception, yet their pronunciations of these vowels were relatively well identified by native Koreans. Based on these results, we argue that the Speech Learning Model (SLM) and the Perceptual Assimilation Model (PAM) explain L2 speech perception well but are insufficient to explain L2 speech production, and therefore need to be revised and extended to cover L2 speech production.

A Performance Improvement Method using Variable Break in Corpus Based Japanese Text-to-Speech System (가변 Break를 이용한 코퍼스 기반 일본어 음성 합성기의 성능 향상 방법)

  • Na, Deok-Su;Min, So-Yeon;Lee, Jong-Seok;Bae, Myung-Jin
    • The Journal of the Acoustical Society of Korea
    • /
    • v.28 no.2
    • /
    • pp.155-163
    • /
    • 2009
  • In text-to-speech systems, the conversion of text into prosodic parameters comprises three steps: the placement of prosodic boundaries, the determination of segmental durations, and the specification of fundamental frequency contours. Prosodic boundaries, as the most important and basic parameters, affect the estimation of durations and fundamental frequency. Break prediction is an important step in text-to-speech systems because break indices (BIs) strongly influence how correctly prosodic phrase boundaries are represented. However, accurate prediction is difficult, since BIs are often chosen according to the meaning of a sentence or the reading style of the speaker. In Japanese, predicting accentual phrase boundaries (APB) and major phrase boundaries (MPB) is particularly difficult. Thus, this paper presents a method to compensate for APB and MPB prediction errors. First, we define a subtle BI, for which it is difficult to decide clearly between an APB and an MPB, as a variable break (VB), and an explicit BI as a fixed break (FB). The VB is identified using a classification and regression tree, and multiple prosodic targets for pitch and duration are then generated. Finally, unit selection is conducted using the multiple prosodic targets. In the MOS test, the original speech scored 4.99, while the proposed method scored 4.25 and the conventional method 4.01. The experimental results show that the proposed method improves the naturalness of synthesized speech.
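The variable-break idea can be sketched as a simple confidence rule: when a classifier cannot clearly separate APB from MPB, keep both candidates and let unit selection choose. This is an illustration of the concept only; the probabilities and the margin threshold are invented, not the paper's CART output.

```python
# Hedged sketch: label a break as fixed (FB) when one boundary type
# clearly wins, otherwise as variable (VB) carrying both candidate
# prosodic targets for the unit-selection stage.

def classify_break(p_apb, p_mpb, margin=0.2):
    """Return ('FB', [label]) for a confident decision, or
    ('VB', both candidates) when the margin is too small."""
    if abs(p_apb - p_mpb) >= margin:
        label = "APB" if p_apb > p_mpb else "MPB"
        return ("FB", [label])
    return ("VB", ["APB", "MPB"])  # unit selection sees both targets

print(classify_break(0.9, 0.1))    # ('FB', ['APB'])
print(classify_break(0.55, 0.45))  # ('VB', ['APB', 'MPB'])
```

Deferring ambiguous decisions to unit selection, which can score both targets against the corpus, is what lets the method absorb APB/MPB prediction errors.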

A DB Pruning Method in a Large Corpus-Based TTS with Multiple Candidate Speech Segments (대용량 복수후보 TTS 방식에서 합성용 DB의 감량 방법)

  • Lee, Jung-Chul;Kang, Tae-Ho
    • The Journal of the Acoustical Society of Korea
    • /
    • v.28 no.6
    • /
    • pp.572-577
    • /
    • 2009
  • Large corpus-based concatenative Text-to-Speech (TTS) systems can generate natural synthetic speech without additional signal processing. To prune redundant speech segments from a large speech segment DB, we can utilize the decision-tree-based triphone clustering algorithm widely used in speech recognition. However, conventional methods have problems in representing the acoustic transitional characteristics of phones and in applying context questions with hierarchical priority. In this paper, we propose a new clustering algorithm to downsize the speech DB. First, three 13th-order MFCC vectors from the first, medial, and final frames of a phone are combined into a 39-dimensional vector representing the phone's transitional characteristics. Then three hierarchically grouped question sets are used to construct the triphone trees. For the performance test, we used the DTW algorithm to calculate the acoustic similarity between the target triphone and the triphone returned by the tree search. Experimental results show that the proposed method reduces the size of the speech DB by 23% and selects phones with higher acoustic similarity. Therefore the proposed method can be applied to build a small-footprint TTS system.
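Two computable steps the abstract names, stacking three 13th-order MFCC frames into a 39-dimensional vector and measuring acoustic similarity with DTW, can be sketched directly. MFCC extraction itself is assumed done elsewhere; the vectors and sequences below are toy values, and the DTW here is the textbook version, not necessarily the paper's exact variant.

```python
# Sketch: 39-dim transition vector + plain DTW distance.

def stack_transition_vector(first, medial, final):
    """Concatenate three 13-dim MFCC frames (first/medial/final
    frame of a phone) into one 39-dim transition vector."""
    assert len(first) == len(medial) == len(final) == 13
    return first + medial + final

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) DTW over 1-D feature sequences."""
    inf = float("inf")
    d = [[inf] * (len(b) + 1) for _ in range(len(a) + 1)]
    d[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[len(a)][len(b)]

v = stack_transition_vector([0.0] * 13, [1.0] * 13, [2.0] * 13)
print(len(v))                                 # 39
print(dtw_distance([1, 2, 3], [1, 2, 2, 3]))  # 0.0 (b stretches a)
```

DTW's time warping is what makes the similarity check tolerant of phones with different durations.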

Development of Text-to-Speech System for PC (PC용 Text-to-Speech 시스템 개발)

  • Choi Muyeol;Hwang Cholgyu;Kim Soontae;Kim Junggon;Yi Sopae;Jang Seokbok;Pyo Kyungnan;Ahn Hyesun;Kim Hyung Soon
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • autumn
    • /
    • pp.41-44
    • /
    • 1999
  • In this paper, we develop a high-quality Korean text-to-speech (TTS) synthesis system for PC applications. For synthesis, the system adopts a sinusoidal-model-based approach, which has advantages over the conventional PSOLA method in pitch control, smoothing of junctions between adjacent sounds, and timbre control. For natural prosody modeling, the statistical Classification and Regression Tree (CART) method is used. To reduce discontinuities at phoneme boundaries, onset-nucleus and coda units are used as the synthesis units, and a timbre-control function enables varied voice characteristics. The system is implemented as a TTS engine conforming to the standard Speech Application Program Interface (SAPI), making it easier to develop application programs on a PC. Listening evaluations of the synthesized speech confirmed its sound quality and the effectiveness of the timbre-control function.

Performance Improvement of Vocabulary Independent Speech Recognizer using Back-Off Method on Subword Model (음소 모델의 Back-Off 기법을 이용한 어휘독립 음성인식기의 성능개선)

  • Koo Dong-Ook;choi Joon Ju;Oh Yung-Hwan
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • spring
    • /
    • pp.19-22
    • /
    • 2000
  • Vocabulary-independent speech recognition means recognizing words that were not used in acoustic model training. A vocabulary-independent recognizer based on word models builds each word model by concatenating context-dependent subword models, trained in advance, according to the word's phonetic transcription, and performs recognition with these word models. In such a system, context-dependent subwords that never appeared in training can occur in the target vocabulary, so an accurate word model cannot be constructed. In this paper, we propose a back-off selection method, based on a hierarchical organization of context-dependent subword classes, that finds a substitute subword model to connect in place of an unseen context-dependent subword. The proposed method selects a higher-level (less specific) subword model that subsumes the unseen context-dependent subword. Experiments yielded recognition rates of 97.5% on a 10-word set, 90.16% on a 50-word set, and 82.08% on a 100-word set.
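The back-off lookup described above can be sketched as a fallback through progressively less specific model keys. The exact hierarchy (triphone, then diphone, then monophone) and the key notation are illustrative assumptions, not the paper's definition.

```python
# Hedged sketch: fall back from a full triphone model to less
# context-specific models until a trained one is found.

def backoff_model(trained, left, base, right):
    """Return the most specific trained model key covering `base`
    in the given left/right phonetic context."""
    for key in (f"{left}-{base}+{right}",  # full triphone
                f"{base}+{right}",         # right-context diphone
                f"{left}-{base}",          # left-context diphone
                base):                     # context-independent
        if key in trained:
            return key
    raise KeyError(f"no model covers {base}")

trained = {"k-a+n", "a+n", "a", "n"}
print(backoff_model(trained, "k", "a", "n"))  # 'k-a+n' (exact hit)
print(backoff_model(trained, "t", "a", "n"))  # 'a+n' (backed off)
```

Any ordering of the fallback chain encodes which context is considered more important; the paper's contribution is choosing that hierarchy well.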

English Pronunciation and Listening Education in Elementary School (초등학교에서의 영어 발음 및 청취 교육)

  • 정인교
    • Proceedings of the KSPS conference
    • /
    • 1997.07a
    • /
    • pp.248-248
    • /
    • 1997
  • When we ask how English education has actually been carried out under the legitimate and universally valid goal, explicitly stated in the national curriculum, of teaching the four skills (listening, speaking, reading, writing), much is left to be desired. It is an undeniable fact that many people who have completed six years of secondary-school English, and even a year or two more at university, can understand written English at a considerable level, sometimes to the surprise of native speakers, yet find it hard to understand even very simple English by ear, to say nothing of speaking it. The reason is clear: when reading, the form of the visual stimulus matches the information stored in the brain (formidable grammatical knowledge), so comprehension is easy; when listening, the form of the auditory stimulus differs from the stored information (a highly incomplete pronunciation lexicon, or English pronunciation filtered through the phonological system of the native language). Therefore, at least for communication through spoken language, the most important task is to acquire (habitualize) native English pronunciation accurately, or at least very closely, and store it in the brain. If English teachers, building on accurate and detailed knowledge of the phonological system of the native language, teach pronunciation through "linguistically significant" contrastive analysis with the English phonological system, better learning outcomes can be expected. In general, native-language pronunciation interferes with foreign-language pronunciation in the following cases: 1. when the segmental systems differ; 2. when a phoneme of one language is an allophone of the other; 3. when similar sounds differ in place or manner of articulation; 4. when the distribution or arrangement of segments differs; 5. when the phonological processes differ; 6. when the rhythm of the languages differs. Contrasting the pronunciation characteristics of English and Korean around these six cases, so as to minimize "foreign accent" and pronunciation errors, is the English teacher's primary goal.

A study on extraction of the frames representing each phoneme in continuous speech (연속음에서의 각 음소의 대표구간 추출에 관한 연구)

  • 박찬응;이쾌희
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.33B no.4
    • /
    • pp.174-182
    • /
    • 1996
  • In a continuous speech recognition system, it is possible to handle an unlimited number of words by using a limited number of phonetic units such as phonemes. Dividing continuous speech into a string of phonemes prior to recognition can lower the complexity of the system, but because of coarticulation between neighboring phonemes, it is very difficult to extract their boundaries exactly. In this paper, we propose an algorithm that extracts short terms representing each phoneme instead of extracting phoneme boundaries. Short terms of lower and higher spectral change are detected. Phoneme changes are then detected using a distance measure on the low-spectral-change terms, while the high-spectral-change terms are regarded as transition terms or short phoneme terms. Finally, the low-spectral-change terms and the mid-points of the high-spectral-change terms are taken as representing each phoneme. Cepstral coefficients and a weighted cepstral distance are used as the speech feature and distance measure because of their low computational complexity, and the speech data used in the experiment was recorded in quiet and ordinary indoor environments. Experimental results show that the proposed algorithm achieves higher performance with less computation than conventional segmentation algorithms, and it can be usefully applied in phoneme-based continuous speech recognition.
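The frame-scoring step the abstract describes, measuring spectral change with a weighted cepstral distance between adjacent frames and marking stable regions, can be sketched as follows. The cepstra, weights, and threshold are synthetic; the paper's actual feature order and weighting are not specified here.

```python
# Sketch: mark frames whose weighted cepstral distance to the next
# frame is small, i.e. spectrally stable regions that can serve as
# representative terms for a phoneme.

def weighted_cepstral_dist(c1, c2, weights):
    """Weighted Euclidean distance between two cepstral vectors."""
    return sum(w * (a - b) ** 2
               for w, a, b in zip(weights, c1, c2)) ** 0.5

def low_change_frames(cepstra, weights, threshold):
    """Indices of frames with low spectral change to the next frame."""
    return [i for i in range(len(cepstra) - 1)
            if weighted_cepstral_dist(cepstra[i], cepstra[i + 1],
                                      weights) < threshold]

weights = [1.0, 1.0, 1.0]
cepstra = [[0.00, 0.0, 0.0], [0.05, 0.0, 0.0],   # stable region
           [1.00, 1.0, 1.0], [1.02, 1.0, 1.0]]   # new stable region
print(low_change_frames(cepstra, weights, threshold=0.1))  # [0, 2]
```

The large jump between frames 1 and 2 would be treated as a transition term in the paper's scheme, with its mid-point kept if the phoneme is short.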