• Title/Summary/Keyword: 강제 음성 정렬 (forced speech alignment)

CRNN-Based Korean Phoneme Recognition Model with CTC Algorithm (CTC를 적용한 CRNN 기반 한국어 음소인식 모델 연구)

  • Hong, Yoonseok; Ki, Kyungseo; Gweon, Gahgene
    • KIPS Transactions on Software and Data Engineering, v.8 no.3, pp.115-122, 2019
  • For Korean phoneme recognition, hidden Markov model-Gaussian mixture model (HMM-GMM) systems or hybrid models that combine artificial neural networks with HMMs have mainly been used. However, these approaches are limited in that they require force-aligned corpus training data that is manually annotated by experts. Recently, researchers have used neural-network-based phoneme recognition models that combine a recurrent neural network (RNN) structure with the connectionist temporal classification (CTC) algorithm to overcome the problem of obtaining manually annotated training data. Yet, in terms of implementation, these RNN-based models have another difficulty: the amount of data required grows as the structure becomes more sophisticated. This need for large datasets is particularly problematic for Korean, which lacks refined corpora. In this study, we apply the CTC algorithm, which does not require forced alignment, to create a Korean phoneme recognition model. Specifically, the phoneme recognition model is based on a convolutional neural network (CNN), which requires relatively little data and can be trained faster than RNN-based models. We present the results of two experiments and the resulting best-performing phoneme recognition model, which distinguishes 49 Korean phonemes. The best-performing model combines a CNN with a 3-hop bidirectional LSTM and achieves a final phoneme error rate (PER) of 3.26. This PER is a considerable improvement over existing Korean phoneme recognition models, which report PERs ranging from 10 to 12.
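A minimal sketch, assuming PyTorch, of the general CNN + bidirectional LSTM + CTC setup described in the abstract above. The layer sizes, feature dimensions, and number of LSTM layers are illustrative assumptions, not the authors' configuration; the point is that CTC loss only needs the unaligned phoneme label sequence, so no frame-level forced alignment is required.

```python
# Illustrative sketch (not the authors' implementation): a CNN front-end
# followed by a bidirectional LSTM, trained with CTC loss so that no
# frame-level forced alignment of the transcripts is needed.
import torch
import torch.nn as nn

NUM_PHONEMES = 49               # Korean phoneme inventory size from the abstract
NUM_CLASSES = NUM_PHONEMES + 1  # +1 for the CTC blank symbol

class CnnBiLstmCtc(nn.Module):
    def __init__(self, n_mels=80, hidden=256):
        super().__init__()
        # CNN front-end over (batch, 1, time, n_mels) spectrogram input
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Bidirectional LSTM over the time axis (depth is an assumption)
        self.rnn = nn.LSTM(32 * n_mels, hidden, num_layers=3,
                           bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * hidden, NUM_CLASSES)

    def forward(self, x):                     # x: (batch, time, n_mels)
        h = self.cnn(x.unsqueeze(1))          # (batch, 32, time, n_mels)
        h = h.permute(0, 2, 1, 3).flatten(2)  # (batch, time, 32*n_mels)
        h, _ = self.rnn(h)
        return self.fc(h).log_softmax(-1)     # (batch, time, classes)

# CTC loss takes only the unaligned phoneme label sequence per utterance.
model = CnnBiLstmCtc()
ctc = nn.CTCLoss(blank=NUM_PHONEMES, zero_infinity=True)
feats = torch.randn(4, 120, 80)                    # dummy feature batch
logp = model(feats).transpose(0, 1)                # (time, batch, classes)
targets = torch.randint(0, NUM_PHONEMES, (4, 30))  # dummy phoneme labels
loss = ctc(logp, targets,
           input_lengths=torch.full((4,), 120, dtype=torch.long),
           target_lengths=torch.full((4,), 30, dtype=torch.long))
loss.backward()
```

At decoding time, collapsing repeated symbols and removing blanks from the per-frame argmax (or using a beam search) yields the phoneme sequence against which the PER is computed.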

Patterns of categorical perception and response times in the matrix scope interpretation of embedded wh-phrases in Gyeongsang Korean (경상 방언 내포문 의문사의 작용역 범주 지각 양상과 반응 속도 연구)

  • Weonhee Yun
    • Phonetics and Speech Sciences, v.15 no.2, pp.1-11, 2023
  • This study investigated the response times and patterns of categorical perception of the wh-scope of an embedded clause containing the non-bridge verb "gung-geum hada" ('wonder') in the matrix verb phrase in Gyeongsang Korean. Using the same procedure as Yun (2022), 72 responses and response times for each stimulus were collected from 24 participants over three trials. The stimuli were recorded readings by 40 speakers (20 male, 20 female). Context was provided to induce a matrix-scope interpretation of the embedded wh-phrase in the target sentence. We sorted the 40 stimuli by the number of matrix-scope responses each received and charted the response times for each stimulus. Although there was considerable overlap between the different types of wh-scope interpretation, there was a clear difference in categorical perception between the matrix and embedded scopes. The 24 participants also differed in their categorical perception. The results suggest that response time and wh-scope interpretation were not directly related, and that two main weighted factors affected wh-scope interpretation: morpho-syntactic constraints and prosodic structural integrity. The weightings of these two factors were inversely correlated and varied among subjects.
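A hedged sketch, assuming pandas and a hypothetical long-format trial table (columns stimulus, participant, scope, rt_ms), of the kind of aggregation the abstract above describes: counting matrix-scope responses per stimulus, sorting the stimuli by that count, and summarizing response times. The column names and values are illustrative assumptions, not the study's actual data format.

```python
# Illustrative analysis sketch (column names are assumptions, not the
# study's data format): count matrix-scope responses per stimulus,
# sort stimuli by that count, and summarize response times.
import pandas as pd

# Hypothetical long-format trial table: one row per response.
trials = pd.DataFrame({
    "stimulus":    ["s01", "s01", "s02", "s02"],
    "participant": ["p01", "p02", "p01", "p02"],
    "scope":       ["matrix", "embedded", "matrix", "matrix"],
    "rt_ms":       [812, 1033, 744, 901],
})

per_stimulus = (
    trials.groupby("stimulus")
          .agg(matrix_responses=("scope", lambda s: (s == "matrix").sum()),
               mean_rt_ms=("rt_ms", "mean"))
          .sort_values("matrix_responses", ascending=False)
)
print(per_stimulus)
```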

Speech Animation Synthesis based on a Korean Co-articulation Model (한국어 동시조음 모델에 기반한 스피치 애니메이션 생성)

  • Jang, Minjung; Jung, Sunjin; Noh, Junyong
    • Journal of the Korea Computer Graphics Society, v.26 no.3, pp.49-59, 2020
  • In this paper, we propose a speech animation synthesis method specialized for Korean, based on a rule-based co-articulation model. Speech animation is widely used in cultural industries such as movies, animation, and games that require natural and realistic motion. However, because audio-driven speech animation techniques have been developed mainly for English, the animation results for domestic (Korean) content are often visually unnatural; for example, a voice actor's dubbing is played with no mouth motion at all, or at best with an unsynchronized loop of simple mouth shapes. Although there are language-independent speech animation models that are not specialized for Korean, they do not yet provide the quality required for domestic content production. We therefore propose a natural speech animation synthesis method, driven by input audio and text, that reflects the linguistic characteristics of Korean. Reflecting the fact that vowels largely determine the mouth shape in Korean, we define a co-articulation model that separates the lips and the tongue, which solves the previous problems of lip distortion and occasionally missing phoneme characteristics. Our model also reflects differences in prosodic features for improved dynamics in speech animation. Through user studies, we verify that the proposed model can synthesize natural speech animation.
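A minimal sketch, in Python, of the general idea of a rule-based co-articulation model that separates lip and tongue channels and lets vowels dominate the mouth shape, as described in the abstract above. The phoneme-to-pose tables and the blending weights are illustrative assumptions and not the model proposed in the paper.

```python
# Illustrative sketch of a rule-based co-articulation idea (not the paper's
# model): vowels drive the lip pose, while consonants mainly contribute a
# tongue pose and only weakly modify the lips.
from dataclasses import dataclass

@dataclass
class Pose:
    lips: float    # 0.0 = closed ... 1.0 = wide open (toy 1-D parameter)
    tongue: float  # 0.0 = low/back ... 1.0 = high/front (toy 1-D parameter)

# Hypothetical per-phoneme target poses (romanized Korean phonemes).
VOWEL_POSE = {"a": Pose(1.0, 0.2), "i": Pose(0.4, 0.9), "u": Pose(0.3, 0.3)}
CONSONANT_POSE = {"n": Pose(0.2, 0.8), "k": Pose(0.3, 0.2), "m": Pose(0.0, 0.4)}

def coarticulate(prev_vowel: str, consonant: str, next_vowel: str) -> Pose:
    """Blend a consonant frame between its neighboring vowels.

    Rule of thumb encoded here: the lips follow the surrounding vowels,
    while the tongue follows the consonant's own articulation target.
    """
    v_prev, v_next = VOWEL_POSE[prev_vowel], VOWEL_POSE[next_vowel]
    c = CONSONANT_POSE[consonant]
    lips = 0.4 * v_prev.lips + 0.4 * v_next.lips + 0.2 * c.lips
    tongue = 0.8 * c.tongue + 0.1 * v_prev.tongue + 0.1 * v_next.tongue
    return Pose(round(lips, 3), round(tongue, 3))

# Example: /ana/ -> the /n/ frame keeps open lips but a high tongue.
print(coarticulate("a", "n", "a"))
```

Separating the lip and tongue channels in this way is what lets a consonant keep its own articulation target without distorting the vowel-driven lip shape, which is the failure mode the abstract attributes to earlier approaches.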