• Title/Summary/Keyword: 자소-음소 대응규칙 (grapheme-phoneme correspondence rules)

Search results: 3

The Development of Grapheme-Phoneme Correspondence Rules and Kulja Reading in Korean-Chinese Children (중국 조선족 아동의 한글 자소-음소 대응능력의 발달과 글자읽기와의 관계에 관한 연구)

  • Yoon, Hyekyung; Park, Hyewon
    • Korean Journal of Child Studies / v.26 no.4 / pp.145-155 / 2005
  • This study was carried out to reveal the Hangul acquisition process in Korean-Chinese children who grow up in a horizontal bilingual environment. Grapheme substitution/deletion tasks and sensible/non-sensible Kulja (syllable-block) reading tasks were administered to 3-, 4-, 5-, and 6-year-old Korean-Chinese children. The Korean-Chinese children showed Hangul acquisition patterns similar to those of Korean children but acquired grapheme-phoneme (G-P) correspondence earlier. Hangul acquisition rates were 41.7%, 45.7%, 53%, and 92.7% at ages 3, 4, 5, and 6, respectively. Both Korean-Chinese and Korean children were more sensitive to the final consonant than to the initial and middle consonants. The correlation between phoneme perception and reading was significant only among 6-year-olds, and only on the non-sensible Kulja reading tasks. Training in converting ideographic Chinese characters into a phonetic system may explain the earlier acquisition of G-P correspondence in Korean-Chinese children. (A sketch of how a Kulja splits into initial, middle, and final graphemes follows this entry.)

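The tasks above probe sensitivity to the initial, middle, and final consonant positions, which correspond to the three grapheme slots of every Kulja (precomposed Hangul syllable block). Below is a minimal sketch, not taken from the study, of how a syllable block decomposes into its jamo graphemes using the standard Unicode arithmetic; the example syllable '강' is chosen purely for illustration.

```python
# Minimal sketch: decompose a precomposed Hangul syllable (Kulja) into its
# initial, medial, and final jamo graphemes via standard Unicode arithmetic.
INITIALS = [chr(0x1100 + i) for i in range(19)]          # choseong (initial consonants)
MEDIALS  = [chr(0x1161 + i) for i in range(21)]          # jungseong (vowels)
FINALS   = [""] + [chr(0x11A8 + i) for i in range(27)]   # jongseong (final consonants), "" = none

def decompose(syllable: str):
    """Split one precomposed Hangul syllable (U+AC00..U+D7A3) into its jamo."""
    code = ord(syllable) - 0xAC00
    if not 0 <= code <= 11171:
        raise ValueError(f"{syllable!r} is not a precomposed Hangul syllable")
    initial, rest = divmod(code, 21 * 28)   # 21 medials x 28 final slots per initial
    medial, final = divmod(rest, 28)
    return INITIALS[initial], MEDIALS[medial], FINALS[final]

if __name__ == "__main__":
    # '강' -> ('ᄀ', 'ᅡ', 'ᆼ'): its initial, middle, and final graphemes
    print(decompose("강"))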

Improvements of an English Pronunciation Dictionary Generator Using DP-based Lexicon Pre-processing and Context-dependent Grapheme-to-phoneme MLP (DP 알고리즘에 의한 발음사전 전처리와 문맥종속 자소별 MLP를 이용한 영어 발음사전 생성기의 개선)

  • 김회린; 문광식; 이영직; 정재호
    • The Journal of the Acoustical Society of Korea / v.18 no.5 / pp.21-27 / 1999
  • In this paper, we propose an improved MLP-based English pronunciation dictionary generator for the variable-vocabulary word recognizer. The variable-vocabulary word recognizer can process any word in a Korean word lexicon that is determined dynamically by the current recognition task. To extend the system to tasks containing English words, a pronunciation dictionary generator is needed that can handle words not included in a predefined lexicon, such as proper nouns. To build the English pronunciation dictionary generator, we use a context-dependent grapheme-to-phoneme multi-layer perceptron (MLP) for each grapheme. Training each MLP requires grapheme-to-phoneme training data derived from a general pronunciation dictionary; to automate this step, we use a dynamic programming (DP) algorithm with suitable distance metrics (a rough sketch of the alignment idea follows this entry). For training and testing the grapheme-to-phoneme MLPs, we use a general English pronunciation dictionary of about 110 thousand words. With 26 MLPs of 30 to 50 hidden nodes each, together with an exception grapheme lexicon, we obtained a word accuracy of 72.8% on the 110 thousand words, compared with 24.0% for a rule-based method.

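The DP pre-processing described above has to align each dictionary word's spelling with its phoneme string so that every grapheme receives a phoneme (or a null) target for its MLP. The following is a rough sketch of that alignment idea, not the authors' code: the costs and the letter-to-phoneme match test are placeholders rather than the paper's distance metrics, and for simplicity each grapheme is assumed to emit at most one phoneme.

```python
# Rough sketch of DP-based grapheme-to-phoneme alignment for building
# per-grapheme training targets. Costs are illustrative placeholders.
def align(graphemes, phonemes, match_cost=0, sub_cost=1, null_cost=1):
    """Edit-distance style DP; returns (grapheme, phoneme-or-'_') pairs.
    Assumes the word has at least as many letters as phonemes."""
    G, P = len(graphemes), len(phonemes)
    INF = float("inf")
    # dp[i][j]: best cost of aligning the first i graphemes with the first j phonemes
    dp = [[INF] * (P + 1) for _ in range(G + 1)]
    back = [[None] * (P + 1) for _ in range(G + 1)]
    dp[0][0] = 0
    for i in range(G):
        for j in range(P + 1):
            if dp[i][j] == INF:
                continue
            if j < P:  # grapheme i emits phoneme j (crude match test: first letters agree)
                step = match_cost if graphemes[i].lower() == phonemes[j][0].lower() else sub_cost
                if dp[i][j] + step < dp[i + 1][j + 1]:
                    dp[i + 1][j + 1], back[i + 1][j + 1] = dp[i][j] + step, (i, j)
            # grapheme i maps to the null phoneme (e.g. a silent letter)
            if dp[i][j] + null_cost < dp[i + 1][j]:
                dp[i + 1][j], back[i + 1][j] = dp[i][j] + null_cost, (i, j)
    # trace the cheapest path back from the full alignment
    pairs, i, j = [], G, P
    while (i, j) != (0, 0):
        pi, pj = back[i][j]
        pairs.append((graphemes[i - 1], phonemes[j - 1] if pj < j else "_"))
        i, j = pi, pj
    return list(reversed(pairs))

# e.g. [('m', 'M'), ('a', 'EY'), ('k', 'K'), ('e', '_')] -- silent 'e' gets the null target
print(align(list("make"), ["M", "EY", "K"]))
```

In the paper, pairs like these would then serve as input/target examples for the 26 per-grapheme, context-dependent MLPs.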

Conformer with lexicon transducer for Korean end-to-end speech recognition (Lexicon transducer를 적용한 conformer 기반 한국어 end-to-end 음성인식)

  • Son, Hyunsoo; Park, Hosung; Kim, Gyujin; Cho, Eunsoo; Kim, Ji-Hwan
    • The Journal of the Acoustical Society of Korea / v.40 no.5 / pp.530-536 / 2021
  • Recently, owing to advances in deep learning, end-to-end speech recognition, which maps speech signals directly to graphemes, has shown good performance. Among end-to-end models, the conformer performs best. However, end-to-end models consider only the probability of which grapheme appears at each time step, and the decoding process uses greedy search or beam search, which is easily swayed by the probabilities the model outputs. In addition, end-to-end models cannot use external pronunciation and language information because of their structure. Therefore, this paper proposes a conformer with a lexicon transducer. We compare a phoneme-based model with a lexicon transducer against a grapheme-based model with beam search (a toy illustration of the difference follows this entry). The test set consists of words that do not appear in the training data. The grapheme-based conformer with beam search achieves a CER of 3.8 %, while the phoneme-based conformer with the lexicon transducer achieves a CER of 3.4 %.
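Since the abstract contrasts plain greedy/beam search over grapheme posteriors with lexicon-constrained decoding, here is a toy illustration, not the paper's implementation, of why the external lexicon helps: greedy search follows the per-frame maxima and can emit a character string that is not a word, while a lexicon-constrained pass only scores spellings found in the dictionary. The frame-synchronous word scoring below is a deliberate simplification, not an actual lexicon transducer composition; the graphemes, posteriors, and lexicon are made up for the example.

```python
# Toy comparison: greedy grapheme decoding vs. a crude lexicon-constrained rescoring.
import numpy as np

GRAPHEMES = ["c", "a", "t", "r"]

def greedy_decode(posteriors):
    """Pick the most probable grapheme independently at each frame."""
    return "".join(GRAPHEMES[i] for i in posteriors.argmax(axis=1))

def lexicon_decode(posteriors, lexicon):
    """Score each in-lexicon spelling of matching length by the product of its
    per-frame grapheme posteriors and return the best-scoring entry."""
    def score(word):
        if len(word) != len(posteriors):
            return 0.0
        return float(np.prod([posteriors[t, GRAPHEMES.index(ch)]
                              for t, ch in enumerate(word)]))
    return max(lexicon, key=score)

# three frames of made-up grapheme posteriors; the last frame narrowly favours 'r'
post = np.array([[0.90, 0.05, 0.02, 0.03],
                 [0.05, 0.90, 0.02, 0.03],
                 [0.04, 0.04, 0.44, 0.48]])
print(greedy_decode(post))                  # "car" -- driven only by frame-level maxima
print(lexicon_decode(post, ["cat", "rat"])) # "cat" -- constrained to dictionary entries
```

A real lexicon transducer composes the acoustic model's output with pronunciation (and optionally language-model) FSTs rather than scoring whole words this way; the sketch only shows the effect of adding the external constraint.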