• Title/Summary/Keyword: Korean Eojeol


Analysis of the Directives and Wh-words in the Directives of Elementary Korean Textbooks (초등 국어교과서 지시문과 의문사 분석)

  • Lee, Suhyang
    • The Journal of the Korea Contents Association / v.22 no.3 / pp.134-140 / 2022
  • The purpose of this study was to investigate the directives, and the Wh-words within them, in elementary 2nd-, 4th-, and 6th-grade Korean textbooks. After all directives were entered into Microsoft Office Excel, those containing Wh-words were separated out. The analysis program Natmal was used to analyze the directives and Wh-words, and criteria from previous studies were applied in the analysis. The results showed that directives contained many nouns and verbs and consisted of sentences averaging 6.9 Eojeol. A total of 11 types of Wh-words appeared, and 'Mueot (what)', 'Eotteon (which)', and 'Eotteohge (how)' occurred most frequently in all grades. Regarding question types, every grade had more inferential questions than literal information questions. These results are expected to serve as basic data for language interventions with school-aged children who have language disorders.
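
The counting step described in this abstract is straightforward to reproduce in code. The following Python sketch only illustrates the general idea (it is not the Natmal program used in the study); the directive strings and the Wh-word list are purely hypothetical examples.

```python
# Minimal sketch of the counting step described above (not the Natmal tool itself):
# count Eojeol (space-delimited units) per directive and tally Wh-words.
from collections import Counter

# Illustrative directives and Wh-word list; the paper's actual data differ.
directives = [
    "다음 글을 읽고 무엇을 느꼈는지 써 봅시다.",
    "어떤 인물이 나오는지 말해 봅시다.",
]
wh_words = ["무엇", "어떤", "어떻게", "왜", "누구", "어디", "언제"]

# Average sentence length in Eojeol.
eojeol_counts = [len(d.split()) for d in directives]
print("Average Eojeol per directive:", sum(eojeol_counts) / len(eojeol_counts))

# Frequency of each Wh-word across all directives.
wh_tally = Counter()
for d in directives:
    for eojeol in d.split():
        for wh in wh_words:
            if eojeol.startswith(wh):
                wh_tally[wh] += 1
print(wh_tally.most_common())
```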

Two Statistical Models for Automatic Word Spacing of Korean Sentences (한글 문장의 자동 띄어쓰기를 위한 두 가지 통계적 모델)

  • 이도길;이상주;임희석;임해창
    • Journal of KIISE: Software and Applications / v.30 no.3_4 / pp.358-371 / 2003
  • Automatic word spacing is the process of deciding the correct boundaries between words in a sentence that contains spacing errors. It is very important for increasing readability and for conveying the accurate meaning of a text to the reader. Previous statistical approaches to automatic word spacing do not consider the preceding spacing state and therefore inevitably estimate inaccurate probabilities. In this paper, we propose two statistical word spacing models that solve this problem. The proposed models are based on the observation that automatic word spacing can be regarded as a classification problem, much like POS tagging. By generalizing hidden Markov models, they can consider broader context and estimate more accurate probabilities. We experimented with the proposed models under a wide range of conditions to compare them with the current state of the art, and we also provide a detailed error analysis. The experimental results show that the proposed models achieve a syllable-unit accuracy of 98.33% and an Eojeol-unit precision of 93.06% under an evaluation method that accounts for compound nouns.
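
The framing of word spacing as a syllable-unit classification problem can be illustrated with a far simpler baseline than the generalized hidden Markov models proposed in the paper. The Python sketch below learns per-bigram spacing probabilities from correctly spaced sentences and inserts a space wherever that probability is high; the training sentences are illustrative, and the paper's actual models additionally condition on wider context and the previous spacing state.

```python
# A minimal sketch of word spacing framed as per-syllable classification,
# using simple bigram statistics; the paper's generalized-HMM models are
# more elaborate (wider context, previous spacing state).
from collections import defaultdict

def train(sentences):
    """Estimate P(space after syllable i | bigram i, i+1) from correctly spaced text."""
    counts = defaultdict(lambda: [0, 0])  # bigram -> [no-space count, space count]
    for sent in sentences:
        syllables, labels = [], []
        for word in sent.split():
            for i, ch in enumerate(word):
                syllables.append(ch)
                labels.append(1 if i == len(word) - 1 else 0)
        labels[-1] = 0  # no space after the sentence-final syllable
        for i in range(len(syllables) - 1):
            bigram = syllables[i] + syllables[i + 1]
            counts[bigram][labels[i]] += 1
    return counts

def space(text, counts):
    """Insert spaces into unspaced text wherever the estimated space probability exceeds 0.5."""
    chars = text.replace(" ", "")
    out = []
    for i, ch in enumerate(chars):
        out.append(ch)
        if i < len(chars) - 1:
            no_sp, sp = counts.get(ch + chars[i + 1], [1, 1])
            if sp / (no_sp + sp) > 0.5:
                out.append(" ")
    return "".join(out)

# Illustrative training corpus and test input.
corpus = ["나는 학교에 간다", "우리는 학교에 갔다"]
model = train(corpus)
print(space("나는학교에간다", model))  # -> "나는 학교에 간다"
```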

Rule-based Speech Recognition Error Correction for Mobile Environment (모바일 환경을 고려한 규칙기반 음성인식 오류교정)

  • Kim, Jin-Hyung;Park, So-Young
    • Journal of the Korea Society of Computer and Information / v.17 no.10 / pp.25-33 / 2012
  • In this paper, we propose a rule-based model for correcting errors in speech recognition results in the mobile device environment. The proposed model takes account of the limited resources of mobile devices, such as processing time and memory, as follows. To minimize error correction processing time, it omits processing steps such as morphological analysis and syllable composition and decomposition. It also uses a longest-match rule selection method to generate one error correction candidate at each point where an error is assumed to occur. To conserve memory, it uses neither an Eojeol dictionary nor a morphological analyzer, and it stores a single combined rule list without any classification. For ease of modification and maintenance, the error correction rules are extracted automatically from a training corpus. Experimental results show that the proposed model improves precision by 5.27% and recall by 5.60% at the Eojeol level on speech recognition results.
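
The longest-match rule selection described here can be sketched as a simple left-to-right scan over the recognized text. The Python example below illustrates only that selection strategy, not the paper's full system; the error-correction rules shown are hypothetical, whereas the paper extracts its rules automatically from a training corpus.

```python
# A minimal sketch of longest-match rule selection for error correction,
# assuming a flat rule list (error string -> correction). The rules below
# are hypothetical examples for illustration only.
rules = {
    "되요": "돼요",
    "않되": "안 돼",
    "갈께": "갈게",
}

def correct(text, rules, max_len=None):
    """Scan left to right; at each position apply the longest matching rule."""
    if max_len is None:
        max_len = max(len(k) for k in rules)
    out, i = [], 0
    while i < len(text):
        # Try the longest candidate substring first, then shorter ones.
        for length in range(min(max_len, len(text) - i), 0, -1):
            chunk = text[i:i + length]
            if chunk in rules:
                out.append(rules[chunk])
                i += length
                break
        else:
            # No rule matched at this position; keep the character as-is.
            out.append(text[i])
            i += 1
    return "".join(out)

print(correct("내일 갈께 연락해", rules))  # -> "내일 갈게 연락해"
```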

Lexico-semantic interactions during the visual and spoken recognition of homonymous Korean Eojeols (한국어 시·청각 동음동철이의 어절 재인에 나타나는 어휘-의미 상호작용)

  • Kim, Joonwoo;Kang, Kathleen Gwi-Young;Yoo, Doyoung;Jeon, Inseo;Kim, Hyun Kyung;Nam, Hyeomin;Shin, Jiyoung;Nam, Kichun
    • Phonetics and Speech Sciences / v.13 no.1 / pp.1-15 / 2021
  • The present study investigated the mental representation and processing of ambiguous words in the bimodal processing system by manipulating the lexical ambiguity of visually or auditorily presented words. Homonyms (e.g., '물었다') with two or more meanings and control words (e.g., '고통을') with a single meaning were used in the experiments. The lemma frequency of the words was manipulated while the relative frequency of each homonym's multiple meanings was balanced. In both experiments, which used the lexical decision task, a robust frequency effect and a critical interaction of word type by frequency were found. In Experiment 1, spoken homonyms yielded faster latencies than control words (i.e., an ambiguity advantage) in the low-frequency condition, while an ambiguity disadvantage was found in the high-frequency condition. A similar interactive pattern was found for visually presented homonyms in the subsequent Experiment 2. Taken together, the first key finding is that interdependent lexico-semantic processing can be found in both the visual and auditory processing systems, which in turn suggests that semantic processing is not modality-dependent but rather takes place on the basis of general lexical knowledge. The second is that multiple semantic candidates provide facilitative feedback only when the lemma frequency of the word is relatively low.

A Comparative study on the Effectiveness of Segmentation Strategies for Korean Word and Sentence Classification tasks (한국어 단어 및 문장 분류 태스크를 위한 분절 전략의 효과성 연구)

  • Kim, Jin-Sung;Kim, Gyeong-min;Son, Jun-young;Park, Jeongbae;Lim, Heui-seok
    • Journal of the Korea Convergence Society / v.12 no.12 / pp.39-47 / 2021
  • Constructing high-quality input features through effective segmentation is essential for improving a language model's sentence comprehension, and improving their quality directly affects downstream task performance. This paper comparatively studies segmentation strategies that effectively reflect the linguistic characteristics of Korean on word and sentence classification tasks. Four segmentation types are defined: eojeol, morpheme, syllable, and subchar, and pre-training is carried out using the RoBERTa model structure. By dividing the tasks into a sentence group and a word group, we analyze tendencies within each group and differences between the groups. In sentence classification, the model with subchar-level segmentation outperformed the other strategies by up to +0.62% on NSMC, +2.38% on KorNLI, and +2.41% on KorSTS, while in word classification the model with syllable-level segmentation performed best, by up to +0.7% on NER and +0.61% on SRL, confirming the effectiveness of these schemes.
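
Three of the four segmentation levels compared in this paper can be illustrated directly in Python: eojeol segmentation is whitespace splitting, syllable segmentation splits the text into Hangul syllable blocks, and subchar segmentation decomposes each block into jamo (here via Unicode NFD normalization). Morpheme segmentation requires an external morphological analyzer and is omitted. The example sentence is hypothetical, and this sketch shows only the tokenization itself, not the RoBERTa pre-training.

```python
# A minimal sketch of three of the four segmentation levels compared above;
# morpheme-level segmentation needs an external analyzer and is omitted here.
import unicodedata

sentence = "한국어 문장을 분절한다"  # illustrative example

# Eojeol: whitespace-delimited units.
eojeol = sentence.split()

# Syllable: individual Hangul syllable blocks (spaces dropped).
syllable = [ch for ch in sentence if not ch.isspace()]

# Subchar: decompose each syllable block into its jamo via Unicode NFD.
subchar = [j for ch in syllable for j in unicodedata.normalize("NFD", ch)]

print(eojeol)    # ['한국어', '문장을', '분절한다']
print(syllable)  # ['한', '국', '어', '문', '장', '을', '분', '절', '한', '다']
print(subchar)   # conjoining jamo sequence, e.g. 'ᄒ', 'ᅡ', 'ᆫ', ...
```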