• Title/Summary/Keyword: Sejong Corpus


Research on Meaning Constraints of '-Neulago' Using the Sejong Raw Corpus (세종 말뭉치를 이용한 '-느라고' 의미 제약 연구)

  • Ahn, Sung-Min
    • Annual Conference on Human and Language Technology
    • /
    • 2020.10a
    • /
    • pp.477-480
    • /
    • 2020
  • Researchers differ on whether '-느라고' imposes a negative-meaning constraint on its following clause, but most educational textbooks, reflecting the majority of studies, state that such a constraint exists. However, if research findings that there is no negative-meaning constraint on the following clause are to be excluded from textbooks, an in-depth discussion is needed and valid grounds must be presented. Noting these conflicting claims, this study set out to determine whether '-느라고' actually has a negative-meaning constraint on its following clause and, if it does not, why. To this end, 1,601 sentences containing '-느라고' were extracted from the Sejong written raw corpus; to exclude possible diachronic change, only sentences from the 2000s, when the textbooks were written, were selected, and the meaning of their following clauses was examined. As a result, 98 of the 323 sentences (33.3%) were found to have no negative meaning in the following clause, a proportion too large to simply claim that '-느라고' constrains its following clause to a negative meaning. To determine the scope of the negative constraint, the sentences were examined and classified into '-느라고' expressing purpose and '-느라고' expressing reason. Re-analysis of the following clauses of each class showed that the negative constraint was realized for the reason '-느라고' but not for the purpose '-느라고'. The study thus arrives at a more detailed and specific result: '-느라고' carries both reason and purpose meanings, and the negative-meaning constraint holds only when it is realized as the reason '-느라고'.
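
The counting step described above is simple enough to sketch. Below is a minimal illustration, not the author's actual pipeline, of filtering raw-corpus sentences that contain '-느라고', restricting them to the 2000s, and computing the share whose following clause was judged non-negative; the `Example` record and the toy data are hypothetical.

```python
# Minimal sketch of the corpus-filtering and counting step (hypothetical data format).
from dataclasses import dataclass

@dataclass
class Example:
    year: int          # publication year of the source text
    sentence: str      # raw sentence from the corpus
    negative: bool     # manual judgement: is the following clause negative in meaning?

def neulago_stats(examples):
    """Return (n_selected, n_non_negative, ratio) for 2000s sentences containing '느라고'."""
    selected = [e for e in examples
                if '느라고' in e.sentence and 2000 <= e.year <= 2009]
    non_negative = sum(1 for e in selected if not e.negative)
    ratio = non_negative / len(selected) if selected else 0.0
    return len(selected), non_negative, ratio

# Toy usage with made-up examples (the real study used 323 sentences from the 2000s).
data = [
    Example(2003, "숙제를 하느라고 잠을 못 잤다.", True),
    Example(2005, "아이를 돌보느라고 바쁘게 지냈다.", False),
    Example(1998, "일하느라고 약속을 잊었다.", True),
]
print(neulago_stats(data))   # -> (2, 1, 0.5)
```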


An Analysis of Korean Dependency Relation by Homograph Disambiguation (동형이의어 분별에 의한 한국어 의존관계 분석)

  • Kim, Hong-Soon;Ock, Cheol-Young
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.3 no.6
    • /
    • pp.219-230
    • /
    • 2014
  • An analysis of dependency relations is the task of determining the governor and the dependent between words in a sentence. The dependency relation of a predicate is established by the patterns and selectional restrictions of the predicate's subcategorization. This paper proposes a method for analyzing Korean dependency relations using homograph predicates disambiguated in the morphological analysis phase. Each disambiguated homograph predicate has its own pattern. In particular, by reusing the stage transition training dictionary employed during POS and homograph tagging, we propose a method of fixing the dependency relation of {noun+postposition, predicate}, and we analyze the accuracy and the effect of homographs on dependency relation analysis. We used the Sejong Phrase Structured Corpus for the experiment, transforming the phrase-structured corpus into dependency structures and tagging homographs. In the experiment, the accuracy of dependency relation analysis with homograph disambiguation is 80.38%, an increase of 0.42% over the accuracy without disambiguation. The Z-value in a statistical hypothesis test at the 1% significance level is $|Z| = 4.63 \geq z_{0.01} = 2.33$, so we conclude that homographs affect dependency relation analysis, and that the stage transition training dictionary used in POS and homograph tagging affects the accuracy of dependency relation analysis by 7.14%.
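
The significance test reported above ($|Z| = 4.63 \geq z_{0.01} = 2.33$) reads like a two-proportion z-test over the two accuracy figures. The sketch below shows how such a statistic is computed; the sample size `n` is hypothetical and only the two accuracies come from the abstract, so the printed value will not reproduce 4.63 exactly.

```python
# Minimal sketch of a two-proportion z-test (assumed test; n is hypothetical).
import math

def two_proportion_z(p1, n1, p2, n2):
    """z statistic for H0: p1 == p2, using the pooled-proportion estimator."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

n = 400_000                                    # hypothetical number of evaluated arcs
z = two_proportion_z(0.8038, n, 0.7996, n)     # accuracies from the abstract
print(f"z = {z:.2f}, reject H0 at the 1% level: {abs(z) >= 2.33}")
```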

Korean Dependency Parsing Using Stack-Pointer Networks and Subtree Information (스택-포인터 네트워크와 부분 트리 정보를 이용한 한국어 의존 구문 분석)

  • Choi, Yong-Seok;Lee, Kong Joo
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.10 no.6
    • /
    • pp.235-242
    • /
    • 2021
  • In this work, we develop a Korean dependency parser based on a stack-pointer network that consists of a pointer network and an internal stack. The parser has an encoder and a decoder and builds a dependency tree for an input sentence in a depth-first manner. The encoder encodes the input sentence, and the decoder selects a child for the word at the top of the stack at each step. Since the parser has an internal stack where the search path is stored, it can utilize information from previously derived subtrees when selecting a child node. Previous studies used only the grandparent and the most recently visited sibling without considering the subtree structure. In this paper, we introduce graph attention networks that can represent a previously derived subtree, and we modify our stack-pointer-network parser to utilize the subtree information they produce. After training the dependency parser on the Sejong and Everyone's corpora, we evaluate its performance. Experimental results show that the proposed parser achieves better performance than previous approaches in sentence-level accuracy when adopting 2-depth graph attention networks.
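
A rough picture of the depth-first, stack-based decoding loop described above can be given without any neural components. In the sketch below, `score_child` stands in for the pointer-network scorer (and for any subtree features from graph attention); both the function and the toy scorer are placeholders, not the authors' model.

```python
# Structural sketch of depth-first, stack-based decoding (no neural components).
# A negative score from score_child() is read as "this head has no more children".

def parse(words, score_child):
    """Return head[i] for every word, built depth-first with an explicit stack."""
    n = len(words)
    heads = [None] * n
    attached = set()
    stack = [-1]                               # -1 plays the role of the artificial ROOT
    while stack:
        head = stack[-1]
        candidates = [c for c in range(n) if c not in attached]
        if not candidates:
            stack.pop()                        # nothing left to attach anywhere
            continue
        best_score, child = max((score_child(head, c, stack), c) for c in candidates)
        if best_score < 0:                     # scorer says: no more children here
            stack.pop()
            continue
        heads[child] = head
        attached.add(child)
        stack.append(child)                    # descend into the new child (depth-first)
    return heads

# Hypothetical toy scorer: attach each word to its immediate left neighbour.
toy = lambda head, child, stack: 0 if child == head + 1 else -1
print(parse(["나는", "밥을", "먹었다"], toy))   # -> [-1, 0, 1] (-1 marks ROOT)
```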

Analysis of Korean Language Parsing System and Speed Improvement of Machine Learning using Feature Module (한국어 의존 관계 분석과 자질 집합 분할을 이용한 기계학습의 성능 개선)

  • Kim, Seong-Jin;Ock, Cheol-Young
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.51 no.8
    • /
    • pp.66-74
    • /
    • 2014
  • Recently, a variety of studies on Korean parsing systems have been carried out by software engineers and linguists. Such systems mainly follow either the machine learning or the symbolic processing paradigm. However, parsing systems based on machine learning have long training times because Korean sentence data are very large, and they show limited recognition rates because the data contain errors. In this paper, we design a system using feature modules that reduces training time, and we analyze the recognition rate for each number of training sentences and number of training iterations. The designed system uses separated feature modules and sorted tables for binary search. We use 36,090 refined sentences extracted from the Sejong Corpus. Training time decreases by about three hours, and the recognition rate is highest at 84.54% when 10,000 sentences are trained 50 times. When all training sentences (32,481) are trained 10 times, the recognition rate is 82.99%. As a result, it is more efficient for the system to use the refined data and to repeat training until it reaches a steady state.
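
The combination of split feature modules and sorted tables with binary search described above can be sketched as follows; the module names, feature strings, and weights are invented for illustration.

```python
# Minimal sketch: one sorted table per feature module, looked up by binary search.
from bisect import bisect_left

class FeatureModule:
    def __init__(self, pairs):
        pairs = sorted(pairs)                        # sort once, search many times
        self.keys = [k for k, _ in pairs]
        self.weights = [w for _, w in pairs]

    def weight(self, key, default=0.0):
        i = bisect_left(self.keys, key)
        if i < len(self.keys) and self.keys[i] == key:
            return self.weights[i]
        return default

# Hypothetical split: one module per feature type (POS pair, distance, ...).
modules = {
    "pos_pair": FeatureModule([("NNG+JKO/VV", 1.25), ("NNG+JKS/VV", 0.75)]),
    "distance": FeatureModule([("1", 0.5), ("2", 0.5), ("5+", -0.5)]),
}

def score(features):
    """Sum the weights of active features, looked up module by module."""
    return sum(modules[m].weight(f) for m, f in features)

print(score([("pos_pair", "NNG+JKO/VV"), ("distance", "2")]))   # -> 1.75
```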

Ontology Construction and Its Application to Disambiguate Word Senses (온톨로지 구축 및 단어 의미 중의성 해소에의 활용)

  • Kang, Sin-Jae
    • The KIPS Transactions:PartB
    • /
    • v.11B no.4
    • /
    • pp.491-500
    • /
    • 2004
  • This paper presents an ontology construction method using various computational language resources, and an ontology-based word sense disambiguation method. In order to acquire a reasonably practical ontology, the Kadokawa thesaurus is extended by inserting additional semantic relations into its hierarchy, classified into case relations and other semantic relations. To apply the ontology to word sense disambiguation, we first use previously secured dictionary information to select the correct senses of some ambiguous words with high precision, and then use the ontology to disambiguate the remaining ambiguous words. The mutual information between concepts in the ontology is calculated before the ontology is used as knowledge for disambiguating word senses. If mutual information is regarded as a weight between ontology concepts, the ontology can be treated as a graph with weighted edges, and we can locate the weighted path from one concept to another. In our practical machine translation system, our word sense disambiguation method achieved a 9% improvement over methods that do not use an ontology for Korean translation.
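
One way to picture the mutual-information weighting described above is the direct-edge simplification below: pointwise mutual information between concept pairs, estimated from co-occurrence counts, is used to pick the sense most strongly connected to a context concept. The concept names and counts are made up, and the full weighted-path search over the ontology graph is omitted.

```python
# Minimal sketch: PMI between concepts as an edge weight for sense selection.
import math

def pmi(c_xy, c_x, c_y, total):
    """Pointwise mutual information between two concepts from co-occurrence counts."""
    if c_xy == 0:
        return float("-inf")
    p_xy, p_x, p_y = c_xy / total, c_x / total, c_y / total
    return math.log2(p_xy / (p_x * p_y))

# Hypothetical counts over a tagged corpus.
joint = {("bank/finance", "money"): 120, ("bank/river", "money"): 2}
single = {"bank/finance": 400, "bank/river": 150, "money": 900}
TOTAL = 100_000

def disambiguate(senses, context_concept):
    """Pick the sense concept with the highest-PMI edge to the context concept."""
    return max(senses,
               key=lambda s: pmi(joint.get((s, context_concept), 0),
                                 single[s], single[context_concept], TOTAL))

print(disambiguate(["bank/finance", "bank/river"], "money"))   # -> bank/finance
```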

Sequence-to-sequence based Morphological Analysis and Part-Of-Speech Tagging for Korean Language with Convolutional Features (Sequence-to-sequence 기반 한국어 형태소 분석 및 품사 태깅)

  • Li, Jianri;Lee, EuiHyeon;Lee, Jong-Hyeok
    • Journal of KIISE
    • /
    • v.44 no.1
    • /
    • pp.57-62
    • /
    • 2017
  • Traditional Korean morphological analysis and POS tagging methods usually consist of two steps: (1) generate hypotheses for all possible combinations of morphemes for a given input, and (2) perform POS tagging to search for the optimal result. These methods require additional resource dictionaries, and errors in the first step can propagate to the second. In this paper, we try to solve this problem in an end-to-end fashion using a sequence-to-sequence model with convolutional features. Experimental results on the Sejong corpus show that in one configuration our approach achieved a 97.15% F1-score at the morpheme level and 95.33% and 60.62% precision at the word and sentence levels, respectively; in a second configuration it achieved a 96.91% F1-score at the morpheme level and 95.40% and 60.62% precision at the word and sentence levels, respectively.
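
As a rough illustration of an encoder that mixes convolutional and recurrent features in the spirit of the approach above, here is a minimal PyTorch sketch; the architecture, dimensions, and layer choices are assumptions, not the authors' model.

```python
# Minimal PyTorch sketch of a character encoder with convolutional features (assumed design).
import torch
import torch.nn as nn

class ConvSeqEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, conv_channels=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # width-3 convolution over the character sequence adds local n-gram features
        self.conv = nn.Conv1d(emb_dim, conv_channels, kernel_size=3, padding=1)
        self.rnn = nn.GRU(emb_dim + conv_channels, hidden,
                          batch_first=True, bidirectional=True)

    def forward(self, char_ids):                      # (batch, seq_len)
        emb = self.embed(char_ids)                    # (batch, seq_len, emb_dim)
        conv = torch.relu(self.conv(emb.transpose(1, 2))).transpose(1, 2)
        feats = torch.cat([emb, conv], dim=-1)        # concatenate both feature views
        outputs, _ = self.rnn(feats)                  # (batch, seq_len, 2*hidden)
        return outputs                                # would feed an attention decoder

enc = ConvSeqEncoder(vocab_size=2000)
print(enc(torch.randint(0, 2000, (2, 10))).shape)     # torch.Size([2, 10, 256])
```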

Syllable-based Korean POS Tagging Based on Combining a Pre-analyzed Dictionary with Machine Learning (기분석사전과 기계학습 방법을 결합한 음절 단위 한국어 품사 태깅)

  • Lee, Chung-Hee;Lim, Joon-Ho;Lim, Soojong;Kim, Hyun-Ki
    • Journal of KIISE
    • /
    • v.43 no.3
    • /
    • pp.362-369
    • /
    • 2016
  • This study is directed toward the design of a hybrid algorithm for syllable-based Korean POS tagging. Previous syllable-based work on Korean POS tagging has relied on sequence labeling and mostly used a machine learning method alone. We present a new algorithm integrating a machine learning method with a pre-analyzed dictionary. We used the Sejong tagged corpus for training and evaluation. While the machine learning engine achieved an eojeol precision of 0.964, the proposed hybrid engine achieved an eojeol precision of 0.990. In a Quiz-domain test, the machine learning engine and the proposed hybrid engine obtained 0.961 and 0.972, respectively. These results indicate that our method is effective for Korean POS tagging.
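
The hybrid scheme described above (consult a pre-analyzed dictionary first, fall back to the learned tagger otherwise) can be sketched in a few lines; the dictionary entries, output format, and stand-in `ml_tag` function are hypothetical.

```python
# Minimal sketch: pre-analyzed dictionary with a machine-learned fallback (assumed format).
PRE_ANALYZED = {                     # eojeol -> stored analysis
    "학교에": "학교/NNG+에/JKB",
    "갔다": "가/VV+았/EP+다/EF",
}

def ml_tag(eojeol):
    """Placeholder for the syllable-based sequence-labeling model."""
    return f"{eojeol}/UNK"

def hybrid_tag(sentence):
    return [PRE_ANALYZED.get(e, ml_tag(e)) for e in sentence.split()]

print(hybrid_tag("학교에 갔다"))
# -> ['학교/NNG+에/JKB', '가/VV+았/EP+다/EF']
```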

Korean Part-of-Speech Tagging using Disambiguation Rules for Ambiguous Word and Statistical Information (어휘별 중의성 제거 규칙과 통계 정보를 이용한 한국어 품사 태깅)

  • Ahn, Kwang-Mo;Han, Kyou-Youl;Seo, Young-Hoon
    • The Journal of the Korea Contents Association
    • /
    • v.9 no.2
    • /
    • pp.18-26
    • /
    • 2009
  • A hybrid part-of-speech tagging approach can be robust, easily extendable, and accurate because it combines the advantages of statistical and rule-based approaches. However, conventional hybrid part-of-speech tagging systems can hardly resolve some morphological ambiguities that cannot be resolved by statistical information, because the coverage of their rules is narrow. We therefore define disambiguation rules for individual ambiguous words based on the syntax and semantics of the surrounding words. We select the words that account for the top 50% of ambiguity occurrences in the Sejong corpus and build 1,814 rules for them. The accuracy of our hybrid part-of-speech tagging system using these rules is 98.28%.
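
The per-word rule idea above can be sketched as a rule check followed by a statistical fallback; the example word, rules, and analyses below are invented and do not come from the paper's 1,814 rules.

```python
# Minimal sketch: lexicalized disambiguation rules first, statistical fallback second.
RULES = {   # ambiguous word -> list of (condition on context, analysis)
    "감기": [(lambda prev, nxt: nxt.startswith("걸"), "감기/NNG"),      # 'a cold'
             (lambda prev, nxt: prev.endswith("실을"), "감/VV+기/ETN")],  # 'winding'
}
STATS = {"감기": "감기/NNG"}          # most frequent analysis in the corpus

def tag(prev, word, nxt):
    for cond, analysis in RULES.get(word, []):
        if cond(prev, nxt):           # a hand-written rule fires: its analysis wins
            return analysis
    return STATS.get(word, f"{word}/UNK")

print(tag("어제", "감기", "걸렸다"))       # first rule fires  -> 감기/NNG
print(tag("실을", "감기", "시작했다"))     # second rule fires -> 감/VV+기/ETN
```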

Korean Compound Noun Decomposition and Semantic Tagging System using User-Word Intelligent Network (U-WIN을 이용한 한국어 복합명사 분해 및 의미태깅 시스템)

  • Lee, Yong-Hoon;Ock, Cheol-Young;Lee, Eung-Bong
    • The KIPS Transactions:PartB
    • /
    • v.19B no.1
    • /
    • pp.63-76
    • /
    • 2012
  • We propose a Korean compound noun semantic tagging system that uses statistical compound noun decomposition and semantic relation information extracted from a lexical semantic network (U-WIN) and dictionary definitions. The system consists of three phases: compound noun decomposition, semantic constraint, and semantic tagging. In compound noun decomposition, the best candidates are selected using noun location frequencies extracted from the Sejong corpus; nouns are then re-decomposed for the semantic constraint phase and foreign nouns are restored. The semantic constraint phase finds possible semantic combinations using origin information from the dictionary and a Naive Bayes classifier, in order to decrease the computation time and increase the accuracy of semantic tagging. The semantic tagging phase calculates the semantic similarity between the decomposed nouns and decides the semantic tags. We constructed an experimental data set of 40,717 compound nouns from the Standard Korean Language Dictionary, each consisting of more than three characters and semantically tagged. In the experiments, the accuracy of compound noun decomposition is 99.26% and the accuracy of semantic tagging is 95.38%.
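
The decomposition step above, scoring candidate splits with position-dependent noun frequencies, can be sketched as follows; the frequency table, position scheme, and two-way split restriction are simplifying assumptions.

```python
# Minimal sketch: pick the best 2-way split of a compound noun by position-dependent frequency.
FREQ = {          # (noun, position) -> corpus frequency; positions: 'first', 'last'
    ("정보", "first"): 900, ("검색", "last"): 700,
    ("정", "first"): 5, ("보검", "last"): 1, ("색", "last"): 20,
}

def score(parts):
    pos = ["first"] + ["last"] * (len(parts) - 1)      # crude position scheme
    return sum(FREQ.get((p, q), 0) for p, q in zip(parts, pos))

def decompose(word, min_len=1):
    """Try every 2-way split and return the highest-scoring one."""
    candidates = [(word[:i], word[i:]) for i in range(min_len, len(word) - min_len + 1)]
    return max(candidates, key=score)

print(decompose("정보검색"))    # -> ('정보', '검색')
```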

The Unsupervised Learning-based Language Modeling of Word Comprehension in Korean

  • Kim, Euhee
    • Journal of the Korea Society of Computer and Information
    • /
    • v.24 no.11
    • /
    • pp.41-49
    • /
    • 2019
  • We build an unsupervised machine learning-based language model that can estimate the amount of information needed to process words consisting of subword-level morphemes and syllables. We then investigate whether the reading times of words, which reflect their morphemic and syllabic structures, are predicted by an information-theoretic measure such as surprisal. Specifically, the proposed Morfessor-based unsupervised machine learning model is first trained on a large dataset of sentences from the Sejong Corpus and is then applied to estimate the information-theoretic measure for each word in the test data of Korean words. The reading times of the words in the test data are taken from the Korean Lexicon Project (KLP) database. A comparison between the information-theoretic measures of the words and the corresponding reading times, using a linear mixed-effects model, reveals a reliable correlation between surprisal and reading time. We conclude that surprisal is positively related to processing effort (i.e., reading time), confirming the surprisal hypothesis.
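
The information-theoretic measure discussed above is straightforward to sketch once a segmentation and segment probabilities are available; in the toy example below both are made up, standing in for the trained Morfessor model.

```python
# Minimal sketch: word surprisal in bits from toy subword segmentations and probabilities.
import math

SEG = {"먹었다": ["먹", "었", "다"], "학교": ["학교"]}          # assumed segmentations
PROB = {"먹": 0.01, "었": 0.05, "다": 0.08, "학교": 0.002}      # assumed segment probabilities

def surprisal(word):
    """Surprisal in bits: -log2 P(word), summed over segments under independence."""
    return sum(-math.log2(PROB[m]) for m in SEG[word])

for w in ("학교", "먹었다"):
    print(w, round(surprisal(w), 2))
# Higher surprisal is predicted to correspond to longer reading times.
```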