• Title/Summary/Keyword: 한국어 어절 (Korean eojeol, space-delimited word units)


An Analysis of Korean Dependency Relation by Homograph Disambiguation (동형이의어 분별에 의한 한국어 의존관계 분석)

  • Kim, Hong-Soon;Ock, Cheol-Young
    • KIPS Transactions on Software and Data Engineering / v.3 no.6 / pp.219-230 / 2014
  • Dependency relation analysis is the task of determining the governor and the dependent between the words in a sentence. The dependency relations of a predicate are established by its patterns and the selectional restrictions of its subcategorization. This paper proposes a method for analyzing Korean dependency relations using homograph predicates disambiguated in the morphological analysis phase. Each disambiguated homograph predicate has its own pattern. In particular, by reusing the stage transition training dictionary used in POS and homograph tagging, we propose a method for fixing the dependency relation of {noun+postposition, predicate} pairs, and we analyze the accuracy and the effect of homographs on dependency relation analysis. We used the Sejong Phrase Structured Corpus for the experiment, transforming the phrase-structured corpus into dependency structures and tagging homographs. In the experiment, the accuracy of dependency relation analysis with homograph disambiguation is 80.38%, an increase of 0.42% over that without disambiguation. The Z-value in a statistical hypothesis test at the 1% significance level is $|Z| = 4.63 \geq z_{0.01} = 2.33$. We can therefore conclude that homographs affect dependency relation analysis, and that the stage transition training dictionary used in POS and homograph tagging contributes 7.14% to the accuracy of dependency relation analysis.
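The pattern-based governor selection described above can be illustrated with a minimal sketch. Everything here (the sense labels, the pattern dictionary, and the attachment rule) is invented for illustration and is not the authors' implementation:

```python
# Each disambiguated homograph sense carries its own subcategorization
# pattern: the postpositions (case markers) it can govern. A
# noun+postposition word attaches to the first following predicate whose
# pattern accepts its postposition. All entries are invented examples.

# pattern dictionary: predicate sense -> postpositions it subcategorizes for
PATTERNS = {
    "먹다_01": {"이", "을"},   # 'eat': nominative/accusative
    "먹다_02": {"이"},         # 'be deaf': nominative only
}

def find_governor(noun_index, words, senses):
    """Return the index of the first following predicate sense whose
    subcategorization pattern accepts the noun's postposition."""
    _, postposition = words[noun_index]
    for j in range(noun_index + 1, len(words)):
        sense = senses.get(j)
        if sense and postposition in PATTERNS.get(sense, set()):
            return j
    return None

# sentence: 밥-을 먹다 ('eat rice'); sense _01 licenses the accusative 을
words = [("밥", "을"), ("먹다", None)]
senses = {1: "먹다_01"}
print(find_governor(0, words, senses))  # -> 1
```

With the other homograph sense (먹다_02, which does not license 을), no governor is found, which is the mechanism by which disambiguation changes the resulting dependency structure.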

The Method of the Evaluation of Verbal Lexical-Semantic Network Using the Automatic Word Clustering System (단어클러스터링 시스템을 이용한 어휘의미망의 활용평가 방안)

  • Kim, Hae-Gyung;Song, Mi-Young
    • Korean Journal of Oriental Medicine / v.12 no.3 s.18 / pp.1-15 / 2006
  • In recent years, there has been much interest in lexical semantic networks. However, it is very difficult to evaluate their effectiveness and correctness and to devise methods for applying them to various problem domains. To offer fundamental ideas on how to evaluate and utilize lexical semantic networks, we developed two automatic word clustering systems, called system A and system B. 68,455,856 words were used to train both systems. We compared the clustering results of system A with those of system B, which is extended by the lexical-semantic network. System B is extended by reconstructing the feature vectors using the elements of the lexical-semantic network of 3,656 '-ha' verbs. The target data is the multilingual 'WordNet-CoreNet'. When we compared the accuracy of systems A and B, we found that system B showed an accuracy of 46.6%, better than system A's 45.3%.
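The core idea of system B, extending word feature vectors with lexical-semantic classes before clustering, can be illustrated with a toy nearest-centroid sketch. All words, vectors, classes, and centroids below are invented for illustration; the paper's systems were trained on 68 million words, not four:

```python
from math import dist

# context-count vectors for four hypothetical '-ha' verbs (system A features)
base = {
    "공부하다": [3.0, 0.0],
    "연구하다": [2.0, 1.0],
    "운동하다": [0.0, 3.0],
    "달리다":   [1.0, 2.0],
}

# lexical-semantic network classes (system B's extra knowledge): invented
sem_class = {"공부하다": 0, "연구하다": 0, "운동하다": 1, "달리다": 1}

def extend(vec, cls, weight=5.0):
    """Append a one-hot semantic-class feature, mimicking how system B
    reconstructs its feature vectors from the lexical-semantic network."""
    onehot = [0.0, 0.0]
    onehot[cls] = weight
    return vec + onehot

def assign(vectors, centroids):
    """Nearest-centroid assignment (one k-means step)."""
    return {w: min(range(len(centroids)), key=lambda i: dist(v, centroids[i]))
            for w, v in vectors.items()}

vectors_b = {w: extend(v, sem_class[w]) for w, v in base.items()}
centroids_b = [extend([2.5, 0.5], 0), extend([0.5, 2.5], 1)]
print(assign(vectors_b, centroids_b))
# -> {'공부하다': 0, '연구하다': 0, '운동하다': 1, '달리다': 1}
```

The appended semantic feature dominates the distance, pulling semantically related verbs into the same cluster even when their raw context vectors are ambiguous.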


Robust Part-of-Speech Tagger using Statistical and Rule-based Approach (통계와 규칙을 이용한 강인한 품사 태거)

  • Shim, Jun-Hyuk;Kim, Jun-Seok;Cha, Jong-Won;Lee, Geun-Bae
    • Annual Conference on Human and Language Technology / 1999.10d / pp.60-75 / 1999
  • Part-of-speech tagging is the most fundamental part of natural language processing: it serves as preprocessing for higher-level tasks such as syntactic and semantic analysis, and as a standalone application it is used for extracting linguistic information and for information retrieval. POS tagging research is broadly divided into statistics-based methods, rule-based methods, and hybrid methods combining the two. POSTAG, the POS tagging system of the natural language processing engine (SKOPE) of the POSTECH natural language processing laboratory, is a hybrid POS tagging system with enhanced unknown-word estimation. The system consists of a morphological analyzer, a statistical POS tagger, and a rule-based error-correction postprocessor. These components are not simply connected in series: they interact closely, generating and processing a morpheme connectivity graph during analysis based on a morpheme connectivity table. In addition, by handling unknown words in the same way as registered words through a pattern dictionary for unknown words, the system performs efficient and robust POS tagging. Furthermore, by mapping bidirectionally between the tag set used in POSTAG and the standard tag set of the Electronics and Telecommunications Research Institute (ETRI), large amounts of training data for POSTAG can be obtained from corpora tagged with the standard tag set, and POSTAG can output tagging results in either tag set. On the 30,000 eojeol provided at MATEC '99, the system achieved 95% morpheme-level accuracy when outputting in the standard tag set, and 97% accuracy for POSTAG's own tagging results excluding tag set mapping.
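The hybrid pipeline this abstract describes, a statistical tagger followed by rule-based error correction, can be sketched roughly as follows. The lexicon, the tags, and the single correction rule are invented placeholders, not POSTAG's actual resources:

```python
# A minimal sketch of a hybrid tagger: a statistical stage picks the most
# frequent tag per word, then a rule-based postprocessor corrects
# systematic errors in context. Frequencies and the rule are invented.

FREQ = {
    "학생": {"NNG": 12},            # 'student': common noun
    "이":   {"JKS": 15, "VCP": 5},  # subject marker vs. copula
    "다":   {"EF": 10},             # sentence-final ending
}

# error-correction rule: (word, tag of the next word, corrected tag)
RULES = [
    # '이' tagged as a subject marker directly before a final ending
    # is actually the copula -- purely illustrative
    ("이", "EF", "VCP"),
]

def tag_statistical(words):
    """Assign each word its most frequent tag."""
    return [max(FREQ[w], key=FREQ[w].get) for w in words]

def postprocess(words, tags):
    """Apply contextual error-correction rules to the statistical output."""
    for i in range(len(words) - 1):
        for word, next_tag, fix in RULES:
            if words[i] == word and tags[i + 1] == next_tag:
                tags[i] = fix
    return tags

words = ["학생", "이", "다"]          # '(someone) is a student'
print(postprocess(words, tag_statistical(words)))  # -> ['NNG', 'VCP', 'EF']
```

The statistical stage alone would mistag "이" as a subject marker (its more frequent reading); the rule stage repairs exactly this kind of systematic error, which is the division of labor the hybrid design exploits.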


The automatic Lexical Knowledge acquisition using morpheme information and Clustering techniques (어절 내 형태소 출현 정보와 클러스터링 기법을 이용한 어휘지식 자동 획득)

  • Yu, Won-Hee;Suh, Tae-Won;Lim, Heui-Seok
    • The Journal of Korean Association of Computer Education / v.13 no.1 / pp.65-73 / 2010
  • This study presents a lexical knowledge acquisition model based on unsupervised learning, in order to overcome the limitations of building lexical knowledge by hand with supervised methods in natural language processing research. The proposed model obtains lexical knowledge automatically from input lexical entries through vectorization, clustering, and lexical knowledge acquisition. During acquisition, parts of the lexical knowledge dictionary change in the number and characteristics of lexical knowledge entries as the model's parameters change. The experimental results show that automatic construction of a machine-readable dictionary is feasible, since the number of lexical class information clusters was observed to remain constant; moreover, building a lexical dictionary that includes left- and right-morphosyntactic information reflects the characteristics of Korean.


Named Entity Recognition and Dictionary Construction for Korean Title: Books, Movies, Music and TV Programs (한국어 제목 개체명 인식 및 사전 구축: 도서, 영화, 음악, TV프로그램)

  • Park, Yongmin;Lee, Jae Sung
    • KIPS Transactions on Software and Data Engineering / v.3 no.7 / pp.285-292 / 2014
  • Named entity recognition is used to improve the performance of information retrieval, question answering, and machine translation systems, among others. The usual targets of named entity recognition are PLOs (persons, locations, and organizations), which are typically proper nouns or unregistered words, and traditional named entity recognizers exploit these characteristics to find candidates. The titles of books, movies, and TV programs have different characteristics from PLO entities: they can be multi-word phrases, whole sentences, or contain special characters, which makes candidate identification difficult. In this paper we propose a method to quickly extract title named entities from news articles and automatically build a named entity dictionary for the titles. For candidate identification, word phrases enclosed in special symbols in a sentence are first extracted and then verified by an SVM using feature words and their distances. For classification of the extracted title candidates, an SVM is used with the mutual information of word contexts.
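The candidate-identification step, extracting phrases enclosed in special symbols and then classifying them, might look roughly like this. The regular expression, the feature words, and the toy classifier standing in for the paper's SVM are all assumptions for illustration:

```python
import re

# Candidate identification: pull out phrases enclosed in special symbols
# such as <...>, '...', or 「...」, then classify each candidate. The toy
# nearest-feature-word classifier stands in for the paper's SVM.

ENCLOSED = re.compile(r"<([^<>]+)>|'([^']+)'|\u300c([^\u300d]+)\u300d")

def extract_title_candidates(sentence):
    """Return every phrase enclosed in special symbols."""
    return [next(g for g in m.groups() if g) for m in ENCLOSED.finditer(sentence)]

# feature words hinting at the title class (invented stand-in for SVM features)
FEATURE_WORDS = {"영화": "MOVIE", "도서": "BOOK", "음악": "MUSIC"}

def classify(sentence, candidate):
    """Pick the class of the nearest preceding feature word, if any."""
    prefix = sentence[:sentence.find(candidate)]
    best = None
    for word, label in FEATURE_WORDS.items():
        pos = prefix.rfind(word)
        if pos >= 0 and (best is None or pos > best[0]):
            best = (pos, label)
    return best[1] if best else None

sent = "영화 <기생충>과 도서 '토지'가 화제다"
print(extract_title_candidates(sent))                    # -> ['기생충', '토지']
print(classify(sent, "기생충"), classify(sent, "토지"))  # -> MOVIE BOOK
```

In the paper the verification and classification decisions are made by SVMs over feature words and their distances; the proximity heuristic here only illustrates why nearby context words are informative features.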

A Prosodic Study of Korean Using a Large Database (대용량 데이터베이스를 이용한 한국어 운율 특성에 관한 연구)

  • Kim Jong-Jin;Lee Sook-Hyang
    • The Journal of the Acoustical Society of Korea / v.24 no.2 / pp.117-126 / 2005
  • This study investigates the prosodic characteristics of Korean through the analysis of a large database. One female and one male speaker each read 650 sentences, which were segmentally and prosodically labeled. Statistical analyses were performed on these utterances regarding the tonal pattern and size of prosodic units, the correlation between the size of higher-level prosodic units and the number of lower-level prosodic units, and the slope and F0 of the falling and rising contours of accentual phrases. The results showed that the duration and the number of words and syllables of a prosodic unit differed significantly not only between speakers but also between its positions within a higher-level prosodic unit. The number of prosodic units was highly correlated with the duration and the number of syllables of their higher-level units. The slope of the falling contour within an accentual phrase was inversely proportional to the number of its syllables. The slope differed depending on the first tone type of an accentual phrase, which can be explained by the F0 rise, and the different amounts of rise between tones, when an accentual phrase starts with an H tone. The slope of the falling contour across an accentual phrase boundary showed a constant and larger value than the one within an accentual phrase. The rising contours at the beginning and end of an accentual phrase were similar in slope but differed in the amount of F0 change: the former showed a larger change. The slope of the rising contour that forms an accentual phrase on its own was inversely proportional to the number of its syllables.

The effects of Korean logical ending connective affix on text comprehension and recall (연결어미가 글 이해와 기억에 미치는 효과)

  • Nam, Ki-Chun;Kim, Hyun-Jeong;Park, Chang-Su;Whang, Yu-Mi;Kim, Young-Tae;Sim, Hyun-Sup
    • Annual Conference on Human and Language Technology / 2004.10d / pp.251-258 / 2004
  • This study was conducted to investigate the effects of logical connective endings on text comprehension and recall, and to examine how these effects relate to reading ability. Connective endings expressing causal and additive relations were used. We predicted that if connective endings help establish local coherence between two successive sentences, their presence would speed up sentence comprehension and aid recall of the text. If reading ability is also influenced by the ability to use connective endings appropriately, an interaction between the presence of connective endings and reading ability was expected. Experiment 1 examined the effect of causal connective endings on sentence reading time and sentence recall. The results showed that causal connective endings facilitated reading of the following sentence, and this facilitation was consistent regardless of reading ability. The presence of connective endings also aided sentence recall, again consistently across high and low reading ability. Experiment 2 examined the effects of additive connective endings on reading time and recall, and found a pattern of effects similar to that of the causal connective endings. The results of Experiments 1 and 2 suggest that causal and additive connective endings positively influence the formation of coherence between successive sentences, and that this effect on reading is constant regardless of reading ability.


A Comparative study on the Effectiveness of Segmentation Strategies for Korean Word and Sentence Classification tasks (한국어 단어 및 문장 분류 태스크를 위한 분절 전략의 효과성 연구)

  • Kim, Jin-Sung;Kim, Gyeong-min;Son, Jun-young;Park, Jeongbae;Lim, Heui-seok
    • Journal of the Korea Convergence Society / v.12 no.12 / pp.39-47 / 2021
  • Constructing high-quality input features through effective segmentation is essential for increasing the sentence comprehension of a language model, and improving their quality directly affects downstream task performance. This paper comparatively studies segmentation strategies that effectively reflect the linguistic characteristics of Korean with respect to word and sentence classification. Four segmentation types are defined: eojeol, morpheme, syllable, and subchar, and pre-training is carried out using the RoBERTa model structure. By dividing the tasks into a sentence group and a word group, we analyze the tendencies within each group and the differences between the groups. In sentence classification, the model with subchar-level segmentation outperformed the other strategies by up to +0.62% on NSMC, +2.38% on KorNLI, and +2.41% on KorSTS; in word classification, the syllable-level model performed best, by up to +0.7% on NER and +0.61% on SRL. These experimental results confirm the effectiveness of the respective schemes.
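Three of the four segmentation granularities can be computed directly; morpheme-level segmentation requires a morphological analyzer, so only the eojeol, syllable, and subchar (jamo) levels are sketched here. The jamo decomposition follows the standard Unicode arithmetic for precomposed Hangul syllables:

```python
# Segmentation granularities for Korean text. Precomposed Hangul syllables
# occupy U+AC00..U+D7A3 and decompose as 19 initials x 21 vowels x 28 finals.

CHO = "ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ"
JUNG = "ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ"
JONG = [""] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")

def eojeol(text):
    """Eojeol level: split on whitespace."""
    return text.split()

def syllable(text):
    """Syllable level: every non-space character."""
    return [ch for ch in text if ch != " "]

def subchar(text):
    """Subchar level: decompose each precomposed syllable into jamo."""
    out = []
    for ch in syllable(text):
        code = ord(ch) - 0xAC00
        if 0 <= code < 11172:
            out.append(CHO[code // 588])          # 588 = 21 * 28
            out.append(JUNG[(code % 588) // 28])
            if code % 28:
                out.append(JONG[code % 28])
        else:
            out.append(ch)                         # non-Hangul passes through
    return out

print(eojeol("한국어 분절"))    # -> ['한국어', '분절']
print(syllable("한국어 분절"))  # -> ['한', '국', '어', '분', '절']
print(subchar("한국"))          # -> ['ㅎ', 'ㅏ', 'ㄴ', 'ㄱ', 'ㅜ', 'ㄱ']
```

Each level trades vocabulary size against sequence length, which is what makes the comparison across classification tasks meaningful.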

An Analysis of Linguistic Features in Science Textbooks across Grade Levels: Focus on Text Cohesion (과학교과서의 학년 간 언어적 특성 분석 -텍스트 정합성을 중심으로-)

  • Ryu, Jisu;Jeon, Moongee
    • Journal of The Korean Association For Science Education / v.41 no.2 / pp.71-82 / 2021
  • Learning efficiency can be maximized by carefully matching text features to expected reader features (i.e., linguistic and cognitive abilities, and background knowledge). The present study explores whether this principle is reflected in the development of science textbooks. We examined science textbook texts on 20 measures provided by Auto-Kohesion, a Korean language analysis tool. In addition to the surface-level features commonly used in previous text analysis studies (basic counts, word-related measures, syntactic complexity measures), the present study included cohesion-related features as well (noun overlap ratios, connectives, pronouns). The main findings show that the surface measures (e.g., word and sentence length, word frequency) overall increased in complexity with grade level, whereas the majority of the other measures, particularly the cohesion-related ones, did not vary systematically across grade levels. These results suggest that students in lower grades may experience learning difficulties and lowered motivation due to overly challenging texts, while textbooks for higher grades may not help students develop the ability to process the difficult texts required in higher education. The study suggests that various text-related features, including cohesion-related measures, need to be carefully considered in textbook development.

Exploiting Chunking for Dependency Parsing in Korean (한국어에서 의존 구문분석을 위한 구묶음의 활용)

  • Namgoong, Young;Kim, Jae-Hoon
    • KIPS Transactions on Software and Data Engineering / v.11 no.7 / pp.291-298 / 2022
  • In this paper, we present a method for dependency parsing with chunking in Korean. Dependency parsing is the task of determining a governor for every word in a sentence. In Korean, the governor is usually determined syntactically, and the syntactic structure must then be transformed into a semantic structure for further processing, such as semantic analysis. Deciding between the syntactic and the semantic governor is a notorious problem. For example, the syntactic governor of the word "먹고 (eat)" in the sentence "밥을 먹고 싶다 (would like to eat)" is "싶다 (would like to)", an auxiliary verb that cannot be a semantic governor. To mitigate this, we propose Korean dependency parsing after chunking, the process of segmenting a sentence into constituents. A constituent is a word or a group of words that functions as a single unit within a dependency structure, called a chunk in this paper. Compared to traditional dependency parsing, the proposed method has two advantages: (1) the number of input units in parsing is reduced, so parsing can be faster; (2) parsing effectiveness can be improved by considering the relation between the head words of two chunks. Through experiments on the Sejong dependency corpus, we show that the UAS and LAS of the proposed method are 86.48% and 84.56%, respectively, and that the number of input units is reduced by about 22%.
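The first claimed advantage, fewer input units after chunking, can be illustrated with a toy chunker that merges a predicate with a following auxiliary verb. The auxiliary list and the merge rule are invented for illustration and are not the paper's chunker:

```python
# A toy chunker: a main predicate followed by an auxiliary verb is grouped
# into one chunk, so the parser no longer has to attach the auxiliary
# separately (avoiding the 먹고/싶다 governor problem described above).

AUXILIARIES = {"싶다", "있다", "버리다"}  # invented, incomplete list

def chunk(words):
    """Merge a predicate with a following auxiliary into one chunk."""
    chunks, i = [], 0
    while i < len(words):
        if i + 1 < len(words) and words[i + 1] in AUXILIARIES:
            chunks.append(words[i] + " " + words[i + 1])
            i += 2
        else:
            chunks.append(words[i])
            i += 1
    return chunks

sent = ["밥을", "먹고", "싶다"]  # 'would like to eat rice'
print(chunk(sent))  # -> ['밥을', '먹고 싶다']
```

Here three input words become two chunks, and "밥을" can attach directly to the chunk "먹고 싶다", sidestepping the choice between the syntactic governor "싶다" and the semantic governor "먹고".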