• Title/Summary/Keyword: POS-Tagging

POS-Tagging Model Combining Rules and Word Probability (규칙과 어절 확률을 이용한 혼합 품사 태깅 모델)

  • Hwang, Myeong-Jin; Kang, Mi-Young; Kwon, Hyuk-Chul
    • Proceedings of the Korean Information Science Society Conference / 2006.10b / pp.11-15 / 2006
  • This paper proposes a hybrid POS-tagging model that takes the strengths and offsets the weaknesses of two models: a rule-based tagging model whose rules are expressed through positive and negative weights, and a statistical tagging model that estimates word (eojeol) probability from morpheme unigram information and the category patterns within a word. The hybrid model first applies rule-based tagging and then applies statistical tagging to the words the rules could not resolve. Because it computes word probability using only morpheme unigrams and a parameter set indexed by within-word category patterns, it needs only a small statistical dictionary, unlike other statistics-based approaches, and the category-pattern information also mitigates data sparseness, the biggest problem of statistical approaches. In particular, since the statistical model is based on word probability, it reflects the characteristics of Korean well. The proposed hybrid model corrects 24% of the words that had been returned as errors because two or more candidates remained even after the rules were applied.

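To make the cascade concrete, here is a minimal sketch of the rule-first, statistics-fallback control flow: weighted rules filter a word's candidate analyses, and a word probability assembled from morpheme unigrams and a category-pattern parameter set breaks any remaining tie. All candidates, rules, weights, and probabilities below are invented for illustration; they are not the paper's actual resources.

```python
# Candidate analyses for the word (eojeol) "간": tuples of (morpheme, POS) pairs.
CANDIDATES = [
    (("가", "VV"), ("ㄴ", "ETM")),  # verb "go" + adnominal ending
    (("간", "NNG"),),               # the noun "liver"
    (("간", "NNB"),),               # a bound-noun reading
]

# Rules carry positive or negative weights; a candidate's score is their sum.
RULES = [
    (lambda c: c[-1][1] == "ETM", +2.0),
    (lambda c: len(c) == 1 and c[0][1] == "NNG", +2.0),
    (lambda c: len(c) == 1 and c[0][1] == "NNB", -1.5),
]

# Morpheme unigram log-probabilities and category-pattern parameters (toy values).
UNIGRAM_LOGP = {("가", "VV"): -2.1, ("ㄴ", "ETM"): -1.3, ("간", "NNG"): -5.0}
PATTERN_LOGP = {("VV", "ETM"): -0.5, ("NNG",): -1.0}

def rule_filter(cands):
    scored = [(sum(w for rule, w in RULES if rule(c)), c) for c in cands]
    top = max(s for s, _ in scored)
    return [c for s, c in scored if s == top]  # may still hold several candidates

def word_logp(cand):  # word probability from unigrams + category pattern
    pattern = tuple(pos for _, pos in cand)
    return PATTERN_LOGP.get(pattern, -10.0) + sum(
        UNIGRAM_LOGP.get(mp, -10.0) for mp in cand)

survivors = rule_filter(CANDIDATES)   # stage 1: rules eliminate what they can
best = max(survivors, key=word_logp)  # stage 2: statistics break the tie
print(best)
```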

Korean POS and Homonym Tagging System using HMM (HMM을 이용한 한국어 품사 및 동형이의어 태깅 시스템)

  • Kim, Dong-Myoung; Bae, Young-Jun; Ock, Cheol-Young; Choi, Ho-Soep; Kim, Chang-Hwan
    • Annual Conference on Human and Language Technology / 2008.10a / pp.12-16 / 2008
  • In previous natural language processing research, POS tagging and homonym tagging were treated as separate problems and were handled with different models. Noting that both problems depend on contextual information, this paper implements a system that solves both with a hidden Markov model. The proposed system extracts unigrams and bigrams from about 11 million words (eojeol) of the Sejong corpus tagged with POS and homonym information, building a word-generation probability dictionary from the unigrams and a transition probability dictionary from the bigrams. Evaluated on 261,360 words of untrained (held-out) corpus, the system achieved accuracies of 99.74% for POS tagging, 97.41% for homonym tagging, and 97.78% for combined POS and homonym tagging.

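As an illustration of the single-model setup, here is a minimal Viterbi sketch with emission probabilities from a unigram word-generation dictionary and transition probabilities from a bigram dictionary, where homonym senses (e.g. NNG_01 vs. NNG_04 for 눈) are simply distinct tags. The tiny dictionaries are invented; the paper builds its dictionaries from the Sejong corpus.

```python
import math

EMIT = {  # P(word | tag), from unigram counts (toy values)
    ("눈", "NNG_01"): 0.6,   # homonym sense "eye"
    ("눈", "NNG_04"): 0.4,   # homonym sense "snow"
    ("이", "JKS"): 0.9,
    ("오다", "VV"): 0.8,
}
TRANS = {  # P(tag_i | tag_{i-1}), from bigram counts (toy values)
    ("<s>", "NNG_01"): 0.5, ("<s>", "NNG_04"): 0.5,
    ("NNG_01", "JKS"): 0.6, ("NNG_04", "JKS"): 0.6,
    ("JKS", "VV"): 0.7,
}
TAGS = ["NNG_01", "NNG_04", "JKS", "VV"]

def viterbi(words):
    eps = 1e-12  # floor for unseen events
    trellis = [{t: (math.log(TRANS.get(("<s>", t), eps))
                    + math.log(EMIT.get((words[0], t), eps)), ["<s>"])
                for t in TAGS}]
    for w in words[1:]:
        prev, col = trellis[-1], {}
        for t in TAGS:
            best_pt = max(prev, key=lambda pt: prev[pt][0]
                          + math.log(TRANS.get((pt, t), eps)))
            score = (prev[best_pt][0]
                     + math.log(TRANS.get((best_pt, t), eps))
                     + math.log(EMIT.get((w, t), eps)))
            col[t] = (score, prev[best_pt][1] + [best_pt])
        trellis.append(col)
    last = max(trellis[-1], key=lambda t: trellis[-1][t][0])
    return trellis[-1][last][1][1:] + [last]

# POS and homonym sense are decided jointly by the same decoder.
print(viterbi(["눈", "이", "오다"]))  # ['NNG_01', 'JKS', 'VV'] under these toy numbers
```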

A Dynamic Link Model for Korean POS-Tagging (한국어 품사 태깅을 위한 다이내믹 링크 모델)

  • Hwang, Myeong-Jin; Kang, Mi-Young; Kwon, Hyuk-Chul
    • Annual Conference on Human and Language Technology / 2007.10a / pp.282-289 / 2007
  • Data sparseness is a key issue in statistical POS tagging, and it is even more severe for agglutinative languages such as Korean and Turkish, where a word (eojeol) consists of multiple morphemes. To overcome this, some studies have treated an agglutinative sentence as a sequence of morphemes rather than of words, but because word-level characteristics are lost, it becomes hard to obtain statistics between words and statistics such as changes in a word's grammatical category caused by derivation. This paper proposes a probabilistic model that resolves the data sparseness problem while preserving word-level information, by devising an efficient way to compute inter-word transition probabilities. Given the morpho-syntactic characteristics of Korean, most of the information needed for the transition probability between two words can be obtained from just two inter-word transition links: the last morpheme of the preceding word paired with either the first or the last morpheme of the following word. Based on the observation that, depending on context, only one of the two links is needed, we propose a 'dynamic link model' that uses rules to select one of the two links for the transition probability computation. Using only morpheme-POS bigrams, the model achieves 96.60% accuracy on the test corpus, on par with other models that use more contextual information, such as morpheme-POS trigrams, when evaluated on the same corpus.

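The core mechanism is easiest to see in code. In the hedged sketch below, the transition probability between two adjacent words is read from a morpheme-POS bigram table through one of two links, and a rule picks the link. The selection rule and the table are illustrative assumptions, not the paper's actual rule set.

```python
BIGRAM_P = {  # P(next POS | prev POS), morpheme-POS bigrams (toy values)
    ("JKS", "VV"): 0.30,   # subject particle -> verb stem
    ("JKS", "EF"): 0.02,   # subject particle -> final ending
    ("ETM", "NNG"): 0.40,  # adnominal ending -> noun
}

def choose_link(prev_word, next_word):
    """Return the (prev, next) morpheme-POS pair used for the transition.

    Illustrative rule: if the next word starts with a content morpheme
    (noun/verb), link to its FIRST morpheme; otherwise the grammatical
    information sits at the word's end, so link to its LAST morpheme.
    """
    prev_pos = prev_word[-1][1]                      # last morpheme of word A
    first_pos, last_pos = next_word[0][1], next_word[-1][1]
    next_pos = first_pos if first_pos.startswith(("N", "V")) else last_pos
    return prev_pos, next_pos

prev_word = [("눈", "NNG"), ("이", "JKS")]   # noun + subject particle
next_word = [("오", "VV"), ("ㄴ다", "EF")]   # verb stem + final ending
link = choose_link(prev_word, next_word)
print(link, BIGRAM_P.get(link, 1e-6))        # ('JKS', 'VV') 0.3
```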

Terms Based Sentiment Classification for Online Review Using Support Vector Machine (Support Vector Machine을 이용한 온라인 리뷰의 용어기반 감성분류모형)

  • Lee, Taewon; Hong, Taeho
    • Information Systems Review / v.17 no.1 / pp.49-64 / 2015
  • Customer reviews containing subjective opinions about products or services in online stores are generated rapidly, and their influence on customers has become immense with the widespread use of SNS. Accordingly, many studies have focused on opinion mining to analyze positive and negative opinions for better customer support and sales. Selecting the key terms that reflect customer sentiment in the reviews is crucial for opinion mining. We propose a document-level, terms-based sentiment classification model that selects optimal terms with part-of-speech tags. SVMs (support vector machines) are used to build the opinion-mining predictor, and the combination of POS tags and four term-extraction methods is used for SVM feature selection. To validate the proposed model, we applied it to customer reviews on Amazon. After crawling 80,000 reviews, we eliminated meaningless terms (stopwords) and extracted useful terms using a part-of-speech tagging approach. Terms were ranked by document frequency, TF-IDF, information gain, and the chi-squared statistic, and the top 20 terms were used as SVM features. Our experimental results show that the SVM model with four POS tags outperforms the benchmark model, which was built by extracting only adjective terms. In addition, the SVM model based on the chi-squared statistic performs best among the SVM models built with the four term-extraction methods. The proposed opinion-mining model is expected to improve customer service and provide competitive advantage for online stores.
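
A compressed scikit-learn sketch of this pipeline, under stated assumptions: a stopword filter stands in for the POS-based term filtering, four invented reviews replace the 80,000 Amazon reviews, and the top-k chi-squared terms (k=3 here, 20 in the paper) feed a linear SVM.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

reviews = [
    "great product works perfectly love it",
    "terrible quality broke after one day",
    "excellent value fast shipping very happy",
    "awful experience waste of money",
]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

model = make_pipeline(
    TfidfVectorizer(stop_words="english"),  # stand-in for POS-based filtering
    SelectKBest(chi2, k=3),                 # paper ranks terms and keeps the top 20
    LinearSVC(),
)
model.fit(reviews, labels)
print(model.predict(["love this excellent product"]))  # predicted sentiment label
```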

Two Statistical Models for Automatic Word Spacing of Korean Sentences (한글 문장의 자동 띄어쓰기를 위한 두 가지 통계적 모델)

  • Lee, Do-Gil; Lee, Sang-Zoo; Lim, Heui-Seok; Rim, Hae-Chang
    • Journal of KIISE: Software and Applications / v.30 no.3_4 / pp.358-371 / 2003
  • Automatic word spacing is the process of deciding the correct boundaries between words in a sentence containing spacing errors. It is very important for increasing readability and for communicating the accurate meaning of text to the reader. Previous statistical approaches to automatic word spacing do not consider the previous spacing state and thus inevitably estimate inaccurate probabilities. In this paper, we propose two statistical word-spacing models that solve this problem. The proposed models are based on the observation that automatic word spacing can be regarded as a classification problem, like POS tagging. By generalizing hidden Markov models, the models can consider broader context and estimate more accurate probabilities. We experimented with the proposed models under a wide range of conditions to compare them with the current state of the art, and we provide a detailed error analysis. The results show that the proposed models achieve a syllable-unit accuracy of 98.33% and an Eojeol-unit precision of 93.06% under an evaluation method that considers compound nouns.
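
The key point, that spacing decisions should condition on the previous spacing state, can be shown with a toy model: each syllable receives a binary tag (1 = a space follows it) whose probability depends on the syllable and the previous tag. The probability table is invented, and exhaustive search stands in for Viterbi-style decoding.

```python
from itertools import product

SENT = "아버지가방에"  # "아버지가 방에" with the space removed

P_SPACE = {  # P(space after syllable | syllable, previous tag); toy values
    ("가", 0): 0.9,  # a space very likely follows the particle "가"
    ("에", 0): 0.8,
}

def tag_prob(syl, prev_tag, tag):
    p1 = P_SPACE.get((syl, prev_tag), 0.1)  # default: a space is unlikely
    return p1 if tag == 1 else 1.0 - p1

def seq_prob(tags):
    prob, prev = 1.0, 0
    for syl, tag in zip(SENT, tags):
        prob *= tag_prob(syl, prev, tag)  # conditions on the PREVIOUS tag
        prev = tag
    return prob

best = max(product([0, 1], repeat=len(SENT)), key=seq_prob)
print(best)  # (0, 0, 0, 1, 0, 1) -> "아버지가 방에"
```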

Utilizing Various Natural Language Processing Techniques for Biomedical Interaction Extraction

  • Park, Kyung-Mi; Cho, Han-Cheol; Rim, Hae-Chang
    • Journal of Information Processing Systems / v.7 no.3 / pp.459-472 / 2011
  • The vast body of biomedical literature is an important source for discovering biomedical interaction information. However, obtaining interaction information from it is complicated because most of the literature is not easily machine-readable. In this paper, we present a method for extracting biomedical interaction information, assuming that the biomedical Named Entities (NEs) are already identified. The proposed method labels all possible pairs of given biomedical NEs as INTERACTION or NO-INTERACTION using a Maximum Entropy (ME) classifier. The features for the classifier are obtained by applying various NLP techniques such as POS tagging, base-phrase recognition, parsing, and predicate-argument recognition. In particular, specific verb predicates (e.g., activate, inhibit, diminish) and their biomedical NE arguments are very useful features for identifying interactive NE pairs. Based on this, we devised a two-step method: 1) an interaction-verb extraction step to find biomedically salient verbs, and 2) an argument-relation identification step to generate partial predicate-argument structures between the extracted interaction verbs and their NE arguments. In the experiments, we analyzed how much each applied NLP technique improves performance. The proposed method improves on the baseline method by more than 2%, and the use of external contextual features, obtained from outside the NEs, is crucial for this improvement. We also compare the proposed method against co-occurrence-based and rule-based methods; the results demonstrate that it considerably improves performance.
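
To ground the classification setup, the sketch below builds one instance per NE pair and trains a maximum-entropy (logistic regression) classifier on it. The feature set is reduced to the tokens between the two NEs plus their distance, a stand-in for the paper's richer POS, parse, and predicate-argument features; all sentences are invented.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def featurize(sentence, ne1, ne2):
    between = sentence.split(ne1)[-1].split(ne2)[0].split()
    feats = {f"between={w}": 1 for w in between}  # e.g. the verb predicate
    feats["ne_distance"] = len(between)
    return feats

train = [
    (("IL-2 activates STAT5 in T cells", "IL-2", "STAT5"), 1),       # INTERACTION
    (("TNF inhibits apoptosis via NF-kB", "TNF", "NF-kB"), 1),       # INTERACTION
    (("p53 and BRCA1 were measured separately", "p53", "BRCA1"), 0), # NO-INTERACTION
]
X = [featurize(*inst) for inst, _ in train]
y = [label for _, label in train]

model = make_pipeline(DictVectorizer(), LogisticRegression())
model.fit(X, y)
test = featurize("EGF activates MAPK signaling", "EGF", "MAPK")
print(model.predict([test]))  # likely [1]: 'activates' was seen with INTERACTION
```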

E-commerce data based Sentiment Analysis Model Implementation using Natural Language Processing Model (자연어처리 모델을 이용한 이커머스 데이터 기반 감성 분석 모델 구축)

  • Choi, Jun-Young; Lim, Heui-Seok
    • Journal of the Korea Convergence Society / v.11 no.11 / pp.33-39 / 2020
  • In the field of natural language processing, research on tasks such as translation, POS tagging, question answering, and sentiment analysis is being carried out worldwide. For sentiment analysis, pretrained sentence-embedding models show high classification performance on English single-domain datasets. In this paper, classification performance is compared on a Korean e-commerce online dataset with diverse domain attributes, and six neural-net models are built: BOW (Bag of Words), LSTM[1], Attention, CNN[2], ELMo[3], and BERT (KoBERT)[4]. We confirm that pretrained sentence-embedding models outperform word-embedding models. In addition, a practical neural-net model composition is proposed after comparing classification performance on a dataset with 17 categories. As future work, we suggest compressing the sentence-embedding model, weighing inference time against model capacity for real-time service.
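
As a sketch of the strongest configuration reported above (a pretrained sentence-embedding model fine-tuned for sentiment), the snippet below runs one fine-tuning step of a BERT-family encoder on two invented reviews. `bert-base-multilingual-cased` is a stand-in for the paper's KoBERT checkpoint, and a real setup would batch, shuffle, and iterate over the full dataset.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Stand-in for a Korean pretrained checkpoint such as KoBERT.
MODEL_NAME = "bert-base-multilingual-cased"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

reviews = ["배송이 빠르고 품질이 좋아요", "광고와 달라서 실망했어요"]  # invented pos/neg reviews
labels = torch.tensor([1, 0])

batch = tokenizer(reviews, padding=True, truncation=True, return_tensors="pt")
loss = model(**batch, labels=labels).loss  # built-in cross-entropy loss
loss.backward()
optimizer.step()                           # one illustrative update step
print(model(**batch).logits.argmax(dim=-1))
```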

Detecting Inconsistent Code Identifiers (코드 비 일관적 식별자 검출 기법)

  • Lee, Sungnam; Kim, Suntae; Park, Sooyoung
    • KIPS Transactions on Software and Data Engineering / v.2 no.5 / pp.319-328 / 2013
  • Software maintainers comprehend source code largely through its identifiers, so inconsistent identifiers throughout the source code increase the cost of software maintenance. Although peer review can address this problem, going through the entire source code may be infeasible when the code base is large. This paper introduces an approach to automatically detecting inconsistent identifiers in Java source code. The approach consists of tokenizing and POS-tagging all identifiers in the source code, classifying syntactically and semantically similar terms, and finally detecting inconsistent identifiers by applying the proposed rules. We have also developed tool support, named CodeAmigo, for the approach, and applied it to two popular Java-based open-source projects to show the feasibility of the approach by measuring precision.
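
A condensed illustration of the detection idea: identifiers are split into terms, terms are mapped to concepts, and a concept realized by several different surface terms is flagged. The synonym table and the single rule are simplified stand-ins for the paper's POS tagging and rule set; CodeAmigo itself is not reproduced here.

```python
import re
from collections import defaultdict

SYNONYMS = {"remove": "delete", "erase": "delete"}  # assumed concept map

def tokenize(identifier):
    # Split camelCase and snake_case identifiers into lowercase terms.
    parts = re.split(r"_|(?<=[a-z0-9])(?=[A-Z])", identifier)
    return [p.lower() for p in parts if p]

def canonical(term):
    return SYNONYMS.get(term, term)

identifiers = ["deleteUser", "removeAccount", "erase_session", "addUser"]
groups = defaultdict(list)
for ident in identifiers:
    for term in tokenize(ident):
        groups[canonical(term)].append((term, ident))

for concept, uses in groups.items():
    surface_forms = {t for t, _ in uses}
    if len(surface_forms) > 1:  # one concept, several spellings: inconsistent
        print(f"inconsistent terms for '{concept}': {sorted(surface_forms)}")
```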

Predicate Recognition Method using BiLSTM Model and Morpheme Features (BiLSTM 모델과 형태소 자질을 이용한 서술어 인식 방법)

  • Nam, Chung-Hyeon; Jang, Kyung-Sik
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.1 / pp.24-29 / 2022
  • Semantic role labeling, used in various natural language processing applications such as information extraction and question answering, is the task of identifying the arguments for a given sentence and predicate. The predicates used as input for semantic role labeling are extracted using lexical analysis results such as POS tagging, but such extraction cannot cover all linguistic patterns, because Korean predicates take various forms depending on the meaning of the sentence. In this paper, we propose a Korean predicate recognition method that uses a neural network model with pretrained embedding models and lexical features. The experiments compare performance across model hyperparameters and with or without the embedding models and lexical features. As a result, we confirm that the proposed neural network model achieves 92.63%.
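
For concreteness, below is a compact PyTorch sketch of the model family named above: a BiLSTM over concatenated token and morpheme-feature embeddings with a per-token predicate/non-predicate decision. All dimensions, vocabulary sizes, and the random inputs are placeholders, and the paper additionally uses pretrained embedding models.

```python
import torch
import torch.nn as nn

class PredicateTagger(nn.Module):
    def __init__(self, vocab_size=1000, pos_size=40, emb=64, pos_emb=16, hidden=128):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb)
        self.pos_emb = nn.Embedding(pos_size, pos_emb)  # morpheme/POS features
        self.bilstm = nn.LSTM(emb + pos_emb, hidden, batch_first=True,
                              bidirectional=True)
        self.out = nn.Linear(2 * hidden, 2)             # predicate vs. not

    def forward(self, words, pos_tags):
        x = torch.cat([self.word_emb(words), self.pos_emb(pos_tags)], dim=-1)
        h, _ = self.bilstm(x)
        return self.out(h)                              # (batch, seq, 2) logits

model = PredicateTagger()
words = torch.randint(0, 1000, (1, 7))    # one 7-token sentence (dummy ids)
pos_tags = torch.randint(0, 40, (1, 7))   # dummy morpheme-feature ids
logits = model(words, pos_tags)
print(logits.argmax(dim=-1))              # per-token predicate decisions
```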

PPEditor: Semi-Automatic Annotation Tool for Korean Dependency Structure (PPEditor: 한국어 의존구조 부착을 위한 반자동 말뭉치 구축 도구)

  • Kim, Jae-Hoon; Park, Eun-Jin
    • The KIPS Transactions: Part B / v.13B no.1 s.104 / pp.63-70 / 2006
  • In general, a corpus contains a great deal of linguistic information and is widely used in natural language processing and computational linguistics. Creating such a corpus, however, is expensive, labor-intensive, and time-consuming. To alleviate this problem, annotation tools for building corpora rich in linguistic information are indispensable. In this paper, we design and implement an annotation tool for building a Korean dependency tree-tagged corpus. Ideally the corpus would be created fully automatically, without annotator intervention, but in practice this is impossible. Like most other annotation tools, the proposed tool is semi-automatic: it is designed for editing the errors generated by basic analyzers such as a part-of-speech tagger and a (partial) parser. It is also designed to avoid repetitive work during error editing and to be easy and friendly to use. Using the proposed tool, 10,000 Korean sentences of over 20 words each were annotated with dependency structures; eight annotators worked four hours a day for two months. We are confident that the tool yields accurate and consistent annotations as well as reduced labor and time.