Title/Summary/Keyword: Text Chunking

Text Chunking by Rule and Lexical Information (규칙과 어휘정보를 이용한 한국어 문장의 구묶음(Chunking))

  • Kim, Mi-Young; Kang, Sin-Jae; Lee, Jong-Hyeok
    • Proceedings of the Korean Society for Cognitive Science Conference / 2000.06a / pp.103-109 / 2000
  • This paper proposes applying a chunking step first, for efficient Korean syntactic analysis. Although Korean word order is relatively free, regular ordering can be found within noun phrases and verb phrases, so rule-based chunking is feasible. Rules alone, however, have limits in grouping noun and verb phrases, so lexical information extracted from an experimental corpus is also applied to the chunking process. Most existing parsing methods are based on phrase-structure grammar or dependency grammar; such parsers spend considerable time producing many candidate analyses, and pruning the incorrect ones among them is difficult. Applying the chunking step proposed in this paper prevents incorrect parses in advance and also restricts the scope of dependency relations in dependency-grammar parsing.
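The rule-based stage of such a chunker is easy to picture. Below is a minimal sketch assuming a toy POS tagset (NNG, JKS, VV, ...) and two illustrative ordering rules; the paper's actual rules and lexical lists are not reproduced here.

```python
# Minimal sketch of rule-based chunking over a POS-tagged Korean sentence.
# The tagset, the two rules, and the example sentence are illustrative.

# A sentence as (morpheme, POS) pairs, e.g. from a morphological analyzer.
sentence = [("예쁜", "VA+ETM"), ("꽃", "NNG"), ("이", "JKS"),
            ("활짝", "MAG"), ("피", "VV"), ("었", "EP"), ("다", "EF")]

def chunk(tagged):
    """Group adjacent tokens into NP/VP chunks with simple ordering rules."""
    chunks, i = [], 0
    while i < len(tagged):
        pos = tagged[i][1]
        if pos.endswith("ETM") or pos.startswith("NN"):
            # NP rule: (adnominal)* noun+ (case postposition)?
            j = i
            while j < len(tagged) and (tagged[j][1].endswith("ETM")
                                       or tagged[j][1].startswith("NN")):
                j += 1
            if j < len(tagged) and tagged[j][1].startswith("JK"):
                j += 1                        # absorb the case postposition
            chunks.append(("NP", tagged[i:j]))
            i = j
        elif pos.startswith("VV") or pos.startswith("VA"):
            # VP rule: verb stem followed by endings (EP/EF/EC)
            j = i + 1
            while j < len(tagged) and tagged[j][1] in ("EP", "EF", "EC"):
                j += 1
            chunks.append(("VP", tagged[i:j]))
            i = j
        else:
            chunks.append(("O", [tagged[i]]))  # outside any chunk
            i += 1
    return chunks

for label, toks in chunk(sentence):
    print(label, [m for m, _ in toks])        # NP / O / VP
```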

A Hybrid of Rule-based Method and Memory-based Learning for Korean Text Chunking (한국어 구 단위화를 위한 규칙 기반 방법과 기억 기반 학습의 결합)

  • Park, Seong-Bae; Zhang, Byoung-Tak
    • Journal of KIISE: Software and Applications / v.31 no.3 / pp.369-378 / 2004
  • In partially free word-order languages such as Korean and Japanese, rule-based methods are effective for text chunking: thanks to well-developed overt postpositions and endings, even a few rules perform as well as machine learning methods. However, rules have no ability to handle their own exceptions. Exception handling is an important task in natural language processing, and exceptions can be processed efficiently by memory-based learning. In this paper, we propose a hybrid of a rule-based method and memory-based learning for Korean text chunking. The proposed method is primarily based on the rules; the chunks estimated by the rules are then verified by a memory-based classifier. An evaluation on the Korean STEP 2000 corpus yields an improvement in F-score over the rules or various machine learning methods alone: the final F-score is 94.19, while those of the rules and of SVMs, the best machine learning method for this task, are 91.87 and 92.54, respectively.
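The division of labor described here (rules propose, memory keeps the exceptions) can be sketched as follows. The toy rule, the features, and the use of scikit-learn's 1-nearest-neighbour classifier as a stand-in for the memory-based learner are assumptions for illustration, not the paper's actual rules or instance base.

```python
# Rules propose an IOB chunk tag; a 1-NN classifier over stored training
# instances ("memory") verifies or overrides the proposal.
from sklearn.feature_extraction import DictVectorizer
from sklearn.neighbors import KNeighborsClassifier

def rule_tag(pos, prev_pos):
    """Toy stand-in for the paper's chunking rules."""
    if pos.startswith("NN"):
        return "I-NP" if prev_pos.startswith("NN") else "B-NP"
    if pos.startswith("VV"):
        return "B-VP"
    return "O"

# Training memory: features (current POS, previous POS, rule's proposal)
# paired with gold tags; here the gold corrects the rule on a postposition.
train_feats = [{"pos": "NNG", "prev": "BOS", "rule": "B-NP"},
               {"pos": "JKS", "prev": "NNG", "rule": "O"},
               {"pos": "VV",  "prev": "JKS", "rule": "B-VP"}]
train_tags = ["B-NP", "I-NP", "B-VP"]

vec = DictVectorizer()
knn = KNeighborsClassifier(n_neighbors=1).fit(vec.fit_transform(train_feats),
                                              train_tags)

def hybrid_tag(pos, prev_pos):
    feats = {"pos": pos, "prev": prev_pos, "rule": rule_tag(pos, prev_pos)}
    return knn.predict(vec.transform([feats]))[0]

print(hybrid_tag("JKS", "NNG"))   # memory overrides the rule: I-NP
```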

Improving the Performance of Korean Text Chunking by Machine Learning Approaches based on Feature Set Selection (자질집합선택 기반의 기계학습을 통한 한국어 기본구 인식의 성능향상)

  • Hwang, Young-Sook; Chung, Hoo-jung; Park, So-Young; Kwak, Young-Jae; Rim, Hae-Chang
    • Journal of KIISE: Software and Applications / v.29 no.9 / pp.654-668 / 2002
  • In this paper, we present an empirical study on improving Korean text chunking through machine learning and feature-set selection. We focus on two issues: selecting a feature set for Korean chunking, and alleviating data sparseness. To select a proper feature set, we use a heuristic search through the space of feature sets, with the performance estimated by a machine learning algorithm serving as a measure of the "incremental usefulness" of a particular feature set. To smooth data sparseness, we suggest using a general part-of-speech tag set and selective lexical information, taking the characteristics of Korean into account. Experimental results show that chunk tags and lexical information within a given context window are important features, while spacing-unit information matters less; these findings hold independently of the machine learning technique. Furthermore, using selective lexical information not only provides a smoothing effect but also reduces the feature space compared to using all lexical information. With the selected feature set, Korean text chunking based on memory-based learning and on decision-tree learning achieved precision/recall of 90.99%/92.52% and 93.39%/93.41%, respectively.
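The "incremental usefulness" search can be sketched as a greedy forward-selection loop. The decision-tree learner, cv=5, the stopping threshold, and the build_X helper below are illustrative choices, not the paper's exact procedure.

```python
# Greedy forward feature-set selection by estimated performance gain.
# build_X(features) is assumed to build the training matrix for a subset.
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def select_features(candidates, build_X, y, min_gain=0.001):
    selected, best_score = [], 0.0
    while True:
        remaining = [c for c in candidates if c not in selected]
        if not remaining:
            return selected, best_score
        # Score each candidate by its "incremental usefulness": estimated
        # performance of the learner with the candidate added to the set.
        scored = [(cross_val_score(DecisionTreeClassifier(),
                                   build_X(selected + [f]), y, cv=5).mean(), f)
                  for f in remaining]
        score, best_f = max(scored)
        if score - best_score < min_gain:   # no candidate helps enough: stop
            return selected, best_score
        selected.append(best_f)
        best_score = score
```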

A Corpus-based Lexical Analysis of the Speech Texts: A Collocational Approach

  • Kim, Nahk-Bohk
    • English Language & Literature Teaching / v.15 no.3 / pp.151-170 / 2009
  • Recently, speech texts have been increasingly used in English education because of their various advantages as language teaching and learning materials. The purpose of this paper is to analyze speech texts with a corpus-based lexical approach and to suggest productive methods that use them as the main resource for English speaking or writing courses, along with actual classroom adaptations. First, the study shows that a speech corpus has unique features, such as different selections of pronouns, nouns, and lexical chunks, in comparison to a general corpus. Next, from a collocational perspective, the study demonstrates that the speech corpus contains a wide variety of the collocations and lexical chunks that a number of linguists describe (Lewis, 1997; McCarthy, 1990; Willis, 1990). In other words, speech texts not only have considerable lexical potential that could be exploited to facilitate chunk learning, but learners are unlikely to unlock this potential autonomously. Based on this result, teachers can develop a learners' corpus and use it by chunking the speech text. This approach of adapting speech samples as core materials for college students' speaking and writing should be implemented as shown in the samples. Finally, to foster learners' productive skills more communicatively, a few practical suggestions are made, such as chunking and windowing chunks of speech and presentation, and the pedagogical implications are discussed.
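As a rough illustration of what a collocational pass over a speech corpus involves, the sketch below scores adjacent word pairs by pointwise mutual information so recurring lexical chunks surface at the top; the tiny placeholder transcript and bigram-only scope are simplifications, not the paper's method.

```python
# Score adjacent word pairs in a (placeholder) speech transcript by PMI.
import math
from collections import Counter

tokens = ("thank you thank you so much it is such an honor "
          "to be here with you tonight").split()

unigrams = Counter(tokens)
bigrams = Counter(zip(tokens, tokens[1:]))
n = len(tokens)

def pmi(pair):
    w1, w2 = pair
    return math.log2((bigrams[pair] / (n - 1)) /
                     ((unigrams[w1] / n) * (unigrams[w2] / n)))

# Highest-PMI pairs are candidate lexical chunks for teaching materials.
for pair in sorted(bigrams, key=pmi, reverse=True)[:5]:
    print(pair, round(pmi(pair), 2))
```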

Learning Text Chunking Using Maximum Entropy Models (최대 엔트로피 모델을 이용한 텍스트 단위화 학습)

  • Park, Seong-Bae; Zhang, Byoung-Tak
    • Annual Conference on Human and Language Technology / 2001.10d / pp.130-137 / 2001
  • Maximum entropy models have been applied successfully to many natural language learning problems, but they have two major drawbacks: they require a great deal of prior knowledge about the language, and their computational cost is high. This paper presents a new method that resolves both problems in applying maximum entropy models to text chunking. As prior knowledge, it uses rules generated automatically from a decision tree that is easily built from a simple language model; the maximum entropy model in the proposed method can thus be viewed as a way of reinforcing the decision tree. To reduce computational complexity, a form of active learning is used when training the maximum entropy model: by using only a portion of the training data rather than all of it, the computational cost is greatly reduced. Experimental results show that the proposed method halves the number of errors made by the decision tree. Since most natural language data is highly imbalanced, the learned model can be strengthened further by boosting. After boosting, the proposed method outperforms a maximum entropy model trained with expert-selected features and performs comparably to the best machine learning algorithms reported to date. Because text chunking is generally a preliminary step to full parsing and errors at this stage cannot be recovered in later stages, this performance is highly significant for text chunking.
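The two ideas, tree-derived prior knowledge and active learning, can be sketched together as below. Using scikit-learn's logistic regression as the maximum entropy learner and tree leaf indices as the tree-derived features is an illustrative reading, not the paper's exact formulation; X is assumed to be a dense feature matrix and y a label array.

```python
# Decision-tree leaves as prior features for a maximum entropy (multinomial
# logistic) model, trained with uncertainty-based active learning on only a
# subset of the data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import OneHotEncoder
from sklearn.tree import DecisionTreeClassifier

def maxent_over_tree(X, y, rounds=3, batch=100, seed=0):
    tree = DecisionTreeClassifier(max_depth=5).fit(X, y)
    leaves = OneHotEncoder(handle_unknown="ignore")
    X_aug = np.hstack([X, leaves.fit_transform(
        tree.apply(X).reshape(-1, 1)).toarray()])

    # Seed pool (assumed to cover all classes), then active learning:
    # repeatedly add the examples the current model is least confident on.
    rng = np.random.default_rng(seed)
    pool = set(rng.choice(len(y), size=batch, replace=False).tolist())
    for _ in range(rounds):
        idx = sorted(pool)
        model = LogisticRegression(max_iter=1000).fit(X_aug[idx], y[idx])
        confidence = model.predict_proba(X_aug).max(axis=1)
        pool |= set(np.argsort(confidence)[:batch].tolist())
    return model
```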

Examining Line-breaks in Korean Language Textbooks: the Promotion of Word Spacing and Reading Skills (한국어 교재의 행 바꾸기 -띄어쓰기와 읽기 능력의 계발 -)

  • Cho, In Jung; Kim, Danbee
    • Journal of Korean language education / v.23 no.1 / pp.77-100 / 2012
  • This study investigates issues related to text segmenting, in particular line breaks, in Korean language textbooks. Research on L1 and L2 reading has shown that readers process texts by chunking (grouping words into phrases or meaningful syntactic units), and phrase-cued texts are therefore helpful for readers whose syntactic knowledge has not yet fully developed. In other words, it is important for language textbooks, particularly those for beginner and intermediate learners, to avoid awkward syntactic divisions at the end of a line. According to our analysis of a number of major Korean language textbooks for beginner-level learners, however, many textbooks display line breaks at awkward syntactic divisions. Moreover, some textbooks frequently split a single word (an eojeol in the case of Korean) across two lines. This can hamper not only learners' acquisition of the rules for spacing between eojeols in Korean, but also their development of automatic word recognition, an essential part of the reading process. Based on the findings of our textbook analysis and of existing research on reading, this study suggests ways to overcome awkward line breaks in Korean language textbooks.
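The phrase-cued layout recommended here reduces to a simple constraint: break lines only between phrases, never inside an eojeol. A minimal sketch, assuming the phrase boundaries are already supplied (by hand or by a chunker):

```python
# Wrap text at phrase boundaries only, never splitting an eojeol.
def wrap_phrases(phrases, width):
    """phrases: list of strings, each a syntactic unit that should stay
    together; returns the wrapped lines."""
    lines, current = [], ""
    for phrase in phrases:
        candidate = (current + " " + phrase).strip()
        if len(candidate) <= width:
            current = candidate
        else:
            if current:
                lines.append(current)
            current = phrase      # a phrase may overflow, but is never split
    if current:
        lines.append(current)
    return lines

print(wrap_phrases(["저는 어제", "친구와 함께", "영화를 봤어요."], width=12))
```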

Decision-Tree-Based Markov Model for Phrase Break Prediction

  • Kim, Sang-Hun; Oh, Seung-Shin
    • ETRI Journal / v.29 no.4 / pp.527-529 / 2007
  • In this paper, a decision-tree-based Markov model for phrase break prediction is proposed. The model takes advantage of the decision tree's ability to classify non-homogeneous features and of temporal break-sequence modeling based on the Markov process. For the experiment, a text corpus tagged with parts of speech and three break-strength levels is prepared and evaluated. A complex feature set, textual conditions, and prior knowledge are utilized, and chunking rules are applied to the search results. The proposed model shows an error reduction rate of about 11.6% compared to the conventional classification model.
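The combination can be sketched as a Viterbi search that joins per-boundary scores from a classifier with a Markov model over break sequences. The probabilities below are toy values standing in for those the paper estimates from its tagged corpus.

```python
# Viterbi decoding of break strengths: decision-tree scores x Markov model.
import math

BREAKS = ["none", "minor", "major"]

def tree_prob(features, b):
    """P(break | local features) -- a toy stand-in for the decision tree."""
    table = {("NN", "VV"): {"none": 0.7, "minor": 0.2, "major": 0.1},
             ("VV", "NN"): {"none": 0.2, "minor": 0.5, "major": 0.3}}
    return table.get(features, {"none": 1/3, "minor": 1/3, "major": 1/3})[b]

# P(b_t | b_{t-1}) -- the Markov break-sequence model (rows sum to 1).
trans = {b1: {b2: (0.6 if b1 == b2 else 0.2) for b2 in BREAKS}
         for b1 in BREAKS}

def viterbi(feature_seq):
    score = {b: math.log(tree_prob(feature_seq[0], b)) for b in BREAKS}
    back = []
    for feats in feature_seq[1:]:
        new, ptr = {}, {}
        for b in BREAKS:
            prev = max(BREAKS, key=lambda p: score[p] + math.log(trans[p][b]))
            new[b] = (score[prev] + math.log(trans[prev][b])
                      + math.log(tree_prob(feats, b)))
            ptr[b] = prev
        score, back = new, back + [ptr]
    best = max(BREAKS, key=score.get)       # trace back the best sequence
    seq = [best]
    for ptr in reversed(back):
        seq.append(ptr[seq[-1]])
    return list(reversed(seq))

print(viterbi([("NN", "VV"), ("VV", "NN"), ("NN", "VV")]))
```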

Resolving Prepositional Phrase Attachment Using a Maximum Entropy Boosting Model (최대 엔트로피 부스팅 모델을 이용한 전치사 접속 모호성 해소)

  • Park, Seong-Bae; Zhang, Byoung-Tak
    • Proceedings of the Korean Information Science Society Conference / 2002.10d / pp.670-672 / 2002
  • Park and Zhang proposed the maximum entropy boosting model to resolve several problems that arise when applying maximum entropy models to real natural language processing tasks, and applied it successfully to text chunking. The maximum entropy boosting model has the advantages of simple modeling and high performance. In this paper, we apply the maximum entropy boosting model to resolving prepositional-phrase attachment ambiguity in English. Experiments on the Wall Street Journal corpus show an accuracy of 84.3% with very little effort, comparable to the best performance reported to date.
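A sketch of the task setup follows. Note the stand-in: scikit-learn's AdaBoost over a logistic (maximum entropy) base learner illustrates the idea of boosting a maximum entropy model, but it is not Park and Zhang's maximum entropy boosting model itself, and the data rows are invented.

```python
# PP attachment as a binary decision (verb vs. noun attachment) over
# quadruples (v, n1, p, n2), with boosting over a maximum entropy learner.
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# (verb, head noun, preposition, object noun) -> attachment site
data = [("ate", "pizza", "with", "fork", "V"),
        ("ate", "pizza", "with", "anchovies", "N"),
        ("saw", "man", "with", "telescope", "V")]

feats = [{"v": v, "n1": n1, "p": p, "n2": n2, "p+n2": p + "_" + n2}
         for v, n1, p, n2, _ in data]
labels = [row[-1] for row in data]

vec = DictVectorizer()
X = vec.fit_transform(feats)

model = AdaBoostClassifier(estimator=LogisticRegression(max_iter=1000),
                           n_estimators=10).fit(X, labels)
print(model.predict(vec.transform([{"v": "ate", "n1": "pasta", "p": "with",
                                    "n2": "spoon", "p+n2": "with_spoon"}])))
```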

Consolidation of Subtasks for Target Task in Pipelined NLP Model

  • Son, Jeong-Woo; Yoon, Heegeun; Park, Seong-Bae; Cho, Keeseong; Ryu, Won
    • ETRI Journal / v.36 no.5 / pp.704-713 / 2014
  • Most natural language processing tasks depend on the outputs of other tasks and thus involve them as subtasks. The main problem of this type of pipelined model is that the optimality of the subtasks, which are trained on their own data, is not guaranteed for the final target task, since the subtasks are not optimized with respect to it. As a solution to this problem, this paper proposes a consolidation of subtasks for a target task (CST^2). In CST^2, all parameters of a target task and its subtasks are optimized to fulfill the objective of the target task; CST^2 finds such optimized parameters through a backpropagation algorithm. In experiments in which text chunking is the target task and part-of-speech tagging is its subtask, CST^2 outperforms a traditional pipelined text chunker. The experimental results prove the effectiveness of optimizing subtasks with respect to the target task.
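The consolidation principle, optimizing subtask parameters with respect to the target task by backpropagating through both, can be sketched with a shared encoder and two heads. This is a modern PyTorch rendering of the idea, not the paper's architecture; all sizes, the dummy data, and the 0.5 loss weighting are illustrative.

```python
# POS tagging (subtask) and chunking (target) trained through one joint loss,
# so gradients from the chunking objective also shape the tagger.
import torch
import torch.nn as nn

class JointTaggerChunker(nn.Module):
    def __init__(self, vocab=1000, emb=64, hidden=64, n_pos=20, n_chunk=7):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.encoder = nn.LSTM(emb, hidden, batch_first=True,
                               bidirectional=True)
        self.pos_head = nn.Linear(2 * hidden, n_pos)              # subtask
        # The chunking head sees the encoder state and the POS distribution,
        # so the chunking loss backpropagates into the tagger as well.
        self.chunk_head = nn.Linear(2 * hidden + n_pos, n_chunk)  # target

    def forward(self, tokens):
        h, _ = self.encoder(self.embed(tokens))
        pos_logits = self.pos_head(h)
        chunk_in = torch.cat([h, pos_logits.softmax(-1)], dim=-1)
        return pos_logits, self.chunk_head(chunk_in)

model = JointTaggerChunker()
opt = torch.optim.Adam(model.parameters())
loss_fn = nn.CrossEntropyLoss()

tokens = torch.randint(0, 1000, (2, 12))      # dummy batch: 2 sentences
pos_gold = torch.randint(0, 20, (2, 12))
chunk_gold = torch.randint(0, 7, (2, 12))

pos_logits, chunk_logits = model(tokens)
# One joint loss: the target (chunking) objective also drives the subtask.
loss = (loss_fn(chunk_logits.flatten(0, 1), chunk_gold.flatten())
        + 0.5 * loss_fn(pos_logits.flatten(0, 1), pos_gold.flatten()))
opt.zero_grad()
loss.backward()
opt.step()
```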