• Title/Summary/Keyword: as구문

Search Results: 425

Parser as An Analysis Finisher (분석의 최종 판단자로서의 구문 분석기)

  • Yuh, Sang Hwa
    • Proceedings of the Korea Information Processing Society Conference / 2004.05a / pp.677-680 / 2004
  • A typical language-processing pipeline analyzes text in several stages: preprocessing, morphological analysis, part-of-speech tagging, compound-unit recognition, syntactic analysis, and semantic analysis. Ambiguity arises at every stage, and in an effort to resolve it, even the stages before parsing use lexical, part-of-speech, and syntactic information to raise precision. Using syntactic information as high-level knowledge at each stage, however, duplicates both the parsing work and the analysis knowledge. Moreover, in the conventional pipeline the result of each stage is treated as final, so analysis errors are propagated to the following stages. This paper proposes using the parser as the final judge of the analysis results: all information produced before parsing is handed to the parser, which performs bottom-up parsing and selects the final, optimal analysis candidate from that information. To this end, the parser does not follow the conventional restriction of taking exactly one sentence as input. The proposed method is free from erroneous information supplied by earlier stages (e.g., sentence-boundary errors, part-of-speech errors, compound-unit recognition errors), and thereby minimizes the chance of analysis failure.

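
The proposed flow, in which earlier stages pass all of their candidates forward and the bottom-up parser makes the final choice, can be sketched roughly as follows. The grammar rules, categories, and candidate sets here are invented for illustration and are not the paper's actual grammar:

```python
from itertools import product

# Toy binary grammar mapping adjacent category pairs to a parent
# category; the categories and rules are hypothetical.
GRAMMAR = {("N", "V"): "S", ("N", "N"): "N"}

def reduces_to_s(tags):
    """Greedy bottom-up reduction; True if the sequence parses to S."""
    tags = list(tags)
    changed = True
    while changed and len(tags) > 1:
        changed = False
        for i in range(len(tags) - 1):
            parent = GRAMMAR.get((tags[i], tags[i + 1]))
            if parent:
                tags[i:i + 2] = [parent]
                changed = True
                break
    return tags == ["S"]

# Earlier stages hand over *all* tagging candidates instead of
# committing early; here the second word is ambiguous between N and V.
candidates = [("N",), ("N", "V")]
viable = [seq for seq in product(*candidates) if reduces_to_s(seq)]
```

Only the candidate sequences that yield a full parse survive, so a tagging error in an earlier stage cannot force a parse failure.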

Resolving the Ambiguities of Negative Stripping Construction in English : A Direct Interpretation Approach (영어 부정 스트리핑 구문의 중의성 해소에 관한 연구: 직접 해석 접근법을 중심으로)

  • Kim, So-jee;Cho, Sae-youn
    • Cross-Cultural Studies / v.52 / pp.393-416 / 2018
  • Negative Stripping Construction in English involves the disjunction but, the adverb not, and a constituent NP. Although the construction is an incomplete sentence, it conveys a complete sentential meaning. Its interpretation can be ambiguous in that the constituent NP may be construed either as the subject or as a complement, including the object. To generate such sentences and resolve the ambiguity, we propose a construction-based analysis under the direct interpretation approach, rejecting previous deletion-based analyses. In so doing, we suggest a Negative Stripping Construction rule that accounts for the ambiguous meaning. This rule further enables us to explain the syntactic structures and readings of the construction.

Korean Syntactic Rules using Composite Labels (복합 레이블을 적용한 한국어 구문 규칙)

  • 김성용;이공주;최기선
    • Journal of KIISE:Software and Applications / v.31 no.2 / pp.235-244 / 2004
  • We propose a format of binary phrase-structure grammar with composite labels. The grammar adopts binary rules so that the dependency between two sub-trees can be represented in the label of the tree. The label of a tree is composed of two attributes, each extracted from one of its sub-trees, so that it represents the compositional information of the tree. The composite label is generated from part-of-speech tags using an automatic labeling algorithm. Since the proposed rule-description scheme is binary and uses only part-of-speech information, it can readily be used in dependency grammar and applied to other languages as well. In best-1 context-free cross validation on a tree-tagged corpus of 31,080 sentences, the labeled precision is 79.30%, which outperforms phrase-structure grammar and dependency grammar by 5% and 4%, respectively. This shows that the proposed rule-description scheme is effective for parsing Korean.
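
The composite-label idea can be sketched as below. The choice of head attribute (taken from the right sub-tree, reflecting Korean head-finality) and the Sejong-style POS tags are assumptions for illustration, not the paper's exact labeling algorithm:

```python
class Tree:
    """Binary tree whose label pairs one attribute from each sub-tree."""
    def __init__(self, left, right):
        self.left, self.right = left, right
        # Composite label: (attribute of left child, attribute of right child)
        self.label = (head_attr(left), head_attr(right))

def head_attr(node):
    # A leaf's attribute is its POS tag; for an internal node we take
    # the right sub-tree's attribute (assumed head-final).
    return node if isinstance(node, str) else head_attr(node.right)

# "NNG" (common noun) combined with ["NNG" + "JKS" (subject particle)]
t = Tree("NNG", Tree("NNG", "JKS"))
```

Because the label is derived purely from POS tags of the sub-trees, no hand-written category inventory is needed, which is what makes the scheme portable to other grammars and languages.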

Syntax Analysis of Korean Based on Clausal Segmentation using Sentence Patterns Information as a Constraint (문형을 제약 조건으로 하는 단문 분할 기반 한국어 구문분석)

  • Lee, Hyeon-Yeong;Lee, Yong-Seok
    • Annual Conference on Human and Language Technology / 2006.10e / pp.140-147 / 2006
  • Korean sentences contain one or more predicates, which gives rise to various syntactic ambiguities during parsing. Most of these stem from phrase-attachment problems caused by the modification scope of embedded clauses. Such syntactic ambiguity can be resolved by delimiting the scope of the embedded clause so that it functions as a single syntactic category. In this paper, we use sentence-pattern information and the syntactic properties of Korean to delimit embedded clauses. First, the required case slots in the sentence-pattern information of the embedded clause's predicate are attached maximally, fixing the clause's scope, and the clause is split off as a simple clause. Then, using the syntactic properties of Korean, the segmented embedded clause is converted into a single syntactic category, a nominal or adverbial phrase. Because this transforms the structure of a complex sentence into simple clauses, the phrase-attachment problem caused by embedded-clause scope is easily resolved; we call this clausal segmentation of embedded clauses. In experiments on 432 sentences, the proposed method reduced syntactic ambiguity by 87.73% compared with a method that uses neither sentence patterns nor clausal segmentation.

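
The first step, attaching a predicate's required case slots maximally and collapsing the embedded clause into one nominal node, might be sketched as follows. The sentence-pattern table, token representation, and leftward-attachment rule are all illustrative assumptions:

```python
# Hypothetical sentence-pattern table: predicate -> required case slots.
PATTERNS = {"먹다": {"NP-가", "NP-를"}}

def segment_embedded(tokens, verb_idx):
    """Extend the embedded clause leftward until the predicate's
    required case slots are covered, then collapse it into one NP node."""
    needed = set(PATTERNS[tokens[verb_idx][0]])
    start = verb_idx
    for i in range(verb_idx - 1, -1, -1):
        if tokens[i][1] in needed:
            needed.discard(tokens[i][1])
            start = i
        if not needed:
            break
    clause = tokens[start:verb_idx + 1]
    return tokens[:start] + [("NP", "CLAUSE", clause)] + tokens[verb_idx + 1:]

# (word, case-marker) pairs; the embedded clause "철수가 밥을 먹다"
# becomes a single nominal-phrase node.
tokens = [("철수", "NP-가"), ("밥", "NP-를"), ("먹다", "V")]
```

Once every embedded clause is reduced to a single NP or AdvP node in this way, the remaining structure is a simple clause and the attachment ambiguity disappears.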

Detection of Protein Subcellular Localization based on Syntactic Dependency Paths (구문 의존 경로에 기반한 단백질의 세포 내 위치 인식)

  • Kim, Mi-Young
    • The KIPS Transactions:PartB / v.15B no.4 / pp.375-382 / 2008
  • A protein's subcellular localization is considered an essential part of the description of its associated biomolecular phenomena. As the volume of biomolecular reports has increased, there has been a great deal of research on text mining to detect protein subcellular localization information in documents. It has been argued that linguistic information, especially syntactic information, is useful for identifying the subcellular localizations of proteins of interest. However, previous systems for detecting protein subcellular localization information used only shallow syntactic parsers, and showed poor performance. Thus, there remains a need to use a full syntactic parser and to apply deep linguistic knowledge to the analysis of text for protein subcellular localization information. In addition, we have attempted to use semantic information from the WordNet thesaurus. To improve performance in detecting protein subcellular localization information, this paper proposes a three-step method based on a full syntactic dependency parser and the WordNet thesaurus. In the first step, we constructed syntactic dependency paths from each protein to its location candidate, and then converted the syntactic dependency paths into dependency trees. In the second step, we retrieved root information of the syntactic dependency trees. In the final step, we extracted syn-semantic patterns of protein subtrees and location subtrees. From the root and subtree nodes, we extracted syntactic category and syntactic direction as syntactic information, and synset offset of the WordNet thesaurus as semantic information. According to the root information and syn-semantic patterns of subtrees from the training data, we extracted (protein, localization) pairs from the test sentences. Even with no biomolecular knowledge, our method showed reasonable performance in experimental results using Medline abstract data. Our proposed method gave an F-measure of 74.53% for training data and 58.90% for test data, significantly outperforming previous methods, by 12-25%.
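
The first step, extracting the dependency path between a protein mention and a location candidate, can be sketched as below. The head array and the example parse are hypothetical; the paper's parser and feature set are richer:

```python
def path_to_root(i, heads):
    """Indices from token i up to the root (head index -1)."""
    path = [i]
    while heads[path[-1]] != -1:
        path.append(heads[path[-1]])
    return path

def dependency_path(a, b, heads):
    """Token indices on the path from a to b through their lowest
    common ancestor in the dependency tree."""
    pa, pb = path_to_root(a, heads), path_to_root(b, heads)
    shared = set(pa) & set(pb)
    lca = next(n for n in pa if n in shared)
    return pa[:pa.index(lca)] + [lca] + list(reversed(pb[:pb.index(lca)]))

# Hypothetical parse of "RecA localizes-to nucleus": token 1 is the root,
# tokens 0 (protein) and 2 (location) both depend on it.
heads = [1, -1, 1]
```

The resulting index path would then be converted into a small dependency tree whose root and subtree patterns feed the later matching steps.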

Korean Probabilistic Syntactic Model using Head Co-occurrence (중심어 간의 공기정보를 이용한 한국어 확률 구문분석 모델)

  • Lee, Kong-Joo;Kim, Jae-Hoon
    • The KIPS Transactions:PartB / v.9B no.6 / pp.809-816 / 2002
  • Since natural language is inherently structurally ambiguous, one of the difficulties of parsing is resolving these structural ambiguities. Recently, probabilistic approaches to this disambiguation problem have received considerable attention because of attractions such as automatic learning, wide coverage, and robustness. In this paper, we focus on a Korean probabilistic parsing model using head co-occurrence. Because head co-occurrence is lexical, the model is prone to the data-sparseness problem, so handling this problem matters more than anything else. To alleviate it, we use a restricted, simplified phrase-structure grammar and a back-off model for smoothing. The proposed model shows an accuracy of about 84%.
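
A head-co-occurrence estimate with a crude back-off for unseen pairs might look like the sketch below; the counts are invented, and the paper's exact back-off model is not shown here:

```python
from collections import Counter

# Hypothetical head-pair counts from a training treebank.
pair_counts = Counter({("사과", "먹다"): 3, ("책", "읽다"): 2})
head_counts = Counter({"사과": 3, "책": 2, "학교": 1})

def cooc_prob(left_head, right_head, alpha=0.4):
    """P(right_head | left_head); backs off to a discounted constant
    when the lexical pair is unseen, illustrating smoothing against
    data sparseness."""
    if pair_counts[(left_head, right_head)] > 0:
        return pair_counts[(left_head, right_head)] / head_counts[left_head]
    return alpha / (sum(head_counts.values()) + 1)
```

Because head co-occurrence is lexical, most pairs are unseen in any realistic corpus, which is why the back-off branch, not the maximum-likelihood branch, dominates in practice.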

CFG based Korean Parsing Using Sentence Patterns as Syntactic Constraint (구문 제약으로 문형을 사용하는 CFG기반의 한국어 파싱)

  • Park, In-Cheol
    • Journal of the Korea Academia-Industrial cooperation Society / v.9 no.4 / pp.958-963 / 2008
  • Korean has structural properties that are governed by the semantic constraints of verbs, and most Korean sentences are complex sentences consisting of a main clause and embedded clauses. It is therefore difficult to describe an appropriate syntactic grammar or constraints for Korean, and Korean parsing produces various syntactic ambiguities. In this paper, we show how to describe a CFG-based grammar that uses sentence patterns as syntactic constraints and how to resolve syntactic ambiguities. To this end, we classified 44 sentence patterns of Korean sentences, including complex sentences with subordinate clauses, and used them to reduce syntactic ambiguity. Because sentence-pattern information alone cannot resolve every syntactic ambiguity, we also used semantic markers as semantic constraints. Semantic markers can resolve ambiguities caused by auxiliary particles or the comitative case particle.

Syntax-Directed Document Editor based XML DTD (XML DTD 기반의 구문지향 문서 작성기)

  • Kim, Young-Chul;Kim, Sung-Keun;Choi, Jong-Myung
    • The Journal of Korean Association of Computer Education / v.7 no.4 / pp.67-75 / 2004
  • XML is being accepted as a standard for next-generation web documents because it allows document structures to be extended. However, general users have difficulty writing valid, well-formed XML documents, since the documents must satisfy the grammatical constraints of XML. In this paper, we present a syntax-directed XML document editor that makes it easier for users to write valid XML documents, helping them and increasing their productivity in writing XML.

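
The core of a syntax-directed editor is consulting the DTD's content model before offering an insertion. The toy model below checks membership only; real DTD content models also constrain ordering and cardinality, and the element names are invented:

```python
# A toy DTD content model: element -> children it may contain.
DTD = {
    "book": ["title", "author", "chapter"],
    "chapter": ["title", "para"],
}

def allowed_insertions(parent):
    """The element menu a syntax-directed editor would offer at the cursor."""
    return DTD.get(parent, [])

def is_valid_child(parent, child):
    return child in allowed_insertions(parent)
```

By restricting the menu to `allowed_insertions(parent)`, the editor makes invalid documents unrepresentable rather than catching errors after the fact.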

A Study on the Identification and Classification of Relation Between Biotechnology Terms Using Semantic Parse Tree Kernel (시맨틱 구문 트리 커널을 이용한 생명공학 분야 전문용어간 관계 식별 및 분류 연구)

  • Choi, Sung-Pil;Jeong, Chang-Hoo;Chun, Hong-Woo;Cho, Hyun-Yang
    • Journal of the Korean Society for Library and Information Science / v.45 no.2 / pp.251-275 / 2011
  • In this paper, we propose a novel kernel, called a semantic parse tree kernel, that extends the parse tree kernel previously studied for extracting protein-protein interactions (PPIs), where it showed prominent results. One drawback of the existing parse tree kernel is that it can degrade overall PPI-extraction performance: its simple comparison mechanism handles only the superficial aspects of the constituent words, so the kernel function may produce lower kernel values for two sentences than the actual analogy between them warrants. The new kernel computes the lexical semantic similarity as well as the syntactic analogy between the two parse trees of target sentences. To calculate lexical semantic similarity, it incorporates context-based word-sense disambiguation that produces WordNet synsets as output, which in turn can be transformed into more general concepts. In our experiments, we introduced two new parameters, the tree-kernel decay factor and the degree of abstraction of lexical concepts, which can accelerate the optimization of PPI-extraction performance in addition to the conventional SVM regularization factor. Through these multi-strategic experiments, we confirmed the pivotal role of the newly applied parameters. The experimental results also showed that the semantic parse tree kernel is superior to conventional kernels, especially in PPI classification tasks.
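
The syntactic part of such a kernel, a Collins-Duffy style common-subtree count with a decay factor, can be sketched as below; trees are nested tuples of labels with terminals omitted, and the semantic extension would relax the exact label-equality test with WordNet synset similarity (not shown):

```python
def subtree_sim(n1, n2, lam):
    """Weighted count of common subtrees rooted at n1 and n2;
    lam is the decay factor penalizing large subtrees."""
    if n1[0] != n2[0]:
        return 0.0
    kids1, kids2 = n1[1:], n2[1:]
    if [k[0] for k in kids1] != [k[0] for k in kids2]:
        return 0.0
    if not kids1:
        return lam
    score = lam
    for k1, k2 in zip(kids1, kids2):
        score *= 1.0 + subtree_sim(k1, k2, lam)
    return score

def nodes(t):
    out = [t]
    for k in t[1:]:
        out.extend(nodes(k))
    return out

def tree_kernel(t1, t2, lam=0.5):
    """Sum of subtree similarities over all node pairs."""
    return sum(subtree_sim(a, b, lam) for a in nodes(t1) for b in nodes(t2))

t = ("S", ("NP",), ("VP",))
```

Lowering `lam` discounts deep matching structure, which is exactly the decay-factor parameter the paper tunes alongside the SVM regularization factor.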

Case Particle Restoration as Preprocessing for Syntactic Analysis (격조사 복원: 구문분석 전처리)

  • Seo, Hyeong-Won;Kwon, Hong-Seok;Kim, Jae-Hoon
    • Annual Conference on Human and Language Technology / 2012.10a / pp.3-7 / 2012
  • This paper proposes a method for restoring omitted Korean case particles as a preprocessing step for parsing. Case particles are frequently omitted when the relation between a noun and its predicate is so close that the omission does not impede communication. Such omitted particles greatly increase the complexity of parsing and are also a source of parsing errors. Analyzing a treebank, we found that when a particle is omitted, the distance between the noun and its predicate is very small; exploiting this property, we propose a machine-learning method for restoring omitted particles. In experiments on the ETRI treebank, the method correctly restored 81% of the omitted particles.

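
A minimal feature sketch for such a restoration classifier is shown below. The feature set is an assumption for illustration; only the noun-predicate distance signal is taken from the paper's corpus finding:

```python
def features(tokens, noun_idx, verb_idx):
    """Features for predicting the omitted particle of tokens[noun_idx];
    distance encodes the paper's key observation that the noun and its
    predicate are usually very close when the particle is omitted."""
    return {
        "noun": tokens[noun_idx],
        "predicate": tokens[verb_idx],
        "distance": verb_idx - noun_idx,
        "adjacent": verb_idx - noun_idx == 1,
    }

# "철수 밥 먹었다" with the particle after "밥" omitted.
f = features(["철수", "밥", "먹었다"], 1, 2)
```

These feature dictionaries would then be fed to any standard classifier to predict the restored particle before parsing begins.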