• Title/Summary/Keyword: Dependency Grammar

Korean Probabilistic Dependency Grammar Induction by morpheme (형태소 단위의 한국어 확률 의존문법 학습)

  • Choi, Seon-Hwa; Park, Hyuk-Ro
    • The KIPS Transactions: Part B / v.9B no.6 / pp.791-798 / 2002
  • In this thesis, we present a new method for inducing a probabilistic dependency grammar (PDG) from a text corpus. As words in Korean are composed of a set of more basic morphemes, various dependency relations exist within a word. If the induction process does not take these in-word dependency relations into account, the accuracy of the resulting grammar may be poor. In comparison with previous PDG induction methods, the main difference of the proposed method lies in the fact that it takes into account in-word dependency relations as well as inter-word dependency relations. To assess the performance of the proposed method, we conducted an experiment using a manually tagged corpus of 25,000 sentences compiled by the Korea Advanced Institute of Science and Technology (KAIST). The grammar induction produced 2,349 dependency rules. A parser using these dependency rules showed 69.77% accuracy in terms of the number of correct dependency relations relative to the total number of dependency relations in the best-1 parse trees of the sample sentences. The result shows that taking in-word dependency relations into account in the course of grammar induction yields a more accurate dependency grammar.
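
The counting idea described in this abstract, estimating rule probabilities from both in-word (morpheme-level) and inter-word dependency links, can be illustrated with a minimal sketch. The toy tags, links, and relative-frequency estimator below are illustrative assumptions, not the authors' actual procedure or data.

```python
from collections import Counter

# Toy "treebank": each sentence is a list of (dependent, head, relation_kind)
# triples, where relation_kind distinguishes in-word (morpheme-level) links
# from inter-word links. Tags and links are invented for illustration only.
treebank = [
    [("NNG", "JKS", "in-word"),    # NNG depends on JKS (a link inside one word)
     ("JKS", "VV", "inter-word"),  # a link between two words
     ("EF", "VV", "in-word")],
    [("NNG", "JKO", "in-word"),
     ("JKO", "VV", "inter-word"),
     ("EF", "VV", "in-word")],
]

# Relative-frequency estimate P(dependent | head), counting in-word and
# inter-word links alike.
pair_counts = Counter()
head_counts = Counter()
for sentence in treebank:
    for dependent, head, kind in sentence:
        pair_counts[(head, dependent)] += 1
        head_counts[head] += 1

pdg = {pair: count / head_counts[pair[0]] for pair, count in pair_counts.items()}
for (head, dependent), prob in sorted(pdg.items()):
    print(f"P({dependent} | {head}) = {prob:.2f}")
```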

Dependency Grammar and the Parsing of Chinese Sentences

  • Lai, Bong-Yeung-Tom; Huang, Changning
    • Proceedings of the Korean Society for Language and Information Conference / 1994.02a / pp.63-72 / 1994
  • Dependency Grammar has been used by linguists as the basis of the syntactic components of their grammar formalisms. It has also been used in natural language parsing. In China, attempts have been made to use this grammar formalism to parse Chinese sentences using corpus-based techniques. This paper reviews the properties of Dependency Grammar as embodied in four axioms for the well-formedness conditions of dependency structures. It is shown that allowing multiple governors, as done by some followers of this formalism, is unnecessary. The practice of augmenting Dependency Grammar with functional labels is discussed in the light of building functional structures when the sentence is parsed. This will also facilitate semantic interpretation.
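
Well-formedness conditions of the kind this paper reviews are commonly stated as: every word has at most one governor, there is a single root, the structure is acyclic, and arcs do not cross (projectivity). The checker below is a hedged sketch of those generic conditions only; the paper's exact four axioms may be formulated differently.

```python
def is_well_formed(heads):
    """Check common well-formedness conditions of a dependency structure.

    `heads[i]` is the index of the head of word i, or None for the root.
    The single-governor condition is built into this representation; the
    function additionally checks for a single root, acyclicity, and
    projectivity (no crossing arcs).
    """
    n = len(heads)
    roots = [i for i, h in enumerate(heads) if h is None]
    if len(roots) != 1:
        return False
    # Acyclicity: following head links from any word must terminate at the root.
    for i in range(n):
        seen, j = set(), i
        while heads[j] is not None:
            if j in seen:
                return False
            seen.add(j)
            j = heads[j]
    # Projectivity: no two dependency arcs may cross.
    arcs = [(min(i, h), max(i, h)) for i, h in enumerate(heads) if h is not None]
    for a, b in arcs:
        for c, d in arcs:
            if a < c < b < d:
                return False
    return True

print(is_well_formed([2, 2, None]))      # True: simple projective tree
print(is_well_formed([2, 3, None, 0]))   # False: arcs (0,2) and (1,3) cross
```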

Korean Parser Using Segmentation Based on Dependency Grammar (의존문법 기반의 구간 분할법을 활용한 한국어 구문 분석기)

  • Park, Yong-Uk
    • Journal of the Korea Institute of Information and Communication Engineering / v.13 no.8 / pp.1705-1712 / 2009
  • Recently, most Korean syntactic analysis systems have used Dependency Grammar, because it is well suited to the analysis of Korean language structures. However, Dependency Grammar produces many ambiguities during the syntactic analysis of Korean. We implement a system that reduces many of these ambiguities. To decrease ambiguity we use several methods: first, we use about 200 dependency rules; second, we propose a new segmentation method; and third, we impose the constraint that one predicate cannot have more than one subject or object. Using these methods, we can remove many ambiguities in Korean syntactic analysis.
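
The third ambiguity-reduction method, the constraint that one predicate cannot govern more than one subject or object, lends itself to a simple filter over candidate parses. The sketch below illustrates only that constraint; the label names and data layout are assumptions, and the segmentation method itself is not shown.

```python
from collections import Counter

def violates_single_role_constraint(arcs):
    """Return True if any predicate governs more than one subject or object.

    `arcs` is a list of (dependent_index, head_index, label) triples; the
    label set used here ('SBJ', 'OBJ') is an illustrative assumption.
    """
    role_counts = Counter()
    for dependent, head, label in arcs:
        if label in ("SBJ", "OBJ"):
            role_counts[(head, label)] += 1
    return any(count > 1 for count in role_counts.values())

# A candidate parse in which word 3 (a predicate) governs two subjects
# is filtered out, reducing the ambiguity the abstract mentions.
candidate = [(0, 3, "SBJ"), (1, 3, "SBJ"), (2, 3, "OBJ")]
print(violates_single_role_constraint(candidate))  # True -> discard this parse
```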

Lexical Ambiguity Resolution System of Korean Language using Dependency Grammar and Collative Semantics (의존 문법과 대조 의미론을 이용한 한국어의 어휘적 중의성 해결 시스템)

  • 윤근수; 권혁철
    • Korean Journal of Cognitive Science / v.3 no.1 / pp.1-24 / 1991
  • This paper presents a lexical ambiguity resolution system for Korean. The system uses Dependency Grammar and Collative Semantics. Dependency Grammar is used to analyze Korean syntactic dependency; a robust way to analyze a sentence is to establish links between individual words. Collative Semantics investigates the interplay between lexical ambiguity and semantic relations, and consists of sense-frames, semantic vectors, collation, and screening. Our system was implemented in the C programming language. It analyzes sentences, discriminates the kinds of semantic relations between pairs of word senses in those sentences, and resolves lexical ambiguity.
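
As a rough illustration of how collation and screening can resolve lexical ambiguity, the sketch below scores sense pairs by the overlap of toy sense-frames and keeps the best-scoring pair. The sense inventory and overlap-based scoring are invented for illustration; the paper's sense-frames, semantic vectors, and collation procedure are considerably richer.

```python
from itertools import product

# Toy sense inventory: each word maps to candidate senses, each sense to a
# small feature set standing in for a sense-frame. All entries are invented.
senses = {
    "bank": {"bank/finance": {"institution", "money", "transaction"},
             "bank/river":   {"land", "water"}},
    "deposit": {"deposit/money":    {"money", "transaction"},
                "deposit/sediment": {"land", "geology"}},
}

def collate(frame_a, frame_b):
    """Score the semantic compatibility of two sense-frames by feature overlap."""
    return len(frame_a & frame_b)

def screen(word_a, word_b):
    """Pick the sense pair with the best collation score (screening step)."""
    pairs = product(senses[word_a].items(), senses[word_b].items())
    return max(pairs, key=lambda p: collate(p[0][1], p[1][1]))

(best_a, _), (best_b, _) = screen("bank", "deposit")
print(best_a, "+", best_b)   # bank/finance + deposit/money
```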

Automatic Acquisition of Lexical-Functional Grammar Resources from a Japanese Dependency Corpus

  • Oya, Masanori; Genabith, Josef Van
    • Proceedings of the Korean Society for Language and Information Conference / 2007.11a / pp.375-384 / 2007
  • This paper describes a method for the automatic acquisition of wide-coverage, treebank-based deep linguistic resources for Japanese, as part of a project on treebank-based induction of multilingual resources in the framework of Lexical-Functional Grammar (LFG). We automatically annotate LFG f-structure functional equations (i.e. labelled dependencies) onto the Kyoto Text Corpus version 4.0 (KTC4) (Kurohashi and Nagao 1997) and onto the output of the Kurohashi-Nagao Parser (KNP) (Kurohashi and Nagao 1998), a dependency parser for Japanese. The original KTC4 and KNP provide unlabelled dependencies. Our method also includes zero-pronoun identification. The performance of the f-structure annotation algorithm with zero-pronoun identification for KTC4 is evaluated against a manually corrected gold standard of 500 sentences randomly chosen from KTC4 and achieves a pred-only dependency f-score of 94.72%. The parsing experiments on KNP output yield a pred-only dependency f-score of 82.08%.
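
The pred-only dependency f-score used for evaluation compares the labelled dependency triples produced by the annotation algorithm against a gold standard. A minimal sketch of that metric is given below; the triple format shown is an assumption, not the exact representation used in the paper.

```python
def dependency_fscore(gold_triples, system_triples):
    """Precision, recall, and f-score over sets of dependency triples.

    Each triple might look like ('SUBJ', 'taberu', 'neko'); restricting the
    comparison to PRED-bearing triples gives the 'pred-only' variant.
    """
    gold, system = set(gold_triples), set(system_triples)
    correct = len(gold & system)
    precision = correct / len(system) if system else 0.0
    recall = correct / len(gold) if gold else 0.0
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f

gold = {("SUBJ", "taberu", "neko"), ("OBJ", "taberu", "sakana")}
system = {("SUBJ", "taberu", "neko"), ("OBJ", "taberu", "gohan")}
print(dependency_fscore(gold, system))  # (0.5, 0.5, 0.5)
```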

Korean Syntactic Rules using Composite Labels (복합 레이블을 적용한 한국어 구문 규칙)

  • 김성용; 이공주; 최기선
    • Journal of KIISE: Software and Applications / v.31 no.2 / pp.235-244 / 2004
  • We propose a format for a binary phrase-structure grammar with composite labels. The grammar adopts binary rules so that the dependency between two sub-trees can be represented in the label of the tree. The label of a tree is composed of two attributes, each extracted from one of its sub-trees, so that it represents the compositional information of the tree. The composite label is generated from part-of-speech tags using an automatic labeling algorithm. Since the proposed rule-description scheme is binary and uses only part-of-speech information, it can readily be used in dependency grammar and applied to other languages as well. In best-1 context-free cross-validation on a tree-tagged corpus of 31,080 sentences, the labeled precision is 79.30%, which outperforms phrase-structure grammar and dependency grammar by 5% and 4%, respectively. This shows that the proposed rule-description scheme is effective for parsing Korean.
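
A hedged sketch of the composite-label idea: each binary rule's label is assembled from one attribute of each sub-tree. Taking the attribute to be the sub-tree's last part-of-speech tag is an assumption made here for illustration; the paper's automatic labeling algorithm may extract different attributes.

```python
def composite_label(left_label, right_label):
    """Build a binary-rule label from attributes of the two sub-trees.

    The attribute taken from each sub-tree is simply its last part-of-speech
    tag; this choice is illustrative only.
    """
    left_attr = left_label.split("+")[-1]
    right_attr = right_label.split("+")[-1]
    return f"{left_attr}+{right_attr}"

# Bottom-up combination of three leaves tagged NNG, JKS, VV:
np_label = composite_label("NNG", "JKS")   # 'NNG+JKS'
s_label = composite_label(np_label, "VV")  # 'JKS+VV'
print(np_label, s_label)
```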

A Clustering Method using Dependency Structure and Part-Of-Speech(POS) for Japanese-English Statistical Machine Translation (일영 통계기계번역에서 의존문법 문장 구조와 품사 정보를 사용한 클러스터링 기법)

  • Kim, Han-Kyong; Na, Hwi-Dong; Lee, Jin-Ji; Lee, Jong-Hyeok
    • Journal of KIISE: Computing Practices and Letters / v.15 no.12 / pp.993-997 / 2009
  • Clustering is a well-known method that can be used in statistical machine translation. In this paper we propose a corpus clustering method that uses the syntactic structure and POS information of a dependency grammar, and we use the resulting cluster-based language model as an additional feature in a phrase-based statistical machine translation system to improve translation quality.
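
One simple way to realize the clustering step described above is to group sentences by a signature built from their dependency arcs over POS tags and then train a separate language model per cluster. The sketch below shows only the grouping; the signature definition and toy arcs are assumptions, not the authors' method.

```python
from collections import defaultdict

def dependency_signature(arcs):
    """Signature of a sentence: the sorted multiset of (head POS, dependent POS) arcs."""
    return tuple(sorted(arcs))

# Toy corpus: each sentence is represented by its dependency arcs over POS tags.
corpus = {
    "sent1": [("VV", "NNG"), ("VV", "JKS")],
    "sent2": [("VV", "NNG"), ("VV", "JKS")],
    "sent3": [("VV", "NNG"), ("NNG", "MM")],
}

clusters = defaultdict(list)
for sentence_id, arcs in corpus.items():
    clusters[dependency_signature(arcs)].append(sentence_id)

# Each cluster would then get its own language model, used as an extra
# feature by the phrase-based SMT decoder.
for signature, members in clusters.items():
    print(signature, members)
```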

A Design & Implementation of Korean Parser using Subcategorization: I (하위범주화에 의한 한국어 파서의 설계와 구현 : I)

  • Lee, Ho Suk
    • Annual Conference on Human and Language Technology / 2008.10a / pp.1-4 / 2008
  • We present and discuss a Korean language parser based on dependency grammar, subcategorization, and the analysis of postfixes such as josa and omi. We employ an extended form of BNF (Backus-Naur Form) to define the dependency grammar and the form of subcategorization. We present the conceptual form of the Korean language parser in a C-program style. We discuss the structure of the Korean parser currently implemented and show execution results.
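
A minimal sketch of how subcategorization can constrain attachment in such a parser: each predicate's frame lists the josa (case particles) it licenses, and a noun phrase is attached only if its josa fills a slot in the frame. The frames and romanized forms below are invented for illustration and are not taken from the paper.

```python
# Toy subcategorization frames: the case particles (josa) each verb licenses.
subcat_frames = {
    "meokda": {"i/ga": "subject", "eul/reul": "object"},   # 'to eat'
    "gada":   {"i/ga": "subject", "e": "goal"},            # 'to go'
}

def attach(noun_phrase_josa, predicate):
    """Attach a noun phrase to a predicate only if its josa fills a frame slot."""
    frame = subcat_frames.get(predicate, {})
    return frame.get(noun_phrase_josa)  # role name, or None if not licensed

print(attach("eul/reul", "meokda"))  # 'object'
print(attach("eul/reul", "gada"))    # None -> attachment rejected
```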

DG-based SPO tuple recognition using self-attention M-Bi-LSTM

  • Jung, Joon-young
    • ETRI Journal / v.44 no.3 / pp.438-449 / 2022
  • This study proposes a dependency grammar-based self-attention multilayered bidirectional long short-term memory (DG-M-Bi-LSTM) model for subject-predicate-object (SPO) tuple recognition from natural language (NL) sentences. To add recent knowledge to the knowledge base autonomously, it is essential to extract knowledge from numerous NL data. Therefore, this study proposes a high-accuracy SPO tuple recognition model that requires a small amount of learning data to extract knowledge from NL sentences. The accuracy of SPO tuple recognition using DG-M-Bi-LSTM is compared with that using NL-based self-attention multilayered bidirectional LSTM, DG-based bidirectional encoder representations from transformers (BERT), and NL-based BERT to evaluate its effectiveness. The DG-M-Bi-LSTM model achieves the best results in terms of recognition accuracy for extracting SPO tuples from NL sentences even if it has fewer deep neural network (DNN) parameters than BERT. In particular, its accuracy is better than that of BERT when the learning data are limited. Additionally, its pretrained DNN parameters can be applied to other domains because it learns the structural relations in NL sentences.
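
For orientation only, the sketch below shows the general architecture family the model belongs to: a multilayered bidirectional LSTM with self-attention feeding a per-token classifier (e.g., BIO tags for subject, predicate, and object spans). Layer sizes, the feature encoding of the dependency-grammar input, and the tag set are assumptions; this is not the paper's DG-M-Bi-LSTM, and it assumes PyTorch is available.

```python
import torch
import torch.nn as nn

class DGBiLSTMTagger(nn.Module):
    """Minimal Bi-LSTM tagger with self-attention over token features.

    Only a sketch of the architecture family (multilayered Bi-LSTM plus
    self-attention); hyperparameters and feature encoding are assumptions.
    """

    def __init__(self, vocab_size, embed_dim=32, hidden_dim=64, num_tags=7):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers=2,
                            bidirectional=True, batch_first=True)
        self.attention = nn.MultiheadAttention(2 * hidden_dim, num_heads=2,
                                               batch_first=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_tags)  # BIO tags for S/P/O

    def forward(self, token_ids):
        x = self.embed(token_ids)
        h, _ = self.lstm(x)
        attended, _ = self.attention(h, h, h)   # self-attention over LSTM states
        return self.classifier(attended)        # per-token tag scores

model = DGBiLSTMTagger(vocab_size=1000)
scores = model(torch.randint(0, 1000, (1, 12)))  # batch of one 12-token sentence
print(scores.shape)  # torch.Size([1, 12, 7])
```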

An unsupervised learning of dependency grammar Using inside-outside probability (내부 및 외부 확률을 이용한 의존문법의 비통제 학습)

  • 장두성; 최기선
    • Proceedings of the Korean Society for Cognitive Science Conference / 2000.06a / pp.133-137 / 2000
  • A representative unsupervised learning method for training the probabilities of grammar rules from a corpus without syntactic tags is the inside-outside algorithm, which takes as input a context-free grammar (CFG) in Chomsky Normal Form (CNF). In this study, we discuss a technique for converting a dependency grammar into CNF and an inside-outside algorithm modified for dependency grammar. We also present the results of actual training with this algorithm, and the results of applying it to a hybrid parser that uses both dependency rules and phrase-structure probabilities.
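
The inside part of the inside-outside algorithm for a CNF grammar, which the paper adapts to dependency grammar, can be sketched as a dynamic program over spans. The toy lexical and binary rules below are stand-ins, not the induced dependency grammar.

```python
from collections import defaultdict

def inside_probabilities(words, lexical_rules, binary_rules):
    """Inside probabilities for a PCFG in Chomsky Normal Form.

    `lexical_rules[(A, w)]` is P(A -> w) and `binary_rules[(A, B, C)]` is
    P(A -> B C). The grammar used below is a toy example.
    """
    n = len(words)
    inside = defaultdict(float)  # inside[(i, j, A)] = P(A =>* words[i:j+1])
    for i, w in enumerate(words):
        for (A, word), p in lexical_rules.items():
            if word == w:
                inside[(i, i, A)] += p
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):
                for (A, B, C), p in binary_rules.items():
                    inside[(i, j, A)] += p * inside[(i, k, B)] * inside[(k + 1, j, C)]
    return inside

lexical = {("N", "dog"): 1.0, ("V", "barks"): 1.0}
binary = {("S", "N", "V"): 1.0}
print(inside_probabilities(["dog", "barks"], lexical, binary)[(0, 1, "S")])  # 1.0
```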
