• Title/Summary/Keyword: Dependency-based parsing

Automatic Acquisition of Lexical-Functional Grammar Resources from a Japanese Dependency Corpus

  • Oya, Masanori;Genabith, Josef Van
    • Proceedings of the Korean Society for Language and Information Conference
    • /
    • 2007.11a
    • /
    • pp.375-384
    • /
    • 2007
• This paper describes a method for the automatic acquisition of wide-coverage, treebank-based deep linguistic resources for Japanese, as part of a project on treebank-based induction of multilingual resources in the framework of Lexical-Functional Grammar (LFG). We automatically annotate LFG f-structure functional equations (i.e. labelled dependencies) onto the Kyoto Text Corpus version 4.0 (KTC4) (Kurohashi and Nagao 1997) and the output of the Kurohashi-Nagao Parser (KNP) (Kurohashi and Nagao 1998), a dependency parser for Japanese. The original KTC4 and KNP provide unlabelled dependencies. Our method also includes zero-pronoun identification. The performance of the f-structure annotation algorithm with zero-pronoun identification for KTC4 is evaluated against a manually corrected gold standard of 500 sentences randomly chosen from KTC4, resulting in a pred-only dependency f-score of 94.72%. Parsing experiments on KNP output yield a pred-only dependency f-score of 82.08%.
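The core step the abstract describes, turning unlabelled head-dependent pairs into labelled f-structure equations, can be pictured with a small sketch. The particle-to-function table and the toy sentence below are illustrative assumptions of mine, not the paper's actual annotation algorithm, which additionally handles zero pronouns.

```python
# Hypothetical mapping from Japanese case particles to LFG grammatical
# functions; the paper's annotation algorithm is far more detailed.
PARTICLE_TO_GF = {"が": "SUBJ", "を": "OBJ", "に": "OBL"}

def annotate(unlabelled_deps):
    """Turn (head, dependent, particle) triples into LFG-style functional
    equations such as (f_head SUBJ) = f_dep."""
    return [f"({head} {PARTICLE_TO_GF.get(p, 'ADJUNCT')}) = {dep}"
            for head, dep, p in unlabelled_deps]

# Toy unlabelled dependencies for "太郎が本を読む" ("Taro reads a book").
deps = [("読む", "太郎", "が"), ("読む", "本", "を")]
print(annotate(deps))  # ['(読む SUBJ) = 太郎', '(読む OBJ) = 本']
```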

Using Syntax and Shallow Semantic Analysis for Vietnamese Question Generation

  • Phuoc Tran;Duy Khanh Nguyen;Tram Tran;Bay Vo
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.10
    • /
    • pp.2718-2731
    • /
    • 2023
• This paper presents a method of using syntax and shallow semantic analysis for Vietnamese question generation (QG). Specifically, our proposed technique investigates both the syntactic and the shallow semantic structure of each sentence. The main goal of our method is to generate questions from a single sentence; these generated questions are factoid questions, which require short, fact-based answers. In general, syntax-based analysis is one of the most popular approaches in the QG field, but it requires linguistic expert knowledge as well as a deep understanding of the syntax rules of Vietnamese. It is thus considered a high-cost and inefficient solution, since producing high-quality syntax rules demands significant human effort. To deal with this problem, we collected the syntax rules of Vietnamese from a Vietnamese language textbook. Moreover, we used several natural language processing (NLP) techniques to analyze Vietnamese shallow syntax and semantics for the QG task: sentence segmentation, word segmentation, part-of-speech tagging, chunking, dependency parsing, and named entity recognition. We used human evaluation to assess the credibility of our model: we manually generated questions from the corpus and then compared them with the automatically generated questions. The empirical evidence demonstrates that our proposed technique performs strongly, with the generated questions being very similar to those created by humans.
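As a rough illustration of the syntax-plus-shallow-semantics idea, the sketch below generates factoid questions by substituting a question word for a named-entity chunk. The entity-to-question-word table and the example sentence are invented for illustration; the paper's rules come from a Vietnamese grammar textbook and a fuller NLP pipeline.

```python
# Hypothetical mapping from entity type to a Vietnamese question word.
WH_BY_ENTITY = {"PER": "Ai", "LOC": "đâu"}

def generate_questions(chunks):
    """chunks: list of (text, entity_label_or_None) in sentence order.
    Each entity chunk yields one question with that chunk replaced."""
    questions = []
    for i, (_, label) in enumerate(chunks):
        if label in WH_BY_ENTITY:
            parts = [WH_BY_ENTITY[label] if j == i else t
                     for j, (t, _) in enumerate(chunks)]
            questions.append(" ".join(parts) + "?")
    return questions

# "Nam sống ở Hà Nội" ("Nam lives in Hanoi"), pre-chunked and NE-tagged.
chunks = [("Nam", "PER"), ("sống", None), ("ở", None), ("Hà Nội", "LOC")]
for q in generate_questions(chunks):
    print(q)  # "Ai sống ở Hà Nội?" and "Nam sống ở đâu?"
```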

Improving Parsing Efficiency Using Chunking in Chinese-Korean Machine Translation (중한번역에서 구 묶음을 이용한 파싱 효율 개선)

  • 양재형;심광섭
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.8
    • /
    • pp.1083-1091
    • /
    • 2004
• This paper presents a chunking system employed as a preprocessing module for the parser in a Chinese-to-Korean machine translation system. The parser can benefit from the dependency information provided by the chunking module. The chunking system was implemented using a transformation-based learning technique, and an effective interface that conveys the dependency information to the parser was also devised. The module was integrated into the machine translation system, and experiments were performed on corpora collected from Chinese websites. The experimental results show that introducing the chunking module noticeably improves the parser's performance.
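One way to picture how chunk output helps the parser, under assumptions of my own (the paper's actual interface for conveying dependency information differs in detail): attachments inside a chunk are fixed in advance, so the full parser only has to relate chunk heads.

```python
def chunk_dependencies(tokens, chunks):
    """chunks: list of (start, end, head_index) spans over token positions.
    Returns intra-chunk arcs fixed before parsing and the chunk heads
    that remain for full syntactic analysis."""
    fixed_arcs, chunk_heads = [], []
    for start, end, head in chunks:
        chunk_heads.append(head)
        for i in range(start, end):
            if i != head:
                fixed_arcs.append((head, i))  # (head, dependent)
    return fixed_arcs, chunk_heads

tokens = ["the", "old", "man", "reads", "a", "book"]
chunks = [(0, 3, 2), (3, 4, 3), (4, 6, 5)]  # NP, V, NP with head indices
arcs, heads = chunk_dependencies(tokens, chunks)
print(arcs)   # [(2, 0), (2, 1), (5, 4)]: decided by the chunker alone
print(heads)  # [2, 3, 5]: only these need the full parser
```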

Determining the Dependency among Clauses based on SVM (SVM을 이용한 절-절 간의 의존관계 설정)

  • Kim, Mi-Young
    • The KIPS Transactions:PartB
    • /
    • v.14B no.2
    • /
    • pp.141-144
    • /
    • 2007
• The longer the input sentence, the worse the syntactic parsing results. Therefore, a long sentence is first divided into several clauses, and syntactic analysis is performed on each clause. Finally, all the analysis results are merged into one. In the merging process, it is difficult to determine the dependency among clauses. To handle such syntactic ambiguity among clauses, this paper proposes an SVM-based clause-dependency determination method. We extract various features from clauses and analyze the effect of each feature on performance. We also compare the performance of the proposed method with those of previous methods.
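A minimal sketch of the formulation, assuming invented features: each candidate clause pair becomes a feature vector classified as dependent or not. The paper's actual feature set, whose individual effects it analyzes, is not reproduced here.

```python
# Sketch only: the features and training pairs are placeholders, not the
# paper's data.
from sklearn.svm import SVC

# Toy features per clause pair: [distance between clauses, same-subject
# flag, connective-ending flag]; label 1 = first clause depends on second.
X = [[1, 1, 1], [3, 0, 0], [1, 0, 1], [2, 1, 0]]
y = [1, 0, 1, 0]

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([[1, 1, 0]]))  # dependency decision for a new clause pair
```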

A Token-Based Transfer-Driven Korean-Japanese Machine Translation for Translating Spoken Sentences (대화체 문장 번역을 위한 토큰기반 변환중심 한일 기계번역)

  • 양승원
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.4 no.4
    • /
    • pp.40-46
    • /
    • 1999
• This paper introduces a Korean-Japanese machine translation system that serves as a module in a spoken-language interpreting system. It is implemented based on TDMT (Transfer-Driven Machine Translation). We define a new unit of translation, called a TOKEN. The token-based translation method resolves non-structural features in Korean sentences and increases the quality of translation results. Our system avoids the wasted effort of traditional full parsing by performing semi-parsing. The semi-parser builds a dependency tree containing the minimum information needed by the generation module. We constructed the generation dictionaries using a corpus obtained from the ETRI spoken-language database. Our system was tested with 600 utterances collected from the travel-planning domain. The success ratio of our system is 87% in a restricted testing environment and 71% in an unrestricted testing environment.
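A toy sketch of the token-based transfer idea: the utterance is segmented into translation tokens, each looked up in a transfer dictionary, with order preserved thanks to the shared head-final word order of Korean and Japanese. The dictionary entries are invented; the real system additionally builds the minimal dependency tree used for generation.

```python
# Hypothetical Korean-to-Japanese transfer entries at the token level.
TRANSFER_DICT = {"호텔을": "ホテルを", "예약해": "予約して", "주세요": "ください"}

def transfer(tokens):
    """Map each Korean token to its Japanese counterpart; unknown tokens
    pass through unchanged."""
    return [TRANSFER_DICT.get(t, t) for t in tokens]

# "호텔을 예약해 주세요" ("Please reserve a hotel").
print(" ".join(transfer(["호텔을", "예약해", "주세요"])))  # ホテルを 予約して ください
```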

A Multi-task Self-attention Model Using Pre-trained Language Models on Universal Dependency Annotations

  • Kim, Euhee
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.11
    • /
    • pp.39-46
    • /
    • 2022
• In this paper, we propose a multi-task model that can simultaneously perform general-purpose tasks such as part-of-speech tagging, lemmatization, and dependency parsing on the UD Korean Kaist v2.3 corpus. The proposed model applies the self-attention technique of the BERT model together with graph-based biaffine attention, fine-tuning multilingual BERT and two Korean-specific BERTs, KR-BERT and KoBERT. The performance of the proposed model is compared and analyzed across the multilingual version of BERT and the two Korean-specific BERT language models.
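The graph-based biaffine attention the abstract mentions scores every head-dependent pair with a bilinear form over two MLP views of the encoder states. Below is a compact sketch with random tensors standing in for BERT outputs; the dimensions are arbitrary choices of mine, not the model's configuration.

```python
import torch

n, d, a = 5, 8, 4  # tokens, encoder dim, arc-MLP dim
h = torch.randn(n, d)                 # stand-in for BERT hidden states
mlp_head = torch.nn.Linear(d, a)      # "as head" representation
mlp_dep = torch.nn.Linear(d, a)       # "as dependent" representation
U = torch.nn.Parameter(torch.randn(a, a))
b = torch.nn.Linear(a, 1, bias=False)

H, D = mlp_head(h), mlp_dep(h)
# scores[i, j]: how plausible token j is as the head of token i
scores = D @ U @ H.T + b(H).T.expand(n, n)
heads = scores.argmax(dim=1)          # greedy head choice per token
print(heads)
```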

A Distance Approach for Open Information Extraction Based on Word Vector

  • Liu, Peiqian;Wang, Xiaojie
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.6
    • /
    • pp.2470-2491
    • /
    • 2018
• Web-scale open information extraction (Open IE) plays an important role in NLP tasks such as acquiring common-sense knowledge, learning selectional preferences, and automatic text understanding. A large number of Open IE approaches have been proposed in the last decade, and the majority are based on supervised learning or dependency parsing. In this paper, we present a novel method for web-scale open information extraction that employs cosine distance over Google word vectors as the confidence score of an extraction. The proposed method is a purely unsupervised learning algorithm that requires no hand-labeled training data or dependency-parse features. We also present a mathematically rigorous proof for the new method using Bayesian inference and artificial neural network theory. It turns out that the proposed algorithm is equivalent to maximum likelihood estimation of the joint probability distribution over the elements of the candidate extraction. The proof itself also suggests a typical usage of word vectors for other NLP tasks. Experiments show that the distance-based method yields further improvements over recently presented Open IE systems on three benchmark datasets, in terms of both effectiveness and efficiency.
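The scoring idea can be sketched as follows: the confidence of a candidate (arg1, rel, arg2) extraction is derived from cosine similarities among its element vectors. The tiny hand-made vectors and the averaging scheme below are stand-ins of mine for the Google News word2vec vectors and the paper's exact formula.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical embeddings for the words of one candidate extraction.
vec = {
    "einstein": np.array([0.9, 0.1, 0.3]),
    "born_in":  np.array([0.7, 0.2, 0.4]),
    "ulm":      np.array([0.8, 0.1, 0.5]),
}

def confidence(arg1, rel, arg2):
    """Average pairwise cosine similarity over the triple's elements."""
    pairs = [(arg1, rel), (rel, arg2), (arg1, arg2)]
    return sum(cosine(vec[a], vec[b]) for a, b in pairs) / len(pairs)

print(round(confidence("einstein", "born_in", "ulm"), 3))
```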

Syntactic Analysis based on Subject-Clause Segmentation (S-절 분할을 통한 구문 분석)

  • Kim Mi-Young;Lee Jong-Hyeok
    • Journal of KIISE:Software and Applications
    • /
    • v.32 no.9
    • /
    • pp.936-947
    • /
    • 2005
• In dependency parsing of long sentences with fewer subjects than predicates, it is difficult to recognize which predicate governs which subject. To handle such syntactic ambiguity between subjects and predicates, this paper proposes an 'S-clause' segmentation method, where an S(ubject)-clause is defined as a group of words containing several predicates and their common subject. We propose an automatic S-clause segmentation method using decision trees. The S-clause information proved very effective in analyzing long sentences, improving parsing performance by 5%. In addition, performance in detecting the governors of subjects improved by 32%.
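A sketch of the decision-tree formulation, under invented features: each word boundary is a candidate S-clause break, described by a feature vector and classified as boundary or not. The paper's Korean-specific cues (e.g., subject markers and predicate endings) are only gestured at here.

```python
# Sketch only: features and data are placeholders, not the paper's corpus.
from sklearn.tree import DecisionTreeClassifier

# Toy features per boundary: [follows a predicate ending, next word carries
# a subject marker, distance from the last subject]; label 1 = S-clause break.
X = [[1, 1, 4], [0, 0, 1], [1, 0, 2], [0, 1, 3]]
y = [1, 0, 0, 1]

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(tree.predict([[1, 1, 5]]))  # classify a new candidate boundary
```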

CR-M-SpanBERT: Multiple embedding-based DNN coreference resolution using self-attention SpanBERT

  • Joon-young Jung
    • ETRI Journal
    • /
    • v.46 no.1
    • /
    • pp.35-47
    • /
    • 2024
• This study introduces CR-M-SpanBERT, a coreference resolution (CR) model that utilizes multiple embedding-based span bidirectional encoder representations from transformers for antecedent recognition in natural language (NL) text. Information extraction studies aim to extract knowledge from NL text autonomously and cost-effectively. However, the extracted information may not represent knowledge accurately owing to the presence of ambiguous entities. Therefore, we propose a CR model that identifies mentions referring to the same entity in NL text. CR requires understanding both the syntax and the semantics of the NL text simultaneously. Therefore, multiple embeddings are generated for CR, which can include syntactic and semantic information for each word. We evaluate the effectiveness of CR-M-SpanBERT by comparing it to a model that uses SpanBERT as the language model in CR studies. The results demonstrate that our proposed deep neural network model achieves high recognition accuracy when extracting antecedents from NL text. Additionally, it requires fewer epochs to reach an average F1 accuracy above 75% than the conventional SpanBERT approach.
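As I read the abstract, the "multiple embedding" idea amounts to enriching each word's contextual vector with extra syntactic and semantic embeddings before span scoring. The sketch below uses POS-tag and entity-type tables with arbitrary sizes; it is an interpretation, not the model's actual architecture.

```python
import torch

d_ctx, d_pos, d_ner = 16, 4, 4
pos_emb = torch.nn.Embedding(10, d_pos)   # POS-tag embedding table
ner_emb = torch.nn.Embedding(5, d_ner)    # entity-type embedding table

def word_repr(ctx_vec, pos_id, ner_id):
    """Concatenate contextual, syntactic, and semantic embeddings."""
    return torch.cat([ctx_vec,
                      pos_emb(torch.tensor(pos_id)),
                      ner_emb(torch.tensor(ner_id))])

w = word_repr(torch.randn(d_ctx), pos_id=3, ner_id=1)
print(w.shape)  # torch.Size([24]): enriched word vector for span scoring
```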

A Transition based Joint Model for Korean POS Tagging & Dependency Parsing using Deep Learning (딥러닝을 이용한 전이 기반 한국어 품사 태깅 & 의존 파싱 통합 모델)

  • Min, Jin-Woo;Na, Seung-Hoon;Sin, Jong-Hoon
    • Annual Conference on Human and Language Technology
    • /
    • 2017.10a
    • /
    • pp.97-102
    • /
    • 2017
• Morphological analysis and dependency parsing play key roles in natural language processing. This has created a need for an integrated model that learns morphological analysis and dependency parsing jointly, and many studies have addressed it. Existing integrated models for morphological analysis and dependency parsing followed a pipeline approach: morphological analysis and part-of-speech tagging were trained first, and the dependency parsing model was trained afterwards. Because training runs twice in succession, this approach is slow, and morphological analysis and parsing cannot influence each other. In this paper, we extend the transition actions of dependency parsing to include transition actions for morphological analysis, yielding an integrated model for Korean morphological analysis and dependency parsing. The measured performance is an F1 of 97.63% on the Sejong morphological analysis dataset and a UAS of 90.48% and LAS of 88.87% on the SPMRL '14 Korean dependency parsing dataset, further improving on existing dependency parsing performance.
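The paper's contribution, extending the parser's transition set so one action sequence does both tagging and parsing, can be pictured with a drastically simplified transition system. The action inventory, the hand-written oracle sequence, and the coarse tags below are simplifications of mine, not the paper's actual transition set.

```python
def run(words, actions):
    """Execute a toy joint transition sequence: SHIFT carries a POS tag,
    LEFT_ARC attaches the stack top to the next buffer word as its head."""
    stack, buffer, arcs, tags = [], list(range(len(words))), [], {}
    for act in actions:
        if act[0] == "SHIFT":        # ("SHIFT", tag): tag the word, push it
            i = buffer.pop(0)
            tags[i] = act[1]
            stack.append(i)
        elif act[0] == "LEFT_ARC":   # next buffer word heads the stack top
            dep = stack.pop()
            arcs.append((buffer[0], dep))
    return tags, arcs

words = ["나는", "학교에", "간다"]   # "I go to school"
actions = [("SHIFT", "N"), ("SHIFT", "N"),
           ("LEFT_ARC",), ("LEFT_ARC",), ("SHIFT", "V")]
print(run(words, actions))
# ({0: 'N', 1: 'N', 2: 'V'}, [(2, 1), (2, 0)]): the verb heads both nouns
```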