• Title/Summary/Keyword: Universal Dependency Parsing (보편 의존 구문 분석)


A Study on Universal Dependency Annotation for Korean (한국어 보편 의존 구문 분석(Universal Dependencies) 방법론 연구)

  • 이찬영;오태환;김한샘
    • Language Facts and Perspectives / v.47 / pp.141-175 / 2019
  • The purpose of this paper is to establish a methodology for parsing Korean under the 'Universal Dependencies' (UD) framework. Universal Dependencies is an annotation scheme that can be applied to the formal patterns and syntactic relations of typologically diverse languages. Through the 'CoNLL shared task', the UD project provides a framework for consistently annotated multilingual treebank corpora and is currently applied to more than 50 languages. This trend strongly influences research not only on the UD project itself but also on syntactic annotation for individual languages, and efforts are being made to adapt the UD dependency annotation scheme to the characteristics of each language. With compatibility in mind, labels that cannot be applied because of the nature of Korean need to be excluded, and several DEPREL labels prove difficult to apply to Korean when their specific definitions are examined. In this paper, the overall DEPREL annotation system of UD was translated, and a representative category of elements requiring transformation was selected to discuss linguistic convergence and its application in practice.
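The abstract refers to UD's DEPREL relation labels and their applicability to Korean. As a minimal sketch of what such an annotation looks like, the snippet below prints a tiny CoNLL-U fragment and reads off its DEPREL column; the example sentence, its eojeol-level segmentation, and the chosen labels are illustrative assumptions, not examples from the paper.

```python
# Minimal, hypothetical sketch of UD-style annotation in CoNLL-U format.
# The Korean sentence, its segmentation, and the labels (nsubj, obj, root)
# are illustrative assumptions, not examples taken from the paper.
CONLLU_SAMPLE = """\
# text = 나는 밥을 먹었다
1\t나는\t나\tPRON\t_\t_\t3\tnsubj\t_\t_
2\t밥을\t밥\tNOUN\t_\t_\t3\tobj\t_\t_
3\t먹었다\t먹다\tVERB\t_\t_\t0\troot\t_\t_
"""

def deprel_pairs(conllu: str):
    """Yield (form, head index, DEPREL label) triples from a CoNLL-U block."""
    for line in conllu.splitlines():
        if not line or line.startswith("#"):
            continue
        cols = line.split("\t")  # ID, FORM, LEMMA, UPOS, XPOS, FEATS,
        yield cols[1], int(cols[6]), cols[7]  # HEAD, DEPREL, DEPS, MISC

for form, head, deprel in deprel_pairs(CONLLU_SAMPLE):
    print(f"{form}\t-> head {head}\t{deprel}")
```

In existing Korean UD treebanks, tokens are generally eojeol (space-delimited) units with particles attached, which is part of why some DEPREL labels map awkwardly onto Korean.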

A Study of Disfluency Processing for Dependency Parsing of Spoken Korean (구어 의존 구문 분석을 위한 비유창성 처리 연구)

  • Park, Seokwon;Choe, Hyonsu;Han, Jiyoon;Oh, Taehwan;Ahn, Euijeong;Kim, Hansaem
    • Annual Conference on Human and Language Technology / 2019.10a / pp.144-148 / 2019
  • Disfluency refers broadly to the phenomenon of speech that lacks the orderly structure of written language. It occurs pervasively in spoken language and is a factor that raises the difficulty of dependency parsing for speech. In this study, disfluency elements are classified into four types: discourse markers, repair expressions, repeated expressions, and inserted expressions. We also propose how to syntactically annotate each type of disfluency element in an actual corpus. This approach to processing spoken data should contribute to improving natural language understanding performance in domains that must handle speech, such as dialogue systems.
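As a rough, purely hypothetical illustration of the four disfluency types listed in the abstract, the sketch below tags spoken-language tokens with a type before they reach a parser; the enum names, the toy filler lexicon, and the heuristics are assumptions for illustration, not the paper's annotation guidelines.

```python
# Hypothetical sketch: labelling spoken-language tokens with one of the four
# disfluency types from the abstract before dependency parsing. The type
# names, example utterance, and keyword list are illustrative assumptions.
from enum import Enum, auto

class Disfluency(Enum):
    DISCOURSE_MARKER = auto()  # 담화 표지 (filler / discourse marker)
    REPAIR = auto()            # 수정 표현 (speaker self-correction)
    REPETITION = auto()        # 반복 표현 (same token repeated)
    INSERTION = auto()         # 삽입 표현 (parenthetical aside)

DISCOURSE_MARKERS = {"음", "어", "그", "뭐"}  # toy lexicon, assumption only

def tag_disfluencies(tokens):
    """Return (token, Disfluency or None) pairs using two toy heuristics:
    a discourse-marker lexicon and adjacent-token repetition."""
    tagged = []
    for i, tok in enumerate(tokens):
        if tok in DISCOURSE_MARKERS:
            tagged.append((tok, Disfluency.DISCOURSE_MARKER))
        elif i > 0 and tok == tokens[i - 1]:
            tagged.append((tok, Disfluency.REPETITION))
        else:
            tagged.append((tok, None))
    return tagged

# Toy utterance: a filler followed by a repeated subject.
print(tag_disfluencies(["음", "나는", "나는", "밥을", "먹었다"]))
```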


A Multi-task Self-attention Model Using Pre-trained Language Models on Universal Dependency Annotations

  • Kim, Euhee
    • Journal of the Korea Society of Computer and Information / v.27 no.11 / pp.39-46 / 2022
  • In this paper, we propose a multi-task model that simultaneously performs part-of-speech tagging, lemmatization, and dependency parsing on the UD Korean Kaist v2.3 corpus. The model combines the self-attention of BERT with graph-based biaffine attention, fine-tuning multilingual BERT and two Korean-specific BERT models, KR-BERT and KoBERT. The performance of the proposed model is compared and analyzed across the multilingual BERT and the two Korean-specific BERT language models.
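A minimal sketch of the kind of architecture the abstract describes is given below: a shared pre-trained encoder feeding a token-level POS head and a graph-based biaffine arc-scoring head. The checkpoint name `bert-base-multilingual-cased`, the layer sizes, and the omission of the lemmatization head and the DEPREL label classifier are simplifying assumptions, not the paper's exact setup.

```python
# Hypothetical sketch of a multi-task parser: shared BERT encoder, a POS
# tagging head, and a biaffine head that scores every (dependent, head) pair.
import torch
import torch.nn as nn
from transformers import AutoModel

class BiaffineHead(nn.Module):
    """Scores arcs: scores[b, i, j] = score that token j heads token i."""
    def __init__(self, hidden_size: int, arc_dim: int = 256):
        super().__init__()
        self.dep_mlp = nn.Sequential(nn.Linear(hidden_size, arc_dim), nn.ReLU())
        self.head_mlp = nn.Sequential(nn.Linear(hidden_size, arc_dim), nn.ReLU())
        # Biaffine weight; the extra column absorbs a per-head bias term.
        self.W = nn.Parameter(torch.randn(arc_dim, arc_dim + 1) * 0.01)

    def forward(self, hidden):                  # hidden: (batch, seq, hid)
        dep = self.dep_mlp(hidden)              # (batch, seq, arc_dim)
        head = self.head_mlp(hidden)            # (batch, seq, arc_dim)
        ones = torch.ones(*head.shape[:2], 1, device=head.device)
        head = torch.cat([head, ones], dim=-1)  # append bias feature
        return torch.einsum("bid,dh,bjh->bij", dep, self.W, head)

class MultiTaskParser(nn.Module):
    """Shared encoder with separate heads for POS tagging and arc scoring."""
    def __init__(self, model_name="bert-base-multilingual-cased", n_pos=17):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.pos_head = nn.Linear(hidden, n_pos)  # one score per UPOS tag
        self.arc_head = BiaffineHead(hidden)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids, attention_mask=attention_mask)
        hidden = out.last_hidden_state            # (batch, seq, hid)
        return self.pos_head(hidden), self.arc_head(hidden)
```

A full parser in this style would add a second biaffine classifier over dependency labels and decode the arc-score matrix with a maximum-spanning-tree algorithm at inference time.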