• Title/Summary/Keyword: zero anaphora


Optimality Theory in Semantics and the Anaphora Resolution in Korean: An Adumbration

  • Hong, Min-Pyo / Language and Information / v.6 no.2 / pp.129-152 / 2002
  • This paper argues for the need to adopt a conceptually radical approach to zero anaphora resolution in Korean. It is shown that a number of apparently conflicting constraints, mostly motivated by lexical, syntactic, semantic, and pragmatic factors, are involved in determining the referential identity of zero pronouns in Korean. It is also argued that some of the major concepts of Optimality Theory can provide a good theoretical framework to predict the antecedents of zero pronouns in general. A partial formalization of OT-based constraints at the morpho-syntactic and lexico-semantic levels is provided. It is argued that the lexico-semantic restrictions on adjacent expressions play the most important role in the anaphora resolution process, along with a variant of the binding principle formulated in semantic terms. Other pragmatically motivated constraints that incorporate some important intuitions of Centering Theory are proposed as well.
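
As a purely illustrative aside (not the paper's formalization), the sketch below shows the basic Optimality-Theoretic mechanics the abstract appeals to: candidate antecedents are evaluated against ranked, violable constraints, and the candidate with the lexicographically smallest violation profile wins. The constraint names and candidate features are invented for the example.

```python
# Toy illustration of OT-style antecedent selection for a zero pronoun.
# The constraints and candidate features are invented for this example;
# they are not the constraints formalized in the paper.

# Candidate antecedents for a zero subject, described by simple features.
candidates = [
    {"form": "Minsu",  "subject": True,  "animate": True,  "distance": 1},
    {"form": "school", "subject": False, "animate": False, "distance": 1},
    {"form": "Yuna",   "subject": True,  "animate": True,  "distance": 2},
]

# Each constraint returns a number of violations (0 = satisfied).
# Constraints are listed from highest-ranked to lowest-ranked.
def selectional_fit(c):   # hypothetical: the zero pronoun's predicate wants an animate filler
    return 0 if c["animate"] else 1

def prefer_subject(c):    # Centering-style preference for subject antecedents
    return 0 if c["subject"] else 1

def prefer_recent(c):     # one violation per intervening sentence
    return c["distance"] - 1

RANKED_CONSTRAINTS = [selectional_fit, prefer_subject, prefer_recent]

def optimal_antecedent(cands):
    """Return the candidate whose violation profile is lexicographically
    minimal, i.e. the OT winner under the given constraint ranking."""
    return min(cands, key=lambda c: tuple(con(c) for con in RANKED_CONSTRAINTS))

print(optimal_antecedent(candidates)["form"])   # -> Minsu
```

Reordering RANKED_CONSTRAINTS can change which candidate wins, which is exactly the sense in which OT constraints are ranked and violable rather than absolute.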


Anaphoricity Determination of Zero Pronouns for Intra-sentential Zero Anaphora Resolution (문장 내 영 조응어 해석을 위한 영대명사의 조응성 결정)

  • Kim, Kye-Sung; Park, Seong-Bae; Park, Se-Young; Lee, Sang-Jo / Journal of KIISE: Software and Applications / v.37 no.12 / pp.928-935 / 2010
  • Identifying the referents of omitted elements in a text is an important task for many natural language processing applications such as machine translation and information extraction. These omitted elements are often called zero anaphors or zero pronouns and are regarded as one of the most common forms of reference. However, since not all zero elements refer to explicit objects occurring in the same text, recent work on zero anaphora resolution has attempted to determine the anaphoricity of zero pronouns. This paper focuses on intra-sentential anaphoricity determination for subject zero pronouns, which occur frequently in Korean. Unlike previous studies based on pair-wise comparisons, this study determines the intra-sentential anaphoricity of zero pronouns by directly learning the structure of the clauses in which either non-anaphoric or inter-sentential subject zero pronouns occur. The proposed method outperforms baseline methods, and anaphoricity determination of zero pronouns will play an important role in resolving zero anaphora.
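
The abstract does not give implementation details; as a hedged stand-in for the clause-structure learning it describes, the following sketch reduces each clause to a few hand-made structural features and trains a generic linear classifier with scikit-learn. The feature names, toy data, labels, and the choice of LinearSVC are assumptions for illustration only.

```python
# Illustrative stand-in only: a linear classifier over hand-made clause
# features, not the structural learning method used in the paper.
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC

# Hypothetical clauses whose subject is omitted, labeled 1 if the zero
# subject is anaphoric (has an antecedent elsewhere) and 0 otherwise.
train_clauses = [
    {"ending": "-go",    "has_matrix_subject": True,  "clause_type": "conjunctive"},
    {"ending": "-myeon", "has_matrix_subject": True,  "clause_type": "conditional"},
    {"ending": "-da",    "has_matrix_subject": False, "clause_type": "declarative"},
    {"ending": "-neun",  "has_matrix_subject": False, "clause_type": "adnominal"},
]
train_labels = [1, 1, 0, 0]   # toy labels, not linguistic claims

vec = DictVectorizer()                      # one-hot encodes the string features
X = vec.fit_transform(train_clauses)
clf = LinearSVC().fit(X, train_labels)

test = {"ending": "-go", "has_matrix_subject": True, "clause_type": "conjunctive"}
label = clf.predict(vec.transform([test]))[0]
print("anaphoric" if label == 1 else "non-anaphoric")
```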

Splitting Algorithms and Recovery Rules for Zero Anaphora Resolution in Korean Complex Sentences (한국어 복합문에서의 제로 대용어 처리를 위한 분해 알고리즘과 복원규칙)

  • Kim, Mi-Jin; Park, Mi-Sung; Koo, Sang-Ok; Kang, Bo-Yeong; Lee, Sang-Jo / Journal of KIISE: Software and Applications / v.29 no.10 / pp.736-746 / 2002
  • Zero anaphora occurs frequently in Korean complex sentences and makes the interpretation of sentences difficult. This paper proposes splitting algorithms and zero anaphora recovery rules for handling zero anaphora, and presents a resolution methodology. The paper covers the quotations, conjunctive sentences, and embedded sentences found among the complex sentences in newspaper articles, excluding embedded sentences with auxiliary verbs. Quotations are handled with the equivalent noun phrase deletion rule under the subject person constraint, nominalized embedded sentences with the equivalent noun phrase deletion rule, adnominal embedded sentences with the relative noun phrase deletion rule, and conjunctive sentences with the conjunction reduction rule applied in reverse. A classified table of the endings involved in forming complex sentences is used to split them, and the syntactic rules that were applied when the elements were omitted are used in reverse to recover the zero anaphora. The proposed rules achieved 83.53% perfect resolution and 11.52% partial resolution.
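
To make the "conjunction reduction in reverse" idea concrete, here is a toy sketch under strong simplifying assumptions: the ending list, the whitespace tokenization, and the crude particle-based subject detection are all invented for illustration and are not the paper's classified ending table or recovery rules.

```python
# Toy sketch of splitting a conjunctive sentence and restoring the omitted
# subject of the second clause by conjunction reduction in reverse.

CONJUNCTIVE_ENDINGS = ["고", "지만"]   # illustrative connective endings (surface forms)

def split_conjunctive(sentence):
    """Split a complex sentence into two clauses at a token that ends in a
    conjunctive ending (crude stand-in for the paper's ending table)."""
    tokens = sentence.split()
    for i, token in enumerate(tokens[:-1]):        # never split at the last token
        if any(token.endswith(e) for e in CONJUNCTIVE_ENDINGS):
            return [" ".join(tokens[:i + 1]), " ".join(tokens[i + 1:])]
    return [sentence]

def find_subject(clause):
    """Very crude subject finder: the first token marked with 이/가/은/는."""
    for token in clause.split():
        if token[-1] in "이가은는":
            return token
    return None

def recover_zero_subject(sentence):
    clauses = split_conjunctive(sentence)
    if len(clauses) == 2 and find_subject(clauses[1]) is None:
        subj = find_subject(clauses[0])
        if subj is not None:
            # Conjunction reduction in reverse: copy the first clause's subject.
            clauses[1] = subj + " " + clauses[1]
    return clauses

# "Minsu-ga school-e ga-go (Ø) book-eul read" style example
print(recover_zero_subject("민수가 학교에 가고 책을 읽었다"))
# -> ['민수가 학교에 가고', '민수가 책을 읽었다']
```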

Restoring Encyclopedia Title Words Using a Zero Anaphora Resolution Technique (무형대용어 해결 기술을 이용한 백과사전 표제어 복원)

  • Hwang, Min-Kook; Kim, Young-Tae; Ra, Dongyul; Lim, Soojong / Annual Conference on Human and Language Technology / 2014.10a / pp.65-69 / 2014
  • In Korean sentences, a case argument of a predicate is frequently omitted when it can be inferred from context; this phenomenon is known as zero anaphora. Finding the antecedent (a noun phrase) that can fill a zero anaphor is the same kind of problem as anaphora resolution. Such omission also occurs frequently in encyclopedia-type documents such as encyclopedias and Wikipedia; in particular, zero anaphora arises often when the title word can serve as the antecedent. Encyclopedia-type documents are widely used as sources for answer extraction in question answering (QA) systems, but without restoring the omitted title words they can hardly provide useful information. This paper proposes a system that restores omitted title words based on zero anaphora resolution.


Zero-anaphora resolution in Korean based on deep language representation model: BERT

  • Kim, Youngtae; Ra, Dongyul; Lim, Soojong / ETRI Journal / v.43 no.2 / pp.299-312 / 2021
  • It is necessary to achieve high performance in the task of zero anaphora resolution (ZAR) to fully understand texts in Korean, Japanese, Chinese, and various other languages. Deep-learning-based models are being employed for building ZAR systems, owing to the success of deep learning in recent years. However, the objective of building a high-quality ZAR system is far from being achieved even with these models. To enhance current ZAR techniques, we fine-tuned a pretrained bidirectional encoder representations from transformers (BERT) model. Notably, BERT is a general language representation model that enables systems to utilize deep bidirectional contextual information in natural language text. It extensively exploits the attention mechanism of the sequence-transduction model Transformer. In our model, classification is performed simultaneously for all the words in the input word sequence to decide whether each word can be an antecedent. We seek end-to-end learning by disallowing any use of hand-crafted or dependency-parsing features. Experimental results show that, compared with other models, our approach can significantly improve the performance of ZAR.
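
A minimal sketch of this token-classification formulation, assuming the Hugging Face transformers API and using the public bert-base-multilingual-cased checkpoint as a placeholder; the authors' actual Korean BERT, label scheme, and training setup are not reproduced here.

```python
# Minimal sketch of ZAR as token classification: every token in the input is
# labeled as antecedent (1) or not (0). The checkpoint, label scheme, and toy
# sentence are placeholders, not the authors' actual model or training setup.
import torch
from transformers import BertTokenizerFast, BertForTokenClassification

MODEL_NAME = "bert-base-multilingual-cased"   # placeholder multilingual checkpoint

tokenizer = BertTokenizerFast.from_pretrained(MODEL_NAME)
model = BertForTokenClassification.from_pretrained(MODEL_NAME, num_labels=2)

# A sentence containing a zero anaphor; during fine-tuning, tokens belonging
# to the antecedent span would be labeled 1 and all other tokens 0.
sentence = "민수가 학교에 가고 책을 읽었다"
enc = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    logits = model(**enc).logits              # shape: (1, seq_len, 2)
pred = logits.argmax(dim=-1)[0]               # per-token antecedent decision

tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0].tolist())
# The classification head is untrained here, so the output is arbitrary;
# after fine-tuning, the tokens predicted as 1 form the antecedent.
print([t for t, p in zip(tokens, pred) if p == 1])
```

In a real system the classifier head would be fine-tuned with token-level cross-entropy on ZAR annotations; the snippet only shows the input/output shapes of the formulation.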

Building a Korean Zero-Anaphora Detection and Resolution Corpus in Korean Discourse Using UWordMap (담화에서의 어휘지도를 이용한 한국어 무형대용어 탐지 및 해결 말뭉치 생성)

  • Yoon, Ho; Namgoong, Young; Park, Hyuk-Ro; Kim, Jae-Hoon / Annual Conference on Human and Language Technology / 2020.10a / pp.591-594 / 2020
  • In discourse, sentence constituents are omitted when doing so does not hinder the delivery of meaning. Such an omitted constituent is called a zero anaphor. Restoring zero anaphora requires both zero anaphora detection and zero anaphora resolution: detection finds the obligatory constituents omitted within a sentence, and resolution finds the constituent that properly fills each zero anaphor. This paper proposes a method for building a corpus for zero anaphora detection and resolution in discourse. First, zero anaphors in the existing Sejong spoken corpus are restored using the lexical map UWordMap. Specifically, homograph tagging and the lexical map are used to restore zero anaphors, errors in the restored zero anaphors are corrected, and their antecedents are determined manually, yielding a zero anaphora resolution corpus. In total, 126,720 zero anaphors were restored from 58,896 sentences, with a precision of about 90%. We plan to improve performance in the future using methods such as deep learning.
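
As a rough illustration of what one record in such a detection-and-resolution corpus might hold, the sketch below defines a hypothetical annotation schema; the field names are invented and do not reflect the corpus's actual format.

```python
# Hypothetical record layout for one restored zero anaphor in such a corpus;
# the field names are illustrative, not the corpus's actual annotation scheme.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ZeroAnaphorRecord:
    sentence_id: int                        # sentence in the spoken corpus
    predicate: str                          # predicate missing an obligatory argument
    omitted_case: str                       # e.g. "subject" or "object"
    restored_form: str                      # form restored with help of the lexical map
    antecedent_sentence_id: Optional[int]   # where the antecedent occurs, if any
    antecedent: Optional[str]               # manually decided antecedent noun phrase

example = ZeroAnaphorRecord(
    sentence_id=12, predicate="읽었다", omitted_case="subject",
    restored_form="민수", antecedent_sentence_id=11, antecedent="민수",
)
print(example)
```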


Restoring Omitted Sentence Constituents in Encyclopedia Documents Using Structural SVM (Structural SVM을 이용한 백과사전 문서 내 생략 문장성분 복원)

  • Hwang, Min-Kook; Kim, Youngtae; Ra, Dongyul; Lim, Soojong; Kim, Hyunki / Journal of Intelligence and Information Systems / v.21 no.2 / pp.131-150 / 2015
  • Omission of noun phrases in obligatory cases is a common phenomenon in Korean and Japanese sentences that is not observed in English. When an argument of a predicate can be filled with a noun phrase co-referential with the title, the argument is omitted especially easily in encyclopedia texts. The omitted noun phrase is called a zero anaphor or zero pronoun. Encyclopedias such as Wikipedia are a major source for information extraction by intelligent applications such as information retrieval and question answering systems, but the omission of noun phrases lowers the quality of information extraction. This paper deals with the problem of developing a system that can restore omitted noun phrases in encyclopedia documents. The problem our system deals with is very similar to zero anaphora resolution, one of the important problems in natural language processing. A noun phrase in the text that can be used for restoration is called an antecedent; an antecedent must be co-referential with the zero anaphor. While in zero anaphora resolution the candidates for the antecedent are only the noun phrases in the same text, in our problem the title is also a candidate. In our system, the first stage detects the zero anaphor. In the second stage, antecedent search is carried out over the candidates. If antecedent search fails, an attempt is made in the third stage to use the title as the antecedent. The main characteristic of our system is the use of a structural SVM for finding the antecedent. The noun phrases in the text that appear before the position of the zero anaphor comprise the search space. The main technique in previously proposed methods is to perform binary classification over all the noun phrases in the search space and select as the antecedent the noun phrase classified as an antecedent with the highest confidence. In this paper, however, we propose that antecedent search be viewed as the problem of assigning antecedent-indicator labels to a sequence of noun phrases; in other words, sequence labeling is employed for antecedent search in the text. We are the first to suggest this idea. To perform sequence labeling, we use a structural SVM that receives a sequence of noun phrases as input and returns a sequence of labels as output. An output label takes one of two values, indicating whether or not the corresponding noun phrase is the antecedent. The structural SVM we used is based on a modified Pegasos algorithm, which applies a subgradient descent method to the optimization problem. To train and test our system, we selected a set of Wikipedia texts and constructed an annotated corpus that provides gold-standard answers such as zero anaphors and their antecedents. Training examples prepared from the annotated corpus are used to train the SVMs and test the system. For zero anaphor detection, sentences are parsed by a syntactic analyzer and omitted subject or object cases are identified; thus the performance of our system depends on that of the syntactic analyzer, which is a limitation of our system. When an antecedent is not found in the text, our system tries to use the title to restore the zero anaphor, based on binary classification with a regular SVM. The experiment showed that our system achieves F1 = 68.58%, which means that a state-of-the-art system can be developed with our technique. It is expected that future work enabling the system to utilize semantic information can lead to a significant performance improvement.
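
In the sequence-labeling view described above, exactly one noun phrase in the sequence should receive the antecedent label, so structured inference reduces to an argmax over candidate scores, and a Pegasos-style subgradient step handles training. The sketch below illustrates that view with invented features and toy data; it is not the authors' feature set, corpus, or exact solver.

```python
# Simplified sketch of antecedent search as structured prediction: the output
# is "which noun phrase in the sequence is the antecedent", inference is an
# argmax over linear scores, and training takes Pegasos-style subgradient
# steps on the structured hinge loss. Features and data are invented.
import numpy as np

def featurize(np_candidate):
    """Map a candidate noun phrase to a small feature vector (illustrative)."""
    return np.array([
        1.0,                                    # bias
        1.0 if np_candidate["subject"] else 0.0,
        1.0 if np_candidate["same_clause"] else 0.0,
        -float(np_candidate["distance"]),       # prefer closer candidates
    ])

def predict(w, candidates):
    """Structured inference: pick the highest-scoring noun phrase."""
    return int(np.argmax([w @ featurize(c) for c in candidates]))

def pegasos_step(w, candidates, gold_idx, t, lam=0.01):
    """One subgradient step on the structured hinge loss (margin 1)."""
    scores = np.array([w @ featurize(c) for c in candidates])
    loss_aug = scores + 1.0
    loss_aug[gold_idx] -= 1.0                   # no margin against the gold label
    viol = int(np.argmax(loss_aug))             # most violating candidate
    eta = 1.0 / (lam * t)
    w = (1 - eta * lam) * w                     # regularization shrinkage
    if viol != gold_idx:
        w = w + eta * (featurize(candidates[gold_idx]) - featurize(candidates[viol]))
    return w

# Toy sequence of noun phrases preceding a zero anaphor; gold antecedent is index 0.
candidates = [
    {"subject": True,  "same_clause": False, "distance": 2},
    {"subject": False, "same_clause": True,  "distance": 1},
    {"subject": False, "same_clause": False, "distance": 3},
]
w = np.zeros(4)
for t in range(1, 51):
    w = pegasos_step(w, candidates, gold_idx=0, t=t)
print("predicted antecedent index:", predict(w, candidates))   # -> 0
```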

Zero Anaphora Resolution in Korean Complex Sentences (한국어 복합문의 영 대용어 해결)

  • 김미진; 강보영; 구상옥; 박미성; 이상조 / Proceedings of the Korean Information Science Society Conference / 2002.10d / pp.694-696 / 2002
  • This paper proposes complex-sentence splitting algorithms and zero anaphora recovery rules for resolving zero anaphora in Korean complex sentences, and presents a resolution method. Complex sentences are split using the inflectional endings involved in forming them, and zero anaphors are recovered by applying in reverse the syntactic rules that were applied when the elements were omitted. Using the proposed method, 83.53% of all zero anaphors can be fully resolved and 11.52% can be partially resolved.


Centering Theory and Argument Deletion in Spoken Korean (센터링 이론과 대화체에서의 논항 생략 현상)

  • 홍민표 / Korean Journal of Cognitive Science / v.11 no.1 / pp.9-24 / 2000
  • This paper analyzes the distribution and classification of unrealized arguments of a predicate, often called zero pronouns, in spoken Korean. Based on the transcript of a one-hour-long dialogue recorded from public radio stations, I present statistical data on argument ellipsis in Korean with respect to the frequency of zero pronouns as well as the nature of their antecedents. I go further to review some of the previous efforts to identify the discourse-theoretic functions of zero pronouns in the framework of Centering Theory, and propose that zero pronouns in spoken Korean be divided into center-insensitive and center-sensitive classes. I also point out a couple of language-particular idiosyncrasies found in Korean, such as morpho-syntactic elements and encyclopaedic knowledge, that interact with center management in ongoing discourse and often lead to difficulties in applying the centering rules and constraints to Korean.


Korean Zero Anaphora Resolution Guidelines (한국어 생략어복원 가이드라인)

  • Ryu, Jihee; Lim, Joon-Ho; Lim, Soojong; Kim, Hyunki / Annual Conference on Human and Language Technology / 2017.10a / pp.213-219 / 2017
  • In speech and writing, people commonly omit information that can be inferred. Humans have little difficulty understanding omitted information by inferring it from context, but computers fail to take the omitted information into account and thus cannot fully understand the given input. Believing that this problem can be addressed through zero anaphora restoration, this paper defines Korean zero anaphora restoration and proposes guidelines that specify the restoration targets and include tagging examples for building the corpora needed to develop the technology. The ultimate goal of this work is to contribute, through corpus construction and technology development under these guidelines, to improving the quality of Korean question answering systems such as Exobrain.
