• Title/Summary/Keyword: sequence labeling problem


Korean Semantic Role Labeling Using Structured SVM (Structural SVM 기반의 한국어 의미역 결정)

  • Lee, Changki;Lim, Soojong;Kim, Hyunki
    • Journal of KIISE / v.42 no.2 / pp.220-226 / 2015
  • Semantic role labeling (SRL) systems determine the semantic role labels of the arguments of predicates in natural language text. An SRL system usually performs four tasks in sequence: Predicate Identification (PI), Predicate Classification (PC), Argument Identification (AI), and Argument Classification (AC). In this paper, we use the Korean PropBank to develop a Korean semantic role labeling system that performs sequence labeling with a structured Support Vector Machine (SVM). The results of our experiments on the Korean PropBank dataset show that our method obtains a 97.13% F1 score on Predicate Identification and Classification (PIC) and a 76.96% F1 score on Argument Identification and Classification (AIC).
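
Framing argument identification and classification as sequence labeling means assigning one BIO-style role tag per eojeol and decoding the best tag sequence jointly, which is the structured-prediction step a structural SVM (or a CRF) relies on. The sketch below is a minimal, hypothetical illustration of that decoding with hand-set emission and transition scores; it is not the authors' implementation, and the tag set and scores are invented for the example.

```python
# Minimal sketch: decoding BIO-style argument labels for one sentence with Viterbi.
# Emission/transition scores are hand-set toy values, not learned model weights.

TAGS = ["O", "B-ARG0", "I-ARG0", "B-ARG1", "I-ARG1"]

def viterbi(emissions, transitions):
    """emissions: list of {tag: score} per token; transitions: {(prev, cur): score}."""
    n = len(emissions)
    best = [{t: (emissions[0].get(t, -1e9), None) for t in TAGS}]
    for i in range(1, n):
        row = {}
        for cur in TAGS:
            prev_tag, score = max(
                ((p, best[i - 1][p][0] + transitions.get((p, cur), 0.0)) for p in TAGS),
                key=lambda x: x[1],
            )
            row[cur] = (score + emissions[i].get(cur, -1e9), prev_tag)
        best.append(row)
    # Backtrack from the best final tag.
    tag = max(TAGS, key=lambda t: best[-1][t][0])
    path = [tag]
    for i in range(n - 1, 0, -1):
        tag = best[i][tag][1]
        path.append(tag)
    return list(reversed(path))

if __name__ == "__main__":
    # Toy scores for a 3-eojeol sentence; an I- tag is discouraged right after "O".
    emissions = [
        {"B-ARG0": 2.0, "O": 0.5},
        {"O": 1.5, "I-ARG0": 0.2},
        {"B-ARG1": 1.8, "O": 0.4},
    ]
    transitions = {("O", "I-ARG0"): -5.0, ("O", "I-ARG1"): -5.0}
    print(viterbi(emissions, transitions))  # ['B-ARG0', 'O', 'B-ARG1']
```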

The Sequence Labeling Approach for Text Alignment of Plagiarism Detection

  • Kong, Leilei;Han, Zhongyuan;Qi, Haoliang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.9 / pp.4814-4832 / 2019
  • Plagiarism detection increasingly relies on text alignment, which extracts the plagiarized passages from a pair consisting of a suspicious document and its source document. Heuristic methods have achieved excellent performance in text alignment, but improving them further depends largely on expert experience, which limits their capacity for continuous improvement. Machine learning offers a way to address this problem. Considering the positional relations and the context of pairs of text segments, we formalize the text alignment task as a sequence labeling problem, improving the current methods at the model level. In particular, this paper proposes using a probabilistic graphical model to tag the observed sequence of text-segment pairs, and presents a sequence labeling approach for text alignment in plagiarism detection based on Conditional Random Fields. The proposed approach is evaluated on the PAN@CLEF 2012 artificial high-obfuscation plagiarism corpus and the simulated paraphrase plagiarism corpus, and compared with the methods that achieved the best performance at PAN@CLEF 2012, 2013, and 2014. Experimental results demonstrate that the proposed approach significantly outperforms these state-of-the-art methods.
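
As a rough illustration of the formulation, the sketch below tags a sequence of (suspicious, source) segment pairs as plagiarized or not with a linear-chain CRF. It is a minimal sketch using the third-party sklearn-crfsuite package, and the toy features (token overlap, position, length difference) are invented stand-ins for the paper's actual feature set.

```python
# Minimal sketch: text alignment as sequence labeling over segment pairs with a CRF.
# Requires the third-party package sklearn-crfsuite; features here are toy stand-ins.
import sklearn_crfsuite

def pair_features(susp_seg, src_seg, position):
    """Very simplified features for one (suspicious, source) segment pair."""
    susp_tokens, src_tokens = set(susp_seg.split()), set(src_seg.split())
    overlap = len(susp_tokens & src_tokens) / max(len(susp_tokens | src_tokens), 1)
    return {"overlap": overlap,
            "position": position,
            "len_diff": abs(len(susp_tokens) - len(src_tokens))}

# Each training instance is the sequence of segment pairs from one document pair.
X_train = [[pair_features("the cat sat on the mat", "a cat sat on a mat", 0),
            pair_features("weather is nice today", "the cat sat on the mat", 1)]]
y_train = [["plag", "no-plag"]]  # one label per segment pair

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X_train, y_train)
print(crf.predict(X_train))  # predicted labels for each segment-pair sequence
```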

Korean Semantic Role Labeling using Input-feeding RNN Search Model with CopyNet (Input-feeding RNN Search 모델과 CopyNet을 이용한 한국어 의미역 결정)

  • Bae, Jangseong;Lee, Changki
    • Annual Conference on Human and Language Technology / 2016.10a / pp.300-304 / 2016
  • In this paper, we approach Korean semantic role labeling not as a sequence labeling problem but as a sequence-to-sequence learning problem, taking an end-to-end approach that requires neither a syntactic parsing step nor feature engineering. Using a syllable-level RNN Search model, an input sentence given as a sequence of syllables is transformed into eojeols (Korean word units) annotated with semantic roles. We also apply input-feeding and CopyNet, two techniques developed to improve sequence-to-sequence performance, to Korean semantic role labeling. Experimental results on the Korean PropBank data show a label-level f1-score of 79.42% and an eojeol-level f1-score of 71.58%.
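
For reference, input-feeding means the attentional hidden vector from the previous decoder step is concatenated to the current decoder input, so earlier alignment decisions inform the next one. Below is a minimal PyTorch sketch of one such decoder step with Bahdanau-style (RNN Search) attention; the dimensions and names are invented, and CopyNet and the syllable-level details are omitted, so this is an illustrative sketch rather than the authors' model.

```python
# Minimal sketch: one decoder step of an attention-based (RNN Search) decoder
# with input-feeding. CopyNet and syllable-level details are omitted for brevity.
import torch
import torch.nn as nn
import torch.nn.functional as F

class InputFeedingDecoderStep(nn.Module):
    def __init__(self, emb_dim, enc_dim, dec_dim, vocab_size):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Decoder RNN consumes [token embedding ; previous attentional vector].
        self.rnn = nn.GRUCell(emb_dim + dec_dim, dec_dim)
        self.attn_score = nn.Linear(enc_dim + dec_dim, 1)
        self.attn_combine = nn.Linear(enc_dim + dec_dim, dec_dim)
        self.out = nn.Linear(dec_dim, vocab_size)

    def forward(self, prev_token, prev_state, prev_attn_vec, enc_states):
        # prev_token: (batch,), prev_state/prev_attn_vec: (batch, dec_dim)
        # enc_states: (batch, src_len, enc_dim)
        rnn_in = torch.cat([self.embed(prev_token), prev_attn_vec], dim=-1)
        state = self.rnn(rnn_in, prev_state)
        # Additive-style attention over encoder states.
        expanded = state.unsqueeze(1).expand(-1, enc_states.size(1), -1)
        scores = self.attn_score(torch.cat([enc_states, expanded], dim=-1)).squeeze(-1)
        weights = F.softmax(scores, dim=-1)                       # (batch, src_len)
        context = torch.bmm(weights.unsqueeze(1), enc_states).squeeze(1)
        attn_vec = torch.tanh(self.attn_combine(torch.cat([context, state], dim=-1)))
        return self.out(attn_vec), state, attn_vec  # attn_vec is fed to the next step

# Toy usage with random encoder states and a single batch element.
step = InputFeedingDecoderStep(emb_dim=8, enc_dim=16, dec_dim=16, vocab_size=50)
enc_states = torch.randn(1, 5, 16)
logits, state, attn_vec = step(torch.tensor([1]), torch.zeros(1, 16),
                               torch.zeros(1, 16), enc_states)
print(logits.shape)  # torch.Size([1, 50])
```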

Korean Semantic Role Labeling using Stacked Bidirectional LSTM-CRFs (Stacked Bidirectional LSTM-CRFs를 이용한 한국어 의미역 결정)

  • Bae, Jangseong;Lee, Changki
    • Journal of KIISE / v.44 no.1 / pp.36-43 / 2017
  • Syntactic information represents the dependency relations between predicates and arguments and helps improve the performance of Semantic Role Labeling (SRL) systems. However, syntactic analysis adds computational overhead and can propagate incorrect syntactic information. To avoid this, we exclude syntactic information and use only morpheme information to build our SRL system. In this study, we propose an end-to-end SRL system that uses only morpheme information with a Stacked Bidirectional LSTM-CRFs model, extending the LSTM RNN, which is well suited to sequence labeling problems. Our experimental results show that the proposed model outperforms the other models compared.
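
A minimal sketch of the kind of stacked BiLSTM-CRF tagger described above, built from morpheme-level inputs only. It assumes the third-party pytorch-crf package for the CRF layer, and the dimensions, tag set, and toy data are invented for illustration rather than taken from the paper.

```python
# Minimal sketch: stacked BiLSTM with a CRF output layer for morpheme-level tagging.
# Assumes the third-party pytorch-crf package (pip install pytorch-crf).
import torch
import torch.nn as nn
from torchcrf import CRF

class StackedBiLSTMCRF(nn.Module):
    def __init__(self, vocab_size, num_tags, emb_dim=32, hidden=64, layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # "Stacked" = multiple BiLSTM layers; their outputs feed per-token tag scores.
        self.lstm = nn.LSTM(emb_dim, hidden, num_layers=layers,
                            bidirectional=True, batch_first=True)
        self.emit = nn.Linear(2 * hidden, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def loss(self, tokens, tags, mask):
        emissions = self.emit(self.lstm(self.embed(tokens))[0])
        return -self.crf(emissions, tags, mask=mask)     # negative log-likelihood

    def decode(self, tokens, mask):
        emissions = self.emit(self.lstm(self.embed(tokens))[0])
        return self.crf.decode(emissions, mask=mask)     # best tag sequence per sentence

# Toy usage: one batch of two short "morpheme" sequences with random ids and tags.
model = StackedBiLSTMCRF(vocab_size=100, num_tags=5)
tokens = torch.randint(0, 100, (2, 6))
tags = torch.randint(0, 5, (2, 6))
mask = torch.ones(2, 6, dtype=torch.bool)
print(model.loss(tokens, tags, mask).item())
print(model.decode(tokens, mask))
```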

Compound Noun Decomposition by using Syllable-based Embedding and Deep Learning (음절 단위 임베딩과 딥러닝 기법을 이용한 복합명사 분해)

  • Lee, Hyun Young;Kang, Seung Shik
    • Smart Media Journal / v.8 no.2 / pp.74-79 / 2019
  • Traditional compound noun decomposition algorithms often struggle to split a compound noun into its constituent nouns when an unregistered unit noun is involved. Such cases are hard for traditional approaches to handle because it is impossible to register every unit noun, including proper nouns, coined words, and foreign words, in a dictionary in advance. To solve this problem, this paper defines compound noun decomposition as a tag sequence labeling problem and proposes a decomposition method that uses syllable-level embeddings and deep learning. To recognize unregistered unit nouns without constructing a unit noun dictionary, each syllable of a compound noun is represented in a continuous vector space and the compound noun is decomposed into unit nouns using an LSTM with a linear-chain CRF.
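
To make the tagging formulation concrete, the toy sketch below shows how a compound noun's syllables map to B/I boundary tags and how a predicted tag sequence is turned back into unit nouns. The example word and the "B opens a new unit" convention are illustrative assumptions; in the paper the tags are predicted by the LSTM + linear-chain CRF model, not hand-assigned.

```python
# Toy sketch: compound noun decomposition as syllable-level B/I tagging.
# Tags here are given by hand; in the paper they come from an LSTM + linear-chain CRF.

def spans_to_tags(compound, unit_nouns):
    """Encode gold unit-noun boundaries as one tag per syllable (B = starts a unit)."""
    tags = []
    for noun in unit_nouns:
        tags += ["B"] + ["I"] * (len(noun) - 1)
    assert "".join(unit_nouns) == compound and len(tags) == len(compound)
    return tags

def tags_to_units(compound, tags):
    """Decode a predicted tag sequence back into unit nouns."""
    units, current = [], ""
    for syllable, tag in zip(compound, tags):
        if tag == "B" and current:
            units.append(current)
            current = ""
        current += syllable
    units.append(current)
    return units

compound = "국민연금공단"   # example compound noun ("national pension service")
print(spans_to_tags(compound, ["국민", "연금", "공단"]))        # ['B','I','B','I','B','I']
print(tags_to_units(compound, ["B", "I", "B", "I", "B", "I"]))  # ['국민', '연금', '공단']
```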

The Research of Q-edge Labeling on Binomial Trees related to the Graph Embedding (그래프 임베딩과 관련된 이항 트리에서의 Q-에지 번호매김에 관한 연구)

  • Kim Yong-Seok
    • Journal of the Institute of Electronics Engineers of Korea CI / v.42 no.1 / pp.27-34 / 2005
  • In this paper, we propose a Q-edge labeling method related to the graph embedding problem in binomial trees. The result makes it possible to design new reliable interconnection networks with maximum connectivity by using the Q-edge labels as the jump sequence of a circulant graph. The circulant graph is a generalization of the Harary graph, which solves the optimization problem of designing a maximum-connectivity graph with n vertices and e edges. This topology also supports optimal broadcasting because it contains binomial trees as spanning trees.
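
As a rough illustration of the target topology (not of the Q-edge labeling itself, which the paper defines), the sketch below builds a circulant graph from a given jump sequence with networkx and checks its vertex connectivity. The number of vertices and the jump sequence are arbitrary example values, not ones derived from Q-edge labels.

```python
# Minimal sketch: building a circulant graph from a jump sequence and checking connectivity.
# The jump sequence here is an arbitrary example, not one derived from Q-edge labels.
import networkx as nx

n = 16                      # number of vertices
jumps = [1, 3, 5]           # example jump sequence: i is adjacent to (i +/- j) mod n
G = nx.circulant_graph(n, jumps)

print(G.number_of_nodes(), G.number_of_edges())   # 16 vertices, 16 * 3 = 48 edges
print(nx.node_connectivity(G))                    # vertex connectivity of the network
```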

Restoring Omitted Sentence Constituents in Encyclopedia Documents Using Structural SVM (Structural SVM을 이용한 백과사전 문서 내 생략 문장성분 복원)

  • Hwang, Min-Kook;Kim, Youngtae;Ra, Dongyul;Lim, Soojong;Kim, Hyunki
    • Journal of Intelligence and Information Systems / v.21 no.2 / pp.131-150 / 2015
  • Omission of noun phrases for obligatory cases is a common phenomenon in Korean and Japanese sentences that is not observed in English. When an argument of a predicate can be filled with a noun phrase co-referential with the title, the argument is more easily omitted in encyclopedia texts. The omitted noun phrase is called a zero anaphor or zero pronoun. Encyclopedias like Wikipedia are a major source for information extraction by intelligent application systems such as information retrieval and question answering systems. However, the omission of noun phrases degrades the quality of information extraction. This paper deals with the problem of developing a system that can restore omitted noun phrases in encyclopedia documents. The problem our system addresses is very similar to zero anaphora resolution, one of the important problems in natural language processing. A noun phrase existing in the text that can be used for restoration is called an antecedent; an antecedent must be co-referential with the zero anaphor. While in zero anaphora resolution the candidates for the antecedent are only noun phrases in the same text, in our problem the title is also a candidate. In our system, the first stage detects the zero anaphor. In the second stage, antecedent search is carried out over the candidates. If antecedent search fails, an attempt is made, in the third stage, to use the title as the antecedent. The main characteristic of our system is the use of a structural SVM for finding the antecedent. The noun phrases in the text that appear before the position of the zero anaphor comprise the search space. The main technique used in previous research is to perform binary classification over all the noun phrases in the search space and select the noun phrase classified as an antecedent with the highest confidence. In contrast, we propose that antecedent search be viewed as the problem of assigning antecedent indicator labels to a sequence of noun phrases; in other words, sequence labeling is employed for antecedent search in the text. We are the first to suggest this idea. To perform sequence labeling, we use a structural SVM that receives a sequence of noun phrases as input and returns a sequence of labels as output. An output label takes one of two values: one indicating that the corresponding noun phrase is the antecedent and the other indicating that it is not. The structural SVM we used is based on the modified Pegasos algorithm, which exploits a subgradient descent methodology for optimization. To train and test our system, we selected a set of Wikipedia texts and constructed an annotated corpus providing gold-standard answers such as zero anaphors and their possible antecedents. Training examples were prepared from the annotated corpus and used to train the SVMs and test the system. For zero anaphor detection, sentences are parsed by a syntactic analyzer and omitted subject or object cases are identified; thus the performance of our system depends on that of the syntactic analyzer, which is a limitation of our system. When an antecedent is not found in the text, our system tries to use the title to restore the zero anaphor, based on binary classification with a regular SVM. The experiment showed that our system achieves F1 = 68.58%, which means that a state-of-the-art system can be developed with our technique. It is expected that future work enabling the system to utilize semantic information can lead to a significant performance improvement.
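
To illustrate the output space of the sequence labeling formulation, the sketch below labels a sequence of candidate noun phrases so that exactly one is marked as the antecedent and the total score is maximized. The per-candidate scores are toy numbers standing in for a trained structural SVM's scoring function, and the exactly-one constraint is a simplifying assumption made for the example.

```python
# Toy sketch: antecedent search as labeling a sequence of candidate noun phrases,
# where exactly one candidate receives the "ANT" (antecedent) label.
# Scores are invented stand-ins for a trained structural SVM's scoring function.

def best_labeling(candidates, scores):
    """Return the ANT/NOT label sequence maximizing the total score, where labeling
    candidate i as ANT contributes scores[i] and NOT contributes zero, under the
    constraint that exactly one candidate is labeled ANT."""
    best_idx = max(range(len(candidates)), key=lambda i: scores[i])
    return ["ANT" if i == best_idx else "NOT" for i in range(len(candidates))]

candidates = ["세종대왕", "훈민정음", "집현전"]   # noun phrases preceding the zero anaphor
scores = [2.3, 0.4, 1.1]                          # toy compatibility scores
print(list(zip(candidates, best_labeling(candidates, scores))))
```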

Mention Detection Using Pointer Networks for Coreference Resolution

  • Park, Cheoneum;Lee, Changki;Lim, Soojong
    • ETRI Journal / v.39 no.5 / pp.652-661 / 2017
  • A mention has a noun or noun phrase as its head and forms a chunk that carries meaning, possibly including modifiers. Mention detection refers to the extraction of mentions from a document, and coreference resolution refers to determining which mentions refer to the same entity. Pointer networks, models based on a recurrent neural network encoder-decoder, output a list of elements that point back into the input sequence. In this paper, we propose mention detection using pointer networks. This approach can handle overlapping mentions, which cannot be handled by a sequence labeling approach. The experimental results show that the proposed mention detection approach achieves an F1 of 80.75%, 8% higher than rule-based mention detection, and that coreference resolution achieves a CoNLL F1 of 56.67% (mention boundary), 7.68% higher than coreference resolution using rule-based mention detection.
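
The core of a pointer network is that the decoder's attention distribution over encoder positions is itself the output, so each decoding step "points" at a position of the input sequence (here, a mention boundary). The PyTorch sketch below shows that single pointer step; the dimensions and random inputs are illustrative assumptions, not the authors' full model.

```python
# Minimal sketch: one pointer-network step, where attention over encoder positions
# is used directly as the output distribution (a "pointer" into the input sequence).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PointerStep(nn.Module):
    def __init__(self, enc_dim, dec_dim):
        super().__init__()
        self.w_enc = nn.Linear(enc_dim, dec_dim, bias=False)
        self.w_dec = nn.Linear(dec_dim, dec_dim, bias=False)
        self.v = nn.Linear(dec_dim, 1, bias=False)

    def forward(self, enc_states, dec_state):
        # enc_states: (batch, src_len, enc_dim), dec_state: (batch, dec_dim)
        scores = self.v(torch.tanh(self.w_enc(enc_states) +
                                   self.w_dec(dec_state).unsqueeze(1))).squeeze(-1)
        return F.log_softmax(scores, dim=-1)   # distribution over input positions

pointer = PointerStep(enc_dim=16, dec_dim=16)
enc_states = torch.randn(1, 7, 16)             # e.g., 7 input tokens
dec_state = torch.randn(1, 16)
log_probs = pointer(enc_states, dec_state)
print(log_probs.argmax(dim=-1))                # index of the token pointed to
```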

Mention Detection using Bidirectional LSTM-CRF Model (Bidirectional LSTM-CRF 모델을 이용한 멘션탐지)

  • Park, Cheoneum;Lee, Changki
    • Annual Conference on Human and Language Technology / 2015.10a / pp.224-227 / 2015
  • Coreference resolution links the different expressions that refer to the same entity; such expressions are called mentions, and finding them is called mention detection. Mentions are defined over nouns and noun phrases, and because noun phrases include modifiers, mention detection can be formulated as a sequence labeling problem. Recurrent Neural Network (RNN) models can be applied to sequence labeling, including the Long Short-Term Memory (LSTM) RNN, the LSTM Recurrent CRF (LSTM-CRF), and the Bidirectional LSTM-CRF (Bi-LSTM-CRF). The LSTM-RNN addresses the vanishing gradient problem of conventional RNNs, and the LSTM-CRF adds dependencies among the output labels, making it better suited to sequence labeling. The Bi-LSTM-CRF, which learns from past and future input features together, has recently shown the best performance. Accordingly, this paper proposes applying the Bi-LSTM-CRF to mention detection and presents comparative experiments on these deep learning models.
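
For context on the formulation, the toy sketch below shows how mention spans over a token sequence are encoded as BIO tags, the label scheme a (Bi-)LSTM-CRF tagger would predict. The example sentence and spans are invented for illustration. Note that this scheme cannot represent overlapping mentions, which is the limitation the pointer-network approach above addresses.

```python
# Toy sketch: encoding mention spans as BIO tags for a sequence labeling tagger.
def mentions_to_bio(tokens, mention_spans):
    """mention_spans: list of (start, end) token indices, end exclusive, non-overlapping."""
    tags = ["O"] * len(tokens)
    for start, end in mention_spans:
        tags[start] = "B-MENTION"
        for i in range(start + 1, end):
            tags[i] = "I-MENTION"
    return tags

tokens = ["the", "tall", "man", "saw", "his", "dog"]
spans = [(0, 3), (4, 6)]        # mentions: "the tall man", "his dog"
print(list(zip(tokens, mentions_to_bio(tokens, spans))))
```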
