• Title/Summary/Keyword: Machine Reading Comprehension

41 search results

S2-Net: Korean Machine Reading Comprehension with SRU-based Self-matching Network (S2-Net: SRU 기반 Self-matching Network를 이용한 한국어 기계 독해)

  • Park, Cheoneum;Lee, Changki;Hong, Sulyn;Hwang, Yigyu;Yoo, Taejoon;Kim, Hyunki
    • 한국어정보학회:학술대회논문집 / 2017.10a / pp.35-40 / 2017
  • Machine reading comprehension is the task of understanding a given context and finding the answer to a question within that context. The Simple Recurrent Unit (SRU), like the Gated Recurrent Unit (GRU), uses neural gates to address the vanishing gradient problem of Recurrent Neural Networks (RNNs), and improves on the GRU's speed by removing the previous hidden state from the gate inputs. The Self-matching Network, used in the R-Net model, computes attention weights over its own RNN sequence, allowing it to capture semantically similar context information, which yields an effect similar to coreference resolution. In this paper, we build a Korean machine reading comprehension dataset and propose an $S^2$-Net model that adds a Self-matching layer to an encoder composed of multiple SRU layers. Experimental results show that the proposed $S^2$-Net model achieves EM 65.84% and F1 78.98% on the Korean machine reading comprehension dataset.
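The SRU recurrence summarized in the abstract, in which the gates depend only on the current input and a highway connection carries the input forward, can be sketched as follows. This is a generic illustration of the published SRU formulation, not the paper's implementation, and it assumes the input and hidden dimensions match.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sru_cell(x_seq, W, Wf, bf, Wr, br):
    """Minimal SRU over a sequence of input vectors.

    Unlike a GRU, the forget and reset gates depend only on the
    current input x_t, not on the previous hidden state, which is
    what lets the matrix multiplications for all time steps be
    batched and makes the SRU faster.
    """
    d = W.shape[0]
    c = np.zeros(d)
    outputs = []
    for x in x_seq:
        x_tilde = W @ x                      # candidate state
        f = sigmoid(Wf @ x + bf)             # forget gate (input-only)
        r = sigmoid(Wr @ x + br)             # reset/highway gate
        c = f * c + (1 - f) * x_tilde        # internal cell state
        h = r * np.tanh(c) + (1 - r) * x     # highway connection
        outputs.append(h)
    return np.array(outputs)
```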

Machine Reading Comprehension System to Solve Unanswerable Problems using Method of Mimicking Reading Comprehension Patterns (기계독해 시스템에서 답변 불가능 문제 해결을 위한 독해 패턴 모방 방법)

  • Lee, Yejin;Jang, Youngjin;Lee, Hyeon-gu;Shin, Dongwook;Park, Chanhoon;Kang, Inho;Kim, Harksoo
    • Annual Conference on Human and Language Technology / 2021.10a / pp.139-143 / 2021
  • With the recent development of language models based on large corpora, systems that outperform humans have been proposed across natural language processing. Accordingly, datasets for harder, more complex problems have been released; notably, in machine reading comprehension, datasets have been published that evaluate whether a system can judge a question to be unanswerable. Because deciding that an input cannot be answered is important in real applications, a variety of studies have addressed this problem. In this paper, we propose a machine reading comprehension system that understands the document and effectively identifies unanswerable inputs. Inspired by the reading pattern of humans, who fail to answer correctly when their comprehension of the document and question is low, the proposed model aims to raise the system's comprehension of the document. In experiments on the KLUE-MRC development data, it achieved 71.73% EM and 76.80% Rouge-w.
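For the unanswerable setting the abstract targets, a common SQuAD 2.0-style decision rule compares the best span score against a no-answer score; the sketch below shows that generic rule, not this paper's pattern-mimicking method, and the names and threshold are illustrative.

```python
def pick_answer(span_scores, null_score, tau=0.0):
    """Generic answerability decision: return the best (start, end)
    span unless the no-answer score beats it by threshold tau.

    span_scores: list of (start, end, score) candidates.
    """
    best = max(span_scores, key=lambda s: s[2])
    if null_score - best[2] > tau:
        return None          # judged unanswerable
    return best[:2]          # (start, end) of the chosen span
```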

Multi-level Attention Fusion Network for Machine Reading Comprehension (Multi-level Attention Fusion을 이용한 기계독해)

  • Park, Kwang-Hyeon;Na, Seung-Hoon;Choi, Yun-Su;Chang, Du-Seong
    • Annual Conference on Human and Language Technology / 2018.10a / pp.259-262 / 2018
  • The goal of machine reading comprehension is to enable a machine to understand a given context and answer questions about it. In this paper, we combine multi-level attention with a fusion function that efficiently integrates information, and apply a stochastic multi-step answer scheme in the answer module; the resulting model achieves EM 78.63% and F1 86.36% on the SQuAD dev set.
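A widely used form of such a fusion function combines a token vector with its attended counterpart through element-wise interaction features before projecting back to the model dimension. This is a generic sketch of the idea; the paper's exact fusion function may differ.

```python
import numpy as np

def fuse(x, y, W, b):
    """Generic attention-fusion step: combine a token vector x with
    its attended counterpart y via element-wise interaction features,
    then project back to d dimensions with learned parameters W, b."""
    z = np.concatenate([x, y, x * y, x - y])  # 4d interaction vector
    return np.tanh(W @ z + b)                 # fused d-dimensional output
```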

KorPatELECTRA: A Pre-trained Language Model for Korean Patent Literature to Improve Performance in the Field of Natural Language Processing (Korean Patent ELECTRA)

  • Jang, Ji-Mo;Min, Jae-Ok;Noh, Han-Sung
    • Journal of the Korea Society of Computer and Information / v.27 no.2 / pp.15-23 / 2022
  • In the field of patents, NLP (Natural Language Processing) is a challenging task due to the linguistic specificity of patent literature, so there is an urgent need for a language model optimized for Korean patent documents. Recently, there have been continuous attempts in NLP to build pre-trained language models for specific domains to improve performance on tasks in those domains. Among them, ELECTRA is a pre-trained language model proposed by Google after BERT that increases training efficiency through a new method called RTD (Replaced Token Detection). This paper proposes KorPatELECTRA, pre-trained on a large amount of Korean patent literature. In addition, optimal pre-training was achieved by preprocessing the training corpus according to the characteristics of patent literature and applying a patent-specific vocabulary and tokenizer. To confirm its performance, KorPatELECTRA was tested on NER (Named Entity Recognition), MRC (Machine Reading Comprehension), and patent classification tasks using real patent data, and it achieved the best performance on all three tasks compared to general-purpose language models.
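RTD trains a discriminator to tag each token of a corrupted input as original or replaced. The toy sketch below illustrates only the data side of that objective, with uniform random replacement standing in for ELECTRA's small generator network; all names are illustrative.

```python
import random

def make_rtd_example(tokens, vocab, replace_prob=0.15, seed=0):
    """Build one replaced-token-detection training example:
    corrupt a fraction of tokens and label each position
    1 if it was replaced, 0 if it is the original token."""
    rng = random.Random(seed)
    corrupted, labels = [], []
    for t in tokens:
        if rng.random() < replace_prob:
            r = rng.choice(vocab)
            corrupted.append(r)
            # a sampled token identical to the original counts as 0
            labels.append(0 if r == t else 1)
        else:
            corrupted.append(t)
            labels.append(0)
    return corrupted, labels
```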

The Unsupervised Learning-based Language Modeling of Word Comprehension in Korean

  • Kim, Euhee
    • Journal of the Korea Society of Computer and Information / v.24 no.11 / pp.41-49 / 2019
  • We build an unsupervised machine learning-based language model that estimates the amount of information needed to process words composed of subword-level morphemes and syllables, and investigate whether the reading times of words, reflecting their morphemic and syllabic structures, are predicted by an information-theoretic measure such as surprisal. Specifically, the proposed Morfessor-based unsupervised model is first trained on a large set of sentences from the Sejong Corpus and then applied to estimate the information-theoretic measure for each word in a test set of Korean words. The reading times of these words are taken from the Korean Lexicon Project (KLP) Database. A comparison between the words' information-theoretic measures and the corresponding reading times, using a linear mixed-effects model, reveals a reliable correlation between surprisal and reading time. We conclude that surprisal is positively related to processing effort (i.e., reading time), confirming the surprisal hypothesis.
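Surprisal itself is just the negative log-probability of a unit under some model. As a toy stand-in for the Morfessor-based subword model the abstract describes, a unigram version can be computed like this:

```python
import math
from collections import Counter

def surprisal_model(corpus_tokens):
    """Estimate unigram surprisal, -log2 p(token), from corpus
    frequencies. (A toy stand-in for a Morfessor-style subword
    model; rarer units carry more bits of information.)"""
    counts = Counter(corpus_tokens)
    total = sum(counts.values())

    def surprisal(token):
        p = counts[token] / total
        return -math.log2(p)

    return surprisal
```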

Test Dataset for validating the meaning of Table Machine Reading Language Model (표 기계독해 언어 모형의 의미 검증을 위한 테스트 데이터셋)

  • YU, Jae-Min;Cho, Sanghyun;Kwon, Hyuk-Chul
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.10a / pp.164-167 / 2022
  • In table machine reading comprehension, the knowledge required by language models and the structural form of tables change depending on the domain, causing greater performance degradation than with text data. In this paper, we propose a pre-training data construction method and an adversarial learning method based on selecting meaningful tabular data, in order to build a pre-trained table language model robust to such domain changes. To detect tables that are used only to decorate web documents and carry no structural information among the extracted table data, a heuristic rule was defined to identify header data and select genuine tables. An adversarial learning method between tabular data and infobox data, which contains knowledge about entities, was also applied. Compared to training on the existing unrefined data, refining the data increased F1 by 3.45 and EM by 4.14 on the KorQuAD table data, and F1 by 19.38 and EM by 4.22 on the Spec table QA data.
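The decorative-table filtering could, for instance, rest on a heuristic like the following hypothetical rule; the paper's actual rules are defined over the extracted head data and may look quite different.

```python
def is_layout_table(rows):
    """Hypothetical heuristic: treat a table as decorative page
    layout if it is too small to hold data or its first row does
    not look like a header of short, non-empty labels."""
    if not rows or len(rows) < 2:
        return True
    header = rows[0]
    plausible = all(0 < len(str(c).strip()) <= 30 for c in header)
    return not plausible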

The Effect of Small-Group Cooperative Learning in College Liberal Arts English on English Reading Comprehension, English Reading Motivation and Cooperative Learning Awareness (대학 교양영어 소집단 협동학습이 영어독해력, 영어읽기동기, 협동학습인식에 미치는 영향)

  • Lee, Young-Eun
    • Journal of Digital Convergence / v.18 no.6 / pp.81-91 / 2020
  • The purpose of this study is to analyze the effect of small-group cooperative learning on English reading ability and reading motivation, and the change in learners' perception of cooperative learning, after designing and applying a small-group cooperative learning program for university liberal arts English classes. To this end, 62 freshmen taking a compulsory liberal arts English course at a four-year university in North Chungcheong Province participated from September 2 to December 13, 2019: the experimental group (34 students) took classes based on cooperative learning, while the control group (28 students) took typical lecture-style classes. Learners' English proficiency was examined in two areas: the cognitive domain, measured as English reading ability, and the affective domain, measured as English reading motivation. Learners' pre-experiment English reading ability was measured using excerpts from the national-level academic achievement assessment (second-year high school level). The results are as follows. First, there was a statistically significant difference between the English reading scores of the group that applied small-group cooperative learning and those of the group that did not. Second, there was a difference in English reading motivation scores between the group that applied small-group cooperative learning and the group that did not. Third, the cooperative-learning group's perception of cooperative learning changed between before and after the experiment. This study found that, through cooperative learning in liberal arts English classes, university students' English reading ability, reading motivation, and awareness of cooperative learning increased, which has positive effects on learners' self-identity and related outcomes.

A Hybrid Sentence Alignment Method for Building a Korean-English Parallel Corpus (한영 병렬 코퍼스 구축을 위한 하이브리드 기반 문장 자동 정렬 방법)

  • Park, Jung-Yeul;Cha, Jeong-Won
    • MALSORI / v.68 / pp.95-114 / 2008
  • The recent growing popularity of statistical methods in machine translation requires much larger parallel corpora. Korean-English parallel corpora, however, are not yet sufficiently available, and little research on this subject has been conducted. In this paper we present a hybrid method for aligning sentences in Korean-English parallel corpora. We use bilingual newswire web pages, reading comprehension materials for English learners, computer-related technical documents, and help files of localized software to build a Korean-English parallel corpus. Our hybrid method combines sentence-length-based and word-correspondence-based approaches. We report and evaluate experimental results. Alignment results using a full translation model are very encouraging, especially when applied to an SMT system: a 0.66% BLEU and 9.94% NIST score improvement over the previous method.
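The length-based half of such a hybrid aligner can be sketched as a small dynamic program over 1-1, 1-2, and 2-1 sentence beads. The cost function and the penalty for non-1-1 beads below are illustrative stand-ins, not the paper's actual model.

```python
def align_by_length(src, tgt):
    """Toy length-based sentence alignment: find the cheapest
    sequence of 1-1, 1-2, and 2-1 beads covering both sides,
    where cost grows with character-length mismatch."""
    INF = float("inf")
    n, m = len(src), len(tgt)
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    back = [[None] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0

    def mismatch(a, b):
        # penalty for pairing source text a with target text b
        la, lb = len(a), len(b)
        return abs(la - lb) / max(la + lb, 1)

    for i in range(n + 1):
        for j in range(m + 1):
            if cost[i][j] == INF:
                continue
            for di, dj in ((1, 1), (1, 2), (2, 1)):
                if i + di <= n and j + dj <= m:
                    a = " ".join(src[i:i + di])
                    b = " ".join(tgt[j:j + dj])
                    penalty = 0.2 if (di, dj) != (1, 1) else 0.0
                    c = cost[i][j] + mismatch(a, b) + penalty
                    if c < cost[i + di][j + dj]:
                        cost[i + di][j + dj] = c
                        back[i + di][j + dj] = (di, dj)

    beads, i, j = [], n, m          # backtrack the cheapest path
    while i > 0 or j > 0:
        di, dj = back[i][j]
        beads.append((tuple(src[i - di:i]), tuple(tgt[j - dj:j])))
        i, j = i - di, j - dj
    return beads[::-1]
```

In the paper's hybrid setting, a word-correspondence score would be added to this length cost rather than used on its own.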

Q-Net : Machine Reading Comprehension adding Question Type (Q-Net : 질문 유형을 추가한 기계 독해)

  • Kim, Jeong-Moo;Shin, Chang-Uk;Cha, Jeong-Won
    • Annual Conference on Human and Language Technology / 2018.10a / pp.645-648 / 2018
  • Machine reading comprehension is the task of having a machine understand a given passage and find the answer to a question within it. This paper adds question types to the model to help answer selection. We divide questions into eight types — Person, Location, Date, Number, Why, How, What, and Others — and design the model so that attention is computed between these types and important features of the passage. To evaluate the proposed method, we experiment on a Korean translation of SQuAD and on the K-QuAD dataset built from Korean Wikipedia. Counting partial matches, the proposed model achieves EM 84.650% and F1 86.208%, outperforming the BiDAF model reported in the K-QuAD paper.
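As a rough illustration of the eight-way typing, a keyword-rule classifier might look like the sketch below; the paper uses these types inside an attention mechanism over passage features rather than surface rules, and these English keywords are illustrative stand-ins for Korean question cues.

```python
def question_type(question):
    """Toy rule-based classifier for the eight Q-Net question types.
    Rules are ordered so that more specific cues ("how many")
    fire before more general ones ("how")."""
    rules = [
        ("who", "Person"), ("where", "Location"),
        ("when", "Date"), ("how many", "Number"),
        ("how much", "Number"), ("why", "Why"),
        ("how", "How"), ("what", "What"),
    ]
    q = question.lower()
    for keyword, label in rules:
        if keyword in q:
            return label
    return "Others"
```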

Machine Reading Comprehension based Question Answering Chatbot (기계독해 기반 질의응답 챗봇)

  • Lee, Hyeon-gu;Kim, Jintae;Choi, Maengsik;Kim, Harksoo
    • Annual Conference on Human and Language Technology / 2018.10a / pp.35-39 / 2018
  • A chatbot is a system in which a human and a machine exchange dialogue in natural language. With the recent commercialization of conversational AI assistant systems, there is a growing need to handle both general chat and question answering together. In this paper, we propose an MRC-based question-answering chatbot that handles both in a single model by combining MRC-based question answering with a Transformer-based natural language generation model. The proposed model adds an option to the MRC model for detecting general chat, so that it classifies sentences on its own while performing MRC, and it generates natural-language sentences from the MRC results. Experimental results show that the model distinguishes general chat utterances from questions with high accuracy while maintaining MRC performance, and generates responses appropriate to the classification.
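The single-model chat/QA routing the abstract describes can be pictured as a dispatcher like the one below. The function names are hypothetical stand-ins; in the paper the classification, span extraction, and generation are components of one jointly operating neural model, not separate callables.

```python
def respond(utterance, classify, answer_span, generate):
    """Sketch of the routing in the abstract: one entry point that
    either chats freely or answers from a document. `classify`,
    `answer_span`, and `generate` stand in for the model's
    chit-chat detector, MRC component, and Transformer generator."""
    if classify(utterance) == "chitchat":
        return generate(utterance, None)   # free-form conversational reply
    span = answer_span(utterance)          # MRC over the document
    return generate(utterance, span)       # verbalize the extracted answer
```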
