• Title/Summary/Keyword: Machine Reading Comprehension

41 results

S2-Net: Machine reading comprehension with SRU-based self-matching networks

  • Park, Cheoneum;Lee, Changki;Hong, Lynn;Hwang, Yigyu;Yoo, Taejoon;Jang, Jaeyong;Hong, Yunki;Bae, Kyung-Hoon;Kim, Hyun-Ki
    • ETRI Journal / v.41 no.3 / pp.371-382 / 2019
  • Machine reading comprehension is the task of understanding a given context and finding the correct answer within it. The simple recurrent unit (SRU) solves the vanishing gradient problem of recurrent neural networks (RNNs) with neural gates, as the gated recurrent unit (GRU) and long short-term memory (LSTM) do; moreover, it removes the previous hidden state from the gate inputs, making it faster than GRU and LSTM. A self-matching network, as used in R-Net, can have an effect similar to coreference resolution because it computes attention weights over its own RNN sequence and thereby gathers context information with similar meaning. In this paper, we construct a dataset for Korean machine reading comprehension and propose an $S^2$-Net model that adds a self-matching layer to a multilayer SRU encoder. Experimental results show that the proposed $S^2$-Net model achieves 68.82% EM and 81.25% F1 (single) and 70.81% EM and 82.48% F1 (ensemble) on the Korean machine reading comprehension test set, and 71.30% EM and 80.37% F1 (single) and 73.29% EM and 81.54% F1 (ensemble) on the SQuAD dev set.
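
The self-matching mechanism described above, in which a sequence attends over its own encoder states, can be illustrated with a short PyTorch sketch. This is a generic R-Net-style self-matching layer, not the authors' $S^2$-Net code; the GRU below stands in for the multilayer SRU encoder.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfMatchingLayer(nn.Module):
    """R-Net-style self-matching: a sequence attends over its own encoder states."""
    def __init__(self, hidden_size):
        super().__init__()
        self.query = nn.Linear(hidden_size, hidden_size, bias=False)
        self.key = nn.Linear(hidden_size, hidden_size, bias=False)

    def forward(self, h):                                      # h: (batch, seq, hidden)
        scores = self.query(h) @ self.key(h).transpose(1, 2)   # (batch, seq, seq)
        weights = F.softmax(scores, dim=-1)                    # attention over the sequence itself
        context = weights @ h                                  # aggregate similar positions
        return torch.cat([h, context], dim=-1)                 # (batch, seq, 2 * hidden)

# Usage: encoder outputs feed the self-matching layer; a GRU stands in for the SRU.
encoder = nn.GRU(128, 128, num_layers=2, batch_first=True)
h, _ = encoder(torch.randn(4, 50, 128))                        # (batch=4, seq=50, dim=128)
matched = SelfMatchingLayer(128)(h)                            # (4, 50, 256)
```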

Machine Reading Comprehension-based Question and Answering System for Search and Analysis of Safety Standards (안전기준의 검색과 분석을 위한 기계독해 기반 질의응답 시스템)

  • Kim, Minho;Cho, Sanghyun;Park, Dugkeun;Kwon, Hyuk-Chul
    • Journal of Korea Multimedia Society / v.23 no.2 / pp.351-360 / 2020
  • If various unreasonable safety standards are preemptively and effectively readjusted, the risk of accidents can be reduced. In this paper, we propose a machine reading comprehension-based safety-standard question-answering system to support the effective search and analysis of safety standards for their integrated and systematic management. The proposed model finds documents related to a safety-standard question across various laws and regulations and then divides those documents into provisions. Only the provisions likely to contain the answer are selected, and a BERT-based machine reading comprehension model then finds answers to questions about safety standards. Applied to the KorQuAD dataset, the proposed safety-standard Q&A system achieves an EM of 40.42% and an F1 of 55.34%.
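
The BERT-based reader stage described above can be sketched with the Hugging Face transformers API. The checkpoint name (klue/bert-base, a public Korean BERT) and the question and provision strings are stand-ins, not the paper's resources:

```python
# Extractive QA with a BERT reader; "klue/bert-base" is a public Korean
# checkpoint used as a stand-in, and the strings below are placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("klue/bert-base")
model = AutoModelForQuestionAnswering.from_pretrained("klue/bert-base")

question = "..."   # a safety-standard question (placeholder)
provision = "..."  # a provision selected from the relevant regulation (placeholder)

inputs = tokenizer(question, provision, return_tensors="pt", truncation=True)
with torch.no_grad():
    outputs = model(**inputs)

# The reader predicts start and end token positions; decode the span between them.
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```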

VS3-NET: Neural variational inference model for machine-reading comprehension

  • Park, Cheoneum;Lee, Changki;Song, Heejun
    • ETRI Journal / v.41 no.6 / pp.771-781 / 2019
  • We propose VS3-NET, a model that solves the machine-reading comprehension task of searching for an appropriate answer to a question in a given context. VS3-NET trains latent variables for each question with variational inference, building on simple recurrent unit-based sentence encoders and self-matching networks. Question types vary, and the answer depends on the type of question. To perform efficient inference and learning, we introduce neural question-type models to approximate the prior and posterior distributions of the latent variables, and we use these approximated distributions to optimize a reparameterized variational lower bound. The context given in machine-reading comprehension usually comprises several sentences, and performance degrades as the context grows longer; we therefore model a hierarchical structure using sentence encoding. Experimental results show that the proposed VS3-NET model achieves an exact-match score of 76.8% and an F1 score of 84.5% on the SQuAD test set.
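
The reparameterized variational lower bound mentioned above follows the standard recipe: sample the latent as z = mu + sigma * eps with eps drawn from N(0, I), so that gradients flow through the posterior parameters. A minimal sketch of that machinery, independent of the VS3-NET architecture:

```python
import torch

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps, keeping the sample differentiable in mu, sigma."""
    std = torch.exp(0.5 * logvar)
    return mu + torch.randn_like(std) * std

def kl_divergence(mu, logvar):
    """KL(q(z|x) || N(0, I)), the regularizer in the variational lower bound."""
    return -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1)

# ELBO = E_q[log p(answer | context, z)] - KL(q || prior): maximizing it trains
# the latent question representation together with the answer predictor.
mu, logvar = torch.randn(8, 16), torch.randn(8, 16)   # produced by a posterior network
z = reparameterize(mu, logvar)                        # one latent sample per question
answer_nll = torch.tensor(0.0)                        # placeholder for the answer-span loss
elbo_loss = answer_nll + kl_divergence(mu, logvar).mean()
```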

Korean Machine Reading Comprehension for Patent Consultation Using BERT (BERT를 이용한 한국어 특허상담 기계독해)

  • Min, Jae-Ok;Park, Jin-Woo;Jo, Yu-Jeong;Lee, Bong-Gun
    • KIPS Transactions on Software and Data Engineering / v.9 no.4 / pp.145-152 / 2020
  • Machine reading comprehension (MRC) is the AI natural language processing task of predicting the answer to a user's query by understanding a relevant document; it can be used in automated consultation services such as chatbots. BERT (Pre-training of Deep Bidirectional Transformers for Language Understanding), which has recently shown high performance across many fields of natural language processing, is trained in two phases: pre-training on large corpora of each domain, and fine-tuning to solve each downstream NLP task as a prediction problem. In this paper, we build a patent MRC dataset and show how to construct patent-consultation training data for the MRC task. We then propose a method that improves MRC performance by using a Patent-BERT model pre-trained on a patent-consultation corpus together with language processing techniques suited to machine learning on patent-counseling data. Experiments show that the proposed method improves performance in answering patent-counseling queries.
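
The two-phase recipe described above, continued pre-training on a domain corpus followed by task fine-tuning, can be sketched with the transformers Trainer API. The checkpoint and corpus file names below are placeholders, not the paper's resources:

```python
# Phase 1: continued masked-language-model pre-training on a domain corpus.
# "klue/bert-base" and "patent_corpus.txt" are illustrative placeholders.
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("klue/bert-base")
model = AutoModelForMaskedLM.from_pretrained("klue/bert-base")

corpus = load_dataset("text", data_files={"train": "patent_corpus.txt"})
tokenized = corpus["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="patent-bert", num_train_epochs=1),
    train_dataset=tokenized,
    # Randomly masks 15% of tokens per batch, the standard BERT objective.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15),
)
trainer.train()   # phase 2 would fine-tune this checkpoint with a QA head
```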

Open Domain Machine Reading Comprehension using InferSent (InferSent를 활용한 오픈 도메인 기계독해)

  • Jeong-Hoon, Kim;Jun-Yeong, Kim;Jun, Park;Sung-Wook, Park;Se-Hoon, Jung;Chun-Bo, Sim
    • Smart Media Journal / v.11 no.10 / pp.89-96 / 2022
  • An open-domain machine reading comprehension model adds a paragraph-retrieval step for the case where no paragraph is supplied with a question. Document retrieval loses performance as the number of documents grows, despite abundant research on word-frequency-based TF-IDF. Paragraph selection often fails to capture paragraph context, including sentence-level characteristics, despite much research on word-based embeddings. Document reading comprehension with BERT learns slowly because of its large number of parameters. To address these three issues, this study uses BM25, which also takes sentence length into account, uses InferSent to obtain sentence context, and proposes an open-domain machine reading comprehension model with ALBERT to reduce the number of parameters. Experiments were conducted on the SQuAD 1.1 dataset. BM25 outperformed TF-IDF in document retrieval by 3.2%. InferSent outperformed a Transformer in paragraph selection by 0.9%. Finally, as the number of paragraphs increased in document comprehension, ALBERT scored 0.4% higher in EM and 0.2% higher in F1.
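
BM25, used above for document retrieval, differs from plain TF-IDF mainly through term-frequency saturation and length normalization. A from-scratch sketch of standard Okapi BM25 scoring (with the common defaults k1 = 1.5 and b = 0.75; not the paper's exact retrieval code):

```python
import math
from collections import Counter

def bm25_scores(query_tokens, docs_tokens, k1=1.5, b=0.75):
    """Score each tokenized document against the query with Okapi BM25.

    The (k1 + 1) saturation and the b-weighted length normalization are
    what distinguish BM25 from plain TF-IDF."""
    N = len(docs_tokens)
    avgdl = sum(len(d) for d in docs_tokens) / N
    # Document frequency for each query term.
    df = {t: sum(1 for d in docs_tokens if t in d) for t in set(query_tokens)}
    scores = []
    for doc in docs_tokens:
        tf = Counter(doc)
        score = 0.0
        for t in query_tokens:
            if tf[t] == 0:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            norm = tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(doc) / avgdl))
            score += idf * norm
        scores.append(score)
    return scores

docs = [["machine", "reading"], ["open", "domain", "reading"], ["bm25", "ranking"]]
print(bm25_scores(["open", "reading"], docs))   # highest score for the second doc
```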

HTML Tag Depth Embedding: An Input Embedding Method of the BERT Model for Improving Web Document Reading Comprehension Performance (HTML 태그 깊이 임베딩: 웹 문서 기계 독해 성능 개선을 위한 BERT 모델의 입력 임베딩 기법)

  • Mok, Jin-Wang;Jang, Hyun Jae;Lee, Hyun-Seob
    • Journal of Internet of Things and Convergence / v.8 no.5 / pp.17-25 / 2022
  • Recently, a massive amount of data has been generated as the number of edge devices increases; in particular, the number of raw, unstructured HTML documents has grown. MRC (Machine Reading Comprehension), in which a natural language processing model finds the important information within an HTML document, is therefore becoming more important. In this paper, we propose HTDE (HTML Tag Depth Embedding), a method that allows BERT to learn the depth of the HTML document structure. HTDE builds a tag stack from the HTML document for each BERT input token and extracts the depth information from it. An HTML embedding layer that takes each token's depth as input is then added to BERT's input-embedding step. Because tokenization with HTDE identifies the HTML document structure through the relationships among surrounding tokens, HTDE improves BERT's accuracy on HTML documents. Finally, we demonstrate that the proposed idea achieves higher accuracy than the conventional BERT embeddings.
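
The tag-stack depth computation behind HTDE can be illustrated with Python's standard HTMLParser: push on each opening tag, pop on each closing tag, and record the stack height wherever text occurs. The depth-embedding layer at the end is a hypothetical stand-in for the paper's input-embedding change:

```python
import torch.nn as nn
from html.parser import HTMLParser

class DepthTracker(HTMLParser):
    """Record the tag-stack depth for each run of text in an HTML document."""
    def __init__(self):
        super().__init__()
        self.stack, self.spans = [], []       # spans holds (text, depth) pairs

    def handle_starttag(self, tag, attrs):
        self.stack.append(tag)                # push on every opening tag

    def handle_endtag(self, tag):
        if self.stack:
            self.stack.pop()                  # pop on every closing tag

    def handle_data(self, data):
        if data.strip():
            self.spans.append((data.strip(), len(self.stack)))

p = DepthTracker()
p.feed("<html><body><div><p>nested text</p></div>shallow text</body></html>")
print(p.spans)   # [('nested text', 4), ('shallow text', 2)]

# Hypothetical embedding-side change: a learned depth embedding summed into
# BERT's input embeddings alongside the token/position/segment embeddings.
depth_embedding = nn.Embedding(num_embeddings=64, embedding_dim=768)
```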

I-QANet: Improved Machine Reading Comprehension using Graph Convolutional Networks (I-QANet: 그래프 컨볼루션 네트워크를 활용한 향상된 기계독해)

  • Kim, Jeong-Hoon;Kim, Jun-Yeong;Park, Jun;Park, Sung-Wook;Jung, Se-Hoon;Sim, Chun-Bo
    • Journal of Korea Multimedia Society / v.25 no.11 / pp.1643-1652 / 2022
  • Most existing machine reading research has used Recurrent Neural Network (RNN) and Convolutional Neural Network (CNN) algorithms. Among them, RNNs are slow to train, and the Question Answering Network (QANet) was proposed to improve training speed. QANet is composed of CNNs and self-attention. CNNs extract semantic and syntactic information well from the local corpus, but there is a limit to extracting such information from the global corpus. Graph Convolutional Networks (GCNs) extract semantic and syntactic information relatively well from the global corpus. In this paper, to take advantage of this strength of GCNs, we propose I-QANet, which replaces the CNN in QANet with a GCN. The proposed model trained 1.2 times faster than the baseline on the Stanford Question Answering Dataset (SQuAD) and scored 0.2% higher in Exact Match (EM) and 0.7% higher in F1. Furthermore, on the Korean Question Answering Dataset (KorQuAD), which consists only of Korean, training was 1.1 times faster than the baseline, and EM and F1 were 0.9% and 0.7% higher, respectively.
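
A graph convolutional layer of the kind substituted into QANet computes H' = relu(A_hat H W), where A_hat is the degree-normalized adjacency matrix with self-loops (the Kipf and Welling formulation). A minimal sketch, independent of the I-QANet specifics:

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph convolution: H' = relu(A_hat @ H @ W), A_hat = D^-1/2 (A+I) D^-1/2."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, h, adj):                        # h: (n, in_dim), adj: (n, n)
        a = adj + torch.eye(adj.size(0))              # add self-loops
        d_inv_sqrt = a.sum(dim=1).pow(-0.5)           # D^{-1/2} of the augmented graph
        a_hat = d_inv_sqrt[:, None] * a * d_inv_sqrt[None, :]
        return torch.relu(a_hat @ self.linear(h))     # aggregate neighbors, then transform

# Usage: tokens as nodes, with edges from, e.g., dependency parses (hypothetical).
h = torch.randn(5, 64)                                # 5 token nodes
adj = (torch.rand(5, 5) > 0.5).float()                # hypothetical token graph
out = GCNLayer(64, 64)(h, adj)
```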

Korean Machine Reading Comprehension for Patent Consultation using BERT (BERT를 이용한 한국어 특허상담 기계독해)

  • Min, Jae-Ok;Park, Jin-Woo;Jo, Yu-Jeong;Lee, Bong-Gun;Hwang, Kwang-Su;Park, So-Hee
    • Proceedings of the Korea Information Processing Society Conference / 2019.10a / pp.767-769 / 2019
  • Machine reading comprehension (MRC) refers to a machine understanding a document and inferring content that can answer a user's query; it can be applied to automated consultation services such as chatbots. The BERT model, which has recently shown large performance gains in natural language processing, can be applied to machine reading comprehension. In this paper, to improve MRC task performance in the patent consultation domain, we propose a method that combines a BERT model pre-trained on a patent consultation corpus with language processing techniques suited to machine learning on patent consultation data, and we show that the proposed method improves performance in determining answers to patent consultation queries.

Risk Prediction Model of Legal Contract Based on Korean Machine Reading Comprehension (한국어 기계독해 기반 법률계약서 리스크 예측 모델)

  • Lee, Chi Hoon;Woo, Noh Ji;Jeong, Jae Hoon;Joo, Kyung Sik;Lee, Dong Hee
    • Journal of Information Technology Services / v.20 no.1 / pp.131-143 / 2021
  • Commercial transactions, one of the pillars of the capitalist economy, occur countless times every day, especially among small and medium-sized enterprises (SMEs). However, SMEs are usually the legal underdogs in commercial contracts and receive no legal support for concluding fair and legitimate agreements. When subcontracting contracts are concluded among SMEs, 58.2% do not use standard contracts and sign agreements that have not undergone legal review. To support fair and legitimate contracts for SMEs, the risks in a contract can be analyzed and toxic or omitted clauses flagged in advance, reducing the risk of signing and protecting SMEs from legal threats. We propose a risk prediction model for legal contracts based on machine reading comprehension to minimize legal damage to SME owners in legal blind spots. We built our own set of legal questions and answers from publicly disclosed legal data to construct a model specialized for legal contracts. Quantitative verification was carried out on metrics such as EM and F1 score by applying fine-tuning and adversarial training to pre-trained machine reading models. The highest F1 score was 87.93, with an EM of 72.41.
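
The abstract does not specify how its adversarial training is implemented; a common choice in MRC fine-tuning is FGM, which perturbs the embedding weights along the loss gradient, backpropagates the adversarial loss, and then restores the weights. The sketch below is that generic FGM step, assuming a transformers-style model, and is not the paper's confirmed method:

```python
import torch

def fgm_adversarial_step(model, loss_fn, batch, epsilon=1.0):
    """One FGM pass: perturb the embedding weights along the loss gradient,
    backpropagate the adversarial loss, then restore the weights.

    Assumes a transformers-style model; loss_fn is a user-supplied closure
    that computes the MRC loss for a batch."""
    emb = model.get_input_embeddings().weight
    loss = loss_fn(model, batch)
    loss.backward()                                   # gradients for the clean loss

    backup = emb.data.clone()
    norm = emb.grad.norm()
    if norm != 0:
        emb.data.add_(epsilon * emb.grad / norm)      # worst-case-direction perturbation

    adv_loss = loss_fn(model, batch)
    adv_loss.backward()                               # accumulates adversarial gradients
    emb.data = backup                                 # restore before the optimizer step
    return loss.item() + adv_loss.item()
```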

S2-Net: Korean Machine Reading Comprehension with SRU-based Self-matching Network (S2-Net: SRU 기반 Self-matching Network를 이용한 한국어 기계 독해)

  • Park, Cheoneum;Lee, Changki;Hong, Sulyn;Hwang, Yigyu;Yoo, Taejoon;Kim, Hyunki
    • Annual Conference on Human and Language Technology / 2017.10a / pp.35-40 / 2017
  • Machine reading comprehension is the task of understanding a given context and finding an answer to a question within that context. The Simple Recurrent Unit (SRU), like the Gated Recurrent Unit (GRU), uses neural gates to solve the vanishing gradient problem that arises in Recurrent Neural Networks (RNNs), and it is faster than GRU because it removes the previous hidden state from the gate inputs. The self-matching network, used in the R-Net model, computes attention weights over its own RNN sequence, so it can see context information with similar meaning and thus achieve an effect similar to coreference resolution. In this paper, we construct a Korean machine reading comprehension dataset and propose an $S^2$-Net model that adds a self-matching layer to an encoder built from multiple SRU layers. Experimental results show that the proposed $S^2$-Net model achieves 65.84% EM and 78.98% F1 on the Korean machine reading comprehension dataset.
