• Title/Summary/Keyword: Korean machine reading comprehension

S2-Net: Machine reading comprehension with SRU-based self-matching networks

  • Park, Cheoneum;Lee, Changki;Hong, Lynn;Hwang, Yigyu;Yoo, Taejoon;Jang, Jaeyong;Hong, Yunki;Bae, Kyung-Hoon;Kim, Hyun-Ki
    • ETRI Journal / v.41 no.3 / pp.371-382 / 2019
  • Machine reading comprehension is the task of understanding a given context and finding the correct answer within it. The simple recurrent unit (SRU) solves the vanishing gradient problem of recurrent neural networks (RNNs) using neural gates, as gated recurrent units (GRUs) and long short-term memory (LSTM) do; moreover, it removes the previous hidden state from the gate inputs, making it faster than GRU and LSTM. A self-matching network, as used in R-Net, can have an effect similar to coreference resolution because it computes attention weights over its own RNN sequence and can thereby gather context information of similar meaning. In this paper, we construct a dataset for Korean machine reading comprehension and propose an $S^2$-Net model that adds a self-matching layer to a multilayer SRU encoder. The experimental results show that the proposed $S^2$-Net model achieves 68.82% EM and 81.25% F1 as a single model and 70.81% EM and 82.48% F1 as an ensemble on the Korean machine reading comprehension test dataset, and 71.30% EM and 80.37% F1 as a single model and 73.29% EM and 81.54% F1 as an ensemble on the SQuAD dev dataset.
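
To make the speed claim concrete, below is a minimal sketch of the SRU recurrence as the abstract describes it: because the gates depend only on the current input (the previous hidden state is removed from the gate inputs), all matrix multiplications can be precomputed across time steps, leaving only a lightweight elementwise recurrence. Shapes and the highway connection follow the original SRU formulation; this is illustrative, not the authors' implementation.

```python
import torch

def sru_forward(x, W, Wf, bf, Wr, br):
    """One SRU layer. x: (seq_len, batch, d); W, Wf, Wr: (d, d); bf, br: (d,)."""
    seq_len, batch, d = x.shape
    x_tilde = x @ W                    # candidate states, all steps at once
    f = torch.sigmoid(x @ Wf + bf)     # forget gate: input only, no h_{t-1}
    r = torch.sigmoid(x @ Wr + br)     # reset gate: input only, no h_{t-1}
    c = torch.zeros(batch, d)
    outputs = []
    for t in range(seq_len):           # only this cheap loop is sequential
        c = f[t] * c + (1 - f[t]) * x_tilde[t]
        h = r[t] * torch.tanh(c) + (1 - r[t]) * x[t]   # highway connection
        outputs.append(h)
    return torch.stack(outputs), c
```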

Korean Machine Reading Comprehension for Patent Consultation Using BERT (BERT를 이용한 한국어 특허상담 기계독해)

  • Min, Jae-Ok;Park, Jin-Woo;Jo, Yu-Jeong;Lee, Bong-Gun
    • KIPS Transactions on Software and Data Engineering / v.9 no.4 / pp.145-152 / 2020
  • MRC (machine reading comprehension) is the AI NLP task of predicting the answer to a user's query by understanding the relevant document; it can be used in automated consultation services such as chatbots. The BERT (Pre-training of Deep Bidirectional Transformers for Language Understanding) model, which has recently shown high performance in various fields of natural language processing, works in two phases: the first phase pre-trains on large-scale data from a domain, and the second phase fine-tunes the model to make predictions for a specific NLP task. In this paper, we build a patent MRC dataset and show how to construct patent consultation training data for the MRC task. We also propose a method to improve MRC performance using a Patent-BERT model pre-trained on a patent consultation corpus together with language processing techniques suited to machine learning on patent counseling data. Experimental results show that the proposed method improves the quality of answers to patent counseling queries.
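
As a rough illustration of the fine-tuning phase described above, here is how span-extraction inference with a fine-tuned BERT MRC model looks in the Hugging Face transformers API. The authors' Patent-BERT checkpoint is not publicly available, so the multilingual checkpoint and the consultation query below are placeholders.

```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

# Placeholder checkpoint; in the paper this would be Patent-BERT
# fine-tuned on the patent MRC dataset.
name = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForQuestionAnswering.from_pretrained(name)

question = "특허 출원 수수료는 어떻게 납부합니까?"   # hypothetical query
context = "특허 출원 수수료는 특허청에 방문하거나 온라인으로 납부할 수 있다."  # hypothetical document

inputs = tokenizer(question, context, return_tensors="pt", truncation=True)
with torch.no_grad():
    outputs = model(**inputs)

# The answer span is the argmax over the start and end logits.
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
print(tokenizer.decode(inputs["input_ids"][0][start:end + 1]))
```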

Open Domain Machine Reading Comprehension using InferSent (InferSent를 활용한 오픈 도메인 기계독해)

  • Jeong-Hoon, Kim;Jun-Yeong, Kim;Jun, Park;Sung-Wook, Park;Se-Hoon, Jung;Chun-Bo, Sim
    • Smart Media Journal / v.11 no.10 / pp.89-96 / 2022
  • Open-domain machine reading comprehension adds a retrieval component to the model, since no paragraph relevant to a given question is supplied in advance. Document search suffers from degraded performance as the number of documents grows, despite abundant research on word-frequency-based TF-IDF. Paragraph selection struggles to capture paragraph context, including sentence-level characteristics, despite much research on word-based embeddings. Document reading comprehension trains slowly because of the growing number of parameters, despite much research on BERT. To address these three issues, this study uses BM25, which additionally takes document length into account, and InferSent to obtain sentence-level context, and proposes an open-domain machine reading comprehension model with ALBERT to reduce the number of parameters. Experiments were conducted on the SQuAD 1.1 dataset. BM25 outperformed TF-IDF in document search by 3.2%. InferSent outperformed a Transformer in paragraph selection by 0.9%. Finally, as the number of paragraphs increased in document comprehension, ALBERT was 0.4% higher in EM and 0.2% higher in F1.
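
For reference, a minimal sketch of the BM25 scoring function follows; the document-length normalization term (controlled by b and avgdl) is exactly what plain TF-IDF lacks and what the abstract credits for the retrieval gain. The corpus here is a toy stand-in.

```python
import math
from collections import Counter

def bm25_score(query_terms, doc_terms, corpus, k1=1.5, b=0.75):
    """Score one tokenized document against a tokenized query."""
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N
    tf = Counter(doc_terms)
    score = 0.0
    for term in query_terms:
        df = sum(1 for d in corpus if term in d)      # document frequency
        idf = math.log((N - df + 0.5) / (df + 0.5) + 1)
        f = tf[term]
        # Length normalization: longer documents are penalized via b.
        denom = f + k1 * (1 - b + b * len(doc_terms) / avgdl)
        score += idf * f * (k1 + 1) / denom
    return score

corpus = [["machine", "reading"], ["open", "domain", "question"], ["bm25", "ranking"]]
print(bm25_score(["machine", "reading"], corpus[0], corpus))
```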

I-QANet: Improved Machine Reading Comprehension using Graph Convolutional Networks (I-QANet: 그래프 컨볼루션 네트워크를 활용한 향상된 기계독해)

  • Kim, Jeong-Hoon;Kim, Jun-Yeong;Park, Jun;Park, Sung-Wook;Jung, Se-Hoon;Sim, Chun-Bo
    • Journal of Korea Multimedia Society / v.25 no.11 / pp.1643-1652 / 2022
  • Most existing machine reading comprehension research has used Recurrent Neural Network (RNN) and Convolutional Neural Network (CNN) architectures. Because RNNs are slow to train, the Question Answering Network (QANet) was proposed to improve training speed. QANet is a model composed of CNNs and self-attention. CNNs extract semantic and syntactic information well from local context, but are limited in extracting such information from global context. Graph Convolutional Networks (GCNs) extract semantic and syntactic information from global context relatively well. In this paper, to exploit this strength of GCNs, we propose I-QANet, which replaces the CNNs in QANet with GCNs. The proposed model was 1.2 times faster than the baseline on the Stanford Question Answering Dataset (SQuAD) and scored 0.2% higher in Exact Match (EM) and 0.7% higher in F1. Furthermore, on the Korean Question Answering Dataset (KorQuAD), which consists only of Korean, training was 1.1 times faster than the baseline, and EM and F1 were 0.9% and 0.7% higher, respectively.
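
Below is a minimal sketch of the graph-convolution operation that I-QANet substitutes for QANet's convolutions, using the standard Kipf-and-Welling propagation rule H' = ReLU(D^{-1/2} Â D^{-1/2} H W) with self-loops Â = A + I. How the paper builds the token graph is its own contribution and is not reproduced here; the features and adjacency below are toy values.

```python
import torch

def gcn_layer(H, A, W):
    """H: (n_tokens, d_in) node features; A: (n, n) adjacency; W: (d_in, d_out)."""
    A_hat = A + torch.eye(A.size(0))            # add self-loops
    d = A_hat.sum(dim=1)
    D_inv_sqrt = torch.diag(d.pow(-0.5))        # symmetric degree normalization
    return torch.relu(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

H = torch.randn(5, 16)       # 5 tokens, 16-dim features (toy values)
A = (torch.rand(5, 5) > 0.5).float()
A = ((A + A.T) > 0).float()  # symmetrize the toy adjacency
W = torch.randn(16, 8)
print(gcn_layer(H, A, W).shape)  # torch.Size([5, 8])
```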

Korean Machine Reading Comprehension for Patent Consultation using BERT (BERT를 이용한 한국어 특허상담 기계독해)

  • Min, Jae-Ok;Park, Jin-Woo;Jo, Yu-Jeong;Lee, Bong-Gun;Hwang, Kwang-Su;Park, So-Hee
    • Proceedings of the Korea Information Processing Society Conference / 2019.10a / pp.767-769 / 2019
  • Machine reading comprehension (MRC) refers to a machine understanding a document and inferring the content that can answer a user's query; it can be applied to automated consultation services such as chatbots. The BERT model, which has recently shown large performance gains across natural language processing, can be applied to MRC. In this paper, to improve MRC performance in the patent consultation domain, we propose a method that pre-trains a BERT model on a patent consultation corpus and adds language processing techniques suited to machine learning on patent consultation data, and we show that the proposed method improves performance in selecting answers to patent consultation queries.
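
A hedged sketch of the pre-training step this abstract describes, i.e., continuing BERT's masked-language-model training on an in-domain corpus with the Hugging Face Trainer. The corpus file and base checkpoint are placeholders, not the authors' artifacts.

```python
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-multilingual-cased")

# Hypothetical plain-text dump of the patent consultation corpus.
ds = load_dataset("text", data_files={"train": "patent_consultation.txt"})
ds = ds.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=128),
            batched=True, remove_columns=["text"])

# Randomly mask 15% of tokens, the standard BERT MLM recipe.
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="patent-bert", num_train_epochs=1),
    train_dataset=ds["train"],
    data_collator=collator,
)
trainer.train()
```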

Risk Prediction Model of Legal Contract Based on Korean Machine Reading Comprehension (한국어 기계독해 기반 법률계약서 리스크 예측 모델)

  • Lee, Chi Hoon;Woo, Noh Ji;Jeong, Jae Hoon;Joo, Kyung Sik;Lee, Dong Hee
    • Journal of Information Technology Services / v.20 no.1 / pp.131-143 / 2021
  • Commercial transactions, one of the pillars of the capitalist economy, occur countless times every day, especially among small and medium-sized businesses. Yet small and medium-sized enterprises are usually the legal underdogs in commercial contracts and receive no legal support for concluding fair and legitimate agreements. When subcontracting contracts are concluded among small and medium-sized enterprises, 58.2% do not use standard contracts and sign agreements that have not undergone legal review. If the risk of signing can be reduced by analyzing the various risks in a contract and flagging toxic clauses and omitted provisions in advance, small and medium-sized enterprises can be protected from legal threats. We therefore propose a machine-reading-comprehension-based risk prediction model for legal contracts, to minimize legal damage to small and medium-sized business owners in legal blind spots. To build a model specialized for legal contracts, we constructed our own set of legal questions and answers from publicly available legal data. Quantitative evaluation was carried out with metrics such as EM and F1 score by applying fine-tuning and adversarial learning to pre-trained machine reading comprehension models. The best F1 score was 87.93, with an EM of 72.41.
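
The abstract does not specify which adversarial learning method was used, so the following is only one common reading: FGM-style adversarial training, which perturbs the input embeddings in the gradient direction and accumulates a second loss. The loss_fn callback and the single global gradient norm are simplifications of my own.

```python
import torch

def fgm_adversarial_step(model, loss_fn, batch, epsilon=1.0):
    """One FGM step. Assumes a Hugging Face-style model exposing
    get_input_embeddings(); optimizer.step()/zero_grad() happen outside."""
    emb = model.get_input_embeddings().weight
    loss = loss_fn(model, batch)          # e.g., span-extraction loss
    loss.backward()                       # gradients, incl. embedding grads
    grad = emb.grad.detach()
    delta = epsilon * grad / (grad.norm() + 1e-12)  # simplified: one global norm
    emb.data.add_(delta)                  # perturb embeddings toward higher loss
    adv_loss = loss_fn(model, batch)
    adv_loss.backward()                   # accumulate adversarial gradients
    emb.data.sub_(delta)                  # restore the original embeddings
    return loss.item(), adv_loss.item()
```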

S2-Net: Korean Machine Reading Comprehension with SRU-based Self-matching Network (S2-Net: SRU 기반 Self-matching Network를 이용한 한국어 기계 독해)

  • Park, Cheoneum;Lee, Changki;Hong, Sulyn;Hwang, Yigyu;Yoo, Taejoon;Kim, Hyunki
    • Annual Conference on Human and Language Technology / 2017.10a / pp.35-40 / 2017
  • Machine reading comprehension is the problem of understanding a given context and finding the answer to a question within that context. The Simple Recurrent Unit (SRU), like the Gated Recurrent Unit (GRU), uses neural gates to solve the vanishing gradient problem that arises in Recurrent Neural Networks (RNNs), and improves on GRU's speed by removing the previous hidden state from the gate inputs. The Self-matching Network, used in the R-Net model, computes attention weights over its own RNN sequence, so it can see context information of similar meaning and thereby achieve an effect similar to coreference resolution. In this paper, we construct a Korean machine reading comprehension dataset and propose an $S^2$-Net model that adds a self-matching layer to an encoder built from multiple SRU layers. Experimental results show that the proposed $S^2$-Net model achieves 65.84% EM and 78.98% F1 on the Korean machine reading comprehension dataset.
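
As a complement to the SRU sketch given earlier, here is a minimal dot-product rendering of the self-matching idea: each position attends over the passage encoding itself, so coreferent or semantically similar tokens can share context. R-Net's actual layer uses additive attention and a gate over the concatenated input, both omitted here.

```python
import torch
import torch.nn.functional as F

def self_matching(H, Wq, Wk):
    """H: (seq_len, d) passage encoding from the SRU encoder."""
    scores = (H @ Wq) @ (H @ Wk).T / H.size(1) ** 0.5
    attn = F.softmax(scores, dim=-1)         # attention over the sequence itself
    return torch.cat([H, attn @ H], dim=-1)  # token + self-matched context

H = torch.randn(20, 64)                      # toy 20-token encoding
Wq = torch.randn(64, 64); Wk = torch.randn(64, 64)
print(self_matching(H, Wq, Wk).shape)        # torch.Size([20, 128])
```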

KorPatELECTRA: A Pre-trained Language Model for Korean Patent Literature to improve performance in the field of natural language processing (Korean Patent ELECTRA)

  • Jang, Ji-Mo;Min, Jae-Ok;Noh, Han-Sung
    • Journal of the Korea Society of Computer and Information / v.27 no.2 / pp.15-23 / 2022
  • In the patent field, NLP (Natural Language Processing) is challenging because of the linguistic specificity of patent literature, so there is an urgent need for a language model optimized for Korean patent literature. Recently, there have been continuous attempts in NLP to build pre-trained language models for specific domains to improve performance on tasks in those fields. Among them, ELECTRA is a pre-trained language model released by Google after BERT that increases training efficiency with a new method called RTD (Replaced Token Detection). The purpose of this paper is to propose KorPatELECTRA, pre-trained on a large amount of Korean patent literature. In addition, pre-training was optimized by preprocessing the training corpus according to the characteristics of patent literature and applying a patent vocabulary and tokenizer. To confirm its performance, KorPatELECTRA was tested on NER (Named Entity Recognition), MRC (Machine Reading Comprehension), and patent classification tasks using real patent data, and it achieved the best performance on all three tasks compared with general-purpose language models.
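
For orientation, a schematic of the RTD objective mentioned above: a small generator proposes replacements at masked positions, and the discriminator is trained to label every token of the corrupted sequence as original or replaced. Real ELECTRA samples from the generator and runs the discriminator on the corrupted sequence; both are collapsed into plain tensors here, and none of this is the KorPatELECTRA code.

```python
import torch
import torch.nn.functional as F

def rtd_loss(gen_logits, disc_logits, input_ids, masked_positions):
    """gen_logits: (seq, vocab) generator predictions;
    disc_logits: (seq,) discriminator 'replaced?' scores on the corrupted
    sequence; masked_positions: boolean mask of shape (seq,)."""
    sampled = gen_logits.argmax(dim=-1)        # greedy here; ELECTRA samples
    corrupted = input_ids.clone()
    corrupted[masked_positions] = sampled[masked_positions]
    labels = (corrupted != input_ids).float()  # 1 where the token was replaced
    return F.binary_cross_entropy_with_logits(disc_logits, labels)
```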

The Unsupervised Learning-based Language Modeling of Word Comprehension in Korean

  • Kim, Euhee
    • Journal of the Korea Society of Computer and Information / v.24 no.11 / pp.41-49 / 2019
  • We build an unsupervised machine-learning-based language model that can estimate the amount of information needed to process words consisting of subword-level morphemes and syllables. We then investigate whether the reading times of words, reflecting their morphemic and syllabic structure, are predicted by an information-theoretic measure such as surprisal. Specifically, the proposed Morfessor-based unsupervised machine learning model is first trained on a large set of sentences from the Sejong Corpus and then applied to estimate the information-theoretic measure for each word in a test set of Korean words. The reading times of the words in the test set are taken from the Korean Lexicon Project (KLP) database. A comparison between the words' information-theoretic measures and the corresponding reading times using a linear mixed-effects model reveals a reliable correlation between surprisal and reading time. We conclude that surprisal is positively related to processing effort (i.e., reading time), confirming the surprisal hypothesis.
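
As a pointer to the measure involved, surprisal is simply -log2 P(unit), estimated here from unigram counts over a toy segmented corpus; the paper instead derives probabilities from a Morfessor model trained on the Sejong Corpus, which this sketch does not reproduce.

```python
import math
from collections import Counter

corpus = ["나는 학교 에 간다", "학교 는 크다"]   # toy pre-segmented corpus
counts = Counter(tok for sent in corpus for tok in sent.split())
total = sum(counts.values())

def surprisal(token):
    p = counts[token] / total            # unigram probability estimate
    return -math.log2(p)                 # bits of information

print(surprisal("학교"))  # ≈ 1.81 bits (2 of 7 tokens)
```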