• Title/Summary/Keyword: Open-domain QA


Misinformation Detection and Rectification Based on QA System and Text Similarity with COVID-19

  • Insup Lim;Namjae Cho
    • Journal of Information Technology Applications and Management / v.28 no.5 / pp.41-50 / 2021
  • As COVID-19 spread widely and rapidly, the amount of misinformation also grew, a phenomenon the WHO has called an "infodemic". The purpose of this research is to develop detection and rectification of COVID-19 misinformation based on an open-domain QA system and text similarity. Nine testing conditions were used. For the open-domain QA system, six conditions combined different dataset configurations (scientific, social media and news, and both combined) with two methods of choosing the answer: taking the single top answer generated by the QA system, or voting among its top three answers. The remaining three conditions used a closed-domain QA system with the same dataset configurations. The best result, 76%, was obtained using all datasets with voting over the top three answers, outperforming the closed-domain model by 16%.
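
A minimal sketch of the voting-plus-similarity idea described above, assuming a hypothetical `qa_top_answers()` stand-in for the open-domain QA system; a claim is accepted when it agrees with the voted answer under a TF-IDF cosine-similarity threshold (the threshold value is illustrative, not from the paper).

```python
from collections import Counter
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def qa_top_answers(question: str, k: int = 3) -> list[str]:
    """Hypothetical stand-in for the open-domain QA system's top-k answers."""
    raise NotImplementedError

def check_claim(question: str, claim: str, threshold: float = 0.5) -> tuple[bool, str]:
    """Return (is_consistent, voted_answer) for a claim about `question`."""
    answers = qa_top_answers(question, k=3)
    # Majority vote over the top-3 answers; ties fall back to the top-ranked answer.
    voted, _ = Counter(answers).most_common(1)[0]
    # Compare the claim with the voted answer via TF-IDF cosine similarity.
    tfidf = TfidfVectorizer().fit([claim, voted])
    sim = cosine_similarity(tfidf.transform([claim]), tfidf.transform([voted]))[0, 0]
    return sim >= threshold, voted  # the voted answer doubles as the rectified text
```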

Towards a small language model powered chain-of-reasoning for open-domain question answering

  • Jihyeon Roh;Minho Kim;Kyoungman Bae
    • ETRI Journal / v.46 no.1 / pp.11-21 / 2024
  • We focus on open-domain question-answering tasks that involve a chain-of-reasoning, which are primarily implemented using large language models. With an emphasis on cost-effectiveness, we designed EffiChainQA, an architecture centered on the use of small language models. We employed a retrieval-based language model to address the limitations of large language models, such as the hallucination issue and the lack of updated knowledge. To enhance reasoning capabilities, we introduced a question decomposer that leverages a generative language model and serves as a key component in the chain-of-reasoning process. To generate training data for our question decomposer, we leveraged ChatGPT, which is known for its data augmentation ability. Comprehensive experiments were conducted using the HotpotQA dataset. Our method outperformed several established approaches, including the Chain-of-Thoughts approach, which is based on large language models. Moreover, our results are on par with those of state-of-the-art Retrieve-then-Read methods that utilize large language models.
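
A rough sketch, under stated assumptions, of the chain-of-reasoning loop the abstract describes: a question decomposer produces sub-questions, each sub-question is answered by a retriever and reader, and earlier answers are substituted into later sub-questions. The `decompose`, `retrieve`, and `read` callables are hypothetical placeholders, not the paper's components.

```python
from typing import Callable

def chain_of_reasoning(
    question: str,
    decompose: Callable[[str], list[str]],   # small generative LM used as a decomposer
    retrieve: Callable[[str], list[str]],    # retrieval component returning passages
    read: Callable[[str, list[str]], str],   # reader that produces an answer
) -> str:
    """Answer a multi-hop question by decomposing it and chaining sub-answers."""
    sub_questions = decompose(question)
    answers: list[str] = []
    for sub_q in sub_questions:
        # Let later hops reference earlier answers, e.g. "Where was #1 born?"
        for i, prev in enumerate(answers, start=1):
            sub_q = sub_q.replace(f"#{i}", prev)
        passages = retrieve(sub_q)
        answers.append(read(sub_q, passages))
    return answers[-1]  # the final hop's answer answers the original question
```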

REALM for Open-domain Question Answering of Korean (REALM을 이용한 한국어 오픈도메인 질의 응답)

  • Kan, Dong-Chan;Na, Seung-Hoon;Choi, Yun-Su;Lee, Hye-Woo;Chang, Du-Seong
    • Annual Conference on Human and Language Technology / 2020.10a / pp.192-196 / 2020
  • Fueled by recent advances in deep learning, progress on open-domain QA systems has accelerated. In particular, for the approach that combines an information retrieval (IR) system with an extraction-based machine reading comprehension model (IRQA), research on dense retrieval, which encodes both documents and questions as continuous vectors, has raised retrieval performance far above that of traditional keyword-based IR systems, and open-domain QA performance has improved accordingly. In this paper, we pre-train the dense retrieval models ORQA and REALM on a lightweight BERT model and report QA and retrieval performance on Korean open-domain QA. Experimental results show better QA performance with fewer documents than previous IRQA experiments built on the keyword-based IR system BM25, and retrieval results that surpass BM25.
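
A minimal sketch of the dense-retrieval step described above (not the paper's ORQA/REALM training code): questions and documents are encoded into continuous vectors and documents are ranked by inner product, which is the maximum-inner-product search dense retrievers use in place of keyword matching. The `encode` function is a hypothetical placeholder for a lightweight BERT encoder.

```python
import numpy as np

def encode(texts: list[str]) -> np.ndarray:
    """Hypothetical lightweight-BERT encoder returning one vector per text."""
    raise NotImplementedError

def dense_retrieve(question: str, doc_vectors: np.ndarray, k: int = 5) -> np.ndarray:
    """Rank pre-encoded documents by inner product with the encoded question."""
    q = encode([question])[0]          # (d,)
    scores = doc_vectors @ q           # (num_docs,)
    return np.argsort(-scores)[:k]     # indices of the top-k documents

# Usage sketch: documents are encoded once offline, questions at query time.
# doc_vectors = encode(corpus)                       # (num_docs, d)
# top_doc_ids = dense_retrieve(question, doc_vectors)
```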


Graph Reasoning and Context Fusion for Multi-Task, Multi-Hop Question Answering (다중 작업, 다중 홉 질문 응답을 위한 그래프 추론 및 맥락 융합)

  • Lee, Sangui;Kim, Incheol
    • KIPS Transactions on Software and Data Engineering / v.10 no.8 / pp.319-330 / 2021
  • Recently, in the field of open-domain natural language question answering, multi-task, multi-hop question answering has been studied extensively. In this paper, we propose a novel deep neural network model that uses hierarchical graphs to answer such multi-task, multi-hop questions effectively. The proposed model extracts different levels of contextual information from multiple paragraphs using hierarchical graphs and graph neural networks, and then uses them to predict the answer type, supporting sentences, and answer span simultaneously. In experiments on the HotpotQA benchmark dataset, we demonstrate the high performance and positive effects of the proposed model.
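
The multi-task output described above (answer type, supporting sentences, and answer span predicted simultaneously) is typically trained with a summed loss over shared representations. The PyTorch sketch below shows only that joint-loss idea with hypothetical task heads; it is not the paper's hierarchical-graph model.

```python
import torch
import torch.nn as nn

class MultiTaskQAHeads(nn.Module):
    """Shared representations feed three task heads trained with a summed loss."""
    def __init__(self, hidden: int, num_answer_types: int = 3):
        super().__init__()
        self.type_head = nn.Linear(hidden, num_answer_types)   # e.g. span / yes / no
        self.support_head = nn.Linear(hidden, 1)                # per-sentence support score
        self.span_head = nn.Linear(hidden, 2)                   # start / end logits per token

    def forward(self, pooled, sent_reprs, token_reprs):
        type_logits = self.type_head(pooled)                            # (B, num_types)
        support_logits = self.support_head(sent_reprs).squeeze(-1)      # (B, num_sents)
        start_logits, end_logits = self.span_head(token_reprs).unbind(-1)  # (B, seq_len) each
        return type_logits, support_logits, start_logits, end_logits

def joint_loss(outputs, targets):
    """Sum of the three task losses; support targets are 0/1 floats per sentence."""
    type_logits, support_logits, start_logits, end_logits = outputs
    type_y, support_y, start_y, end_y = targets
    ce, bce = nn.CrossEntropyLoss(), nn.BCEWithLogitsLoss()
    return (ce(type_logits, type_y) + bce(support_logits, support_y)
            + ce(start_logits, start_y) + ce(end_logits, end_y))
```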

LUKE based Korean Dense Passage Retriever (LUKE 기반의 한국어 문서 검색 모델 )

  • Dongryul Ko;Changwon Kim;Jaieun Kim;Sanghyun Park
    • Annual Conference on Human and Language Technology / 2022.10a / pp.131-134 / 2022
  • Question answering has long been an actively studied task in natural language processing, and with the recent success of dense retrievers, open-domain question answering, which draws on vast sources such as Wikipedia to answer questions, is being studied intensively. DPR (Dense Passage Retriever), a representative retrieval model, is a bi-encoder retriever that retrieves documents by comparing the similarity between vectors embedded by a BERT-based query encoder and passage encoder. However, a retriever built on a language model such as BERT, which receives no additional training on entity information, performs poorly on questions where entity information matters. In this paper, to improve answer performance on entity-centric questions, we propose a retriever based on LUKE, a model that understands entities well. We build a training dataset for a Korean retriever from the KorQuAD 1.0 dataset and compare the retrieval performance of different retrievers to demonstrate the improvement achieved by the proposed method.
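
A hedged sketch of the bi-encoder (DPR-style) training objective the abstract refers to: question and passage embeddings are scored by inner product and trained with in-batch negatives via cross-entropy. The encoders themselves (BERT- or LUKE-based) are abstracted away as precomputed embedding tensors.

```python
import torch
import torch.nn.functional as F

def in_batch_negative_loss(q_emb: torch.Tensor, p_emb: torch.Tensor) -> torch.Tensor:
    """DPR-style contrastive loss with in-batch negatives.

    q_emb: (B, d) question embeddings from the query encoder.
    p_emb: (B, d) embeddings of each question's positive passage; the other
           rows in the batch serve as negatives.
    """
    scores = q_emb @ p_emb.T                 # (B, B) similarity matrix
    labels = torch.arange(q_emb.size(0))     # the diagonal holds the positives
    return F.cross_entropy(scores, labels)
```

Swapping the BERT encoders for LUKE-based encoders, as the paper proposes, only changes how `q_emb` and `p_emb` are produced; the retrieval objective stays the same.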


Question Answering that leverage the inherent knowledge of large language models (거대 언어 모델의 내재된 지식을 활용한 질의 응답 방법)

  • Myoseop Sim;Kyungkoo Min;Minjun Park;Jooyoung Choi;Haemin Jung;Stanley Jungkyu Choi
    • Annual Conference on Human and Language Technology / 2023.10a / pp.31-35 / 2023
  • Recently, methods that exploit the knowledge embedded in the parameters of large language models (LLMs) have been actively studied in question answering (QA). In open-domain QA (ODQA), the retriever-reader pipeline has traditionally been dominant, but large language models are increasingly taking over not only the reader role but also the retriever role. In this paper, we propose a method that uses the knowledge inherent in a large language model for question answering: before answering, the model first generates a passage related to the question and then generates the answer based on that passage. This method outperforms existing prompting approaches in the closed-book QA setting, demonstrating that the knowledge embedded in large language models can be leveraged to improve question-answering ability.
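
A minimal sketch of the two-stage prompting pattern described above (generate a question-related passage first, then answer conditioned on it). The `llm()` call is a hypothetical stand-in for whatever large-language-model API is used; the prompt wording is illustrative, not from the paper.

```python
from typing import Callable

def generate_then_answer(question: str, llm: Callable[[str], str]) -> str:
    """Closed-book QA: elicit the model's internal knowledge before answering."""
    # Stage 1: ask the model to write a short passage relevant to the question.
    passage = llm(f"Write a short background passage that helps answer the question.\n"
                  f"Question: {question}\nPassage:")
    # Stage 2: answer the question conditioned on the generated passage.
    answer = llm(f"Passage: {passage}\nQuestion: {question}\n"
                 f"Answer the question using the passage above.\nAnswer:")
    return answer
```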


Design of Knowledge-based Spatial Querying System Using Labeled Property Graph and GraphQL (속성 그래프 및 GraphQL을 활용한 지식기반 공간 쿼리 시스템 설계)

  • Jang, Hanme;Kim, Dong Hyeon;Yu, Kiyun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.40 no.5 / pp.429-437 / 2022
  • Recently, the demand for QA (Question Answering) systems for human-machine communication has increased. Among QA systems, a closed-domain QA system that can handle spatial questions is called GeoQA. In this study, a new type of graph database, the LPG (Labeled Property Graph), was used to overcome the limitations of the RDF (Resource Description Framework) based databases that have mainly been used in the GeoQA field. In addition, GraphQL (Graph Query Language), an API-style query language, is introduced to address the fact that LPG query languages are not standardized and a GeoQA system may therefore depend on specific products. A database was built so that answers can be retrieved when spatial questions are entered. The data were obtained from the national spatial information portal and local open-data services. Spatial relationships between spatial objects were computed in advance and stored as edges. A user's question is first converted into FOL (First Order Logic) form, then into GraphQL, and delivered to the database through the GraphQL server. The LPG used in the experiment is Neo4j, currently the graph database with the largest market share, and some of its built-in functions together with QGIS were used for the spatial calculations. With the system in place, we confirmed that a user's question can be transformed, processed through the Apollo GraphQL server, and answered appropriately from the database.
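
A sketch of the question-to-query flow the abstract outlines: a spatial question, once mapped to a logical form, becomes a GraphQL request sent to the server backed by the property graph. The schema fields (`spatialObject`, `related`, `name`) and the endpoint are hypothetical illustrations, not the paper's actual schema.

```python
import json
import urllib.request

# Hypothetical GraphQL query: spatial objects linked to a named object by a
# precomputed spatial relation stored as an edge (e.g. "adjacent_to").
QUERY = """
query Nearby($name: String!, $relation: String!) {
  spatialObject(name: $name) {
    related(relation: $relation) { name }
  }
}
"""

def ask_geoqa(endpoint: str, name: str, relation: str) -> dict:
    """POST a GraphQL request (standard JSON body) to the GeoQA server."""
    body = json.dumps({"query": QUERY,
                       "variables": {"name": name, "relation": relation}}).encode()
    req = urllib.request.Request(endpoint, data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# e.g. ask_geoqa("http://localhost:4000/graphql", "서울역", "adjacent_to")
```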

BERT Sparse: Keyword-based Document Retrieval using BERT in Real time (BERT Sparse: BERT를 활용한 키워드 기반 실시간 문서 검색)

  • Kim, Youngmin;Lim, Seungyoung;Yu, Inguk;Park, Soyoon
    • Annual Conference on Human and Language Technology / 2020.10a / pp.3-8 / 2020
  • Document retrieval is a long-studied and important area of natural language processing. BM25, a classic keyword-based retrieval algorithm, has clear performance limits, while semantic retrieval based on deep learning loses information when a document is compressed into a single vector. We therefore propose BERT Sparse, a new document retrieval model. BERT Sparse matches documents using the keywords contained in the query, but encodes documents with BERT so that the query's context and meaning are also reflected, thereby overcoming the limitations of existing keyword-based retrieval algorithms. Its retrieval speed is comparable to that of keyword-based models such as BM25, making real-time service feasible, and its retrieval performance is 93.87% Recall@5, 19% better than the BM25 baseline. Finally, combining BERT Sparse with an MRC model yields an F1 score of 81.87% in an open-domain QA setting.
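
A rough sketch of the idea the abstract describes (keyword-level matching over BERT-contextualized representations rather than raw term counts). The scoring rule below, summing each query keyword's best match against document token vectors, is one plausible reading of the abstract rather than the paper's exact formulation; `encode_tokens` is a hypothetical contextual encoder.

```python
import numpy as np

def encode_tokens(text: str) -> tuple[list[str], np.ndarray]:
    """Hypothetical BERT encoder: tokens plus one contextual vector per token."""
    raise NotImplementedError

def keyword_match_score(query: str, doc_tokens: list[str], doc_vecs: np.ndarray) -> float:
    """Sum, over query keywords present in the document, of the best cosine match."""
    q_tokens, q_vecs = encode_tokens(query)
    d_unit = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    score = 0.0
    for tok, vec in zip(q_tokens, q_vecs):
        if tok in doc_tokens:                        # keyword gate, as in sparse retrieval
            sims = d_unit @ (vec / np.linalg.norm(vec))
            score += float(sims.max())               # contextual evidence for the keyword
    return score
```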


Bilinear Graph Neural Network-Based Reasoning for Multi-Hop Question Answering (다중 홉 질문 응답을 위한 쌍 선형 그래프 신경망 기반 추론)

  • Lee, Sangui;Kim, Incheol
    • KIPS Transactions on Software and Data Engineering / v.9 no.8 / pp.243-250 / 2020
  • Knowledge graph-based question answering not only requires a deep understanding of the given natural language questions, but also effective reasoning to find the correct answers on a large knowledge graph. In this paper, we propose a deep neural network model for effective reasoning on a knowledge graph that can find correct answers to complex questions requiring multi-hop inference. The proposed model makes use of a highly expressive bilinear graph neural network (BGNN), which can exploit context information between a pair of neighboring nodes and allows bidirectional feature propagation between each entity node and one of its neighboring nodes on the knowledge graph. Through experiments with an open-domain knowledge base (Freebase) and two natural-language question answering benchmark datasets (WebQuestionsSP and MetaQA), we demonstrate the effectiveness and performance of the proposed model.
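
A small sketch of a bilinear neighbor-aggregation layer in the spirit of the BGNN described above, where a node's update depends on pairwise (element-wise product) interactions between the node and each neighbor rather than on a plain sum of neighbor features. This is a generic illustration, not the paper's exact layer.

```python
import torch
import torch.nn as nn

class BilinearAggregation(nn.Module):
    """h_i' = mean over neighbors j of (W1 h_i) * (W2 h_j), element-wise."""
    def __init__(self, dim: int):
        super().__init__()
        self.w_self = nn.Linear(dim, dim, bias=False)
        self.w_neigh = nn.Linear(dim, dim, bias=False)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h: (N, d) node features; adj: (N, N) 0/1 adjacency without self-loops.
        self_part = self.w_self(h)                         # (N, d)
        neigh_part = adj @ self.w_neigh(h)                 # sum of transformed neighbors
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)    # neighbor counts
        # (W1 h_i) factors out of the element-wise products, so this equals the
        # per-node mean of (W1 h_i) * (W2 h_j) over neighbors j.
        return self_part * (neigh_part / deg)
```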

Inverse Document Frequency-Based Word Embedding of Unseen Words for Question Answering Systems (질의응답 시스템에서 처음 보는 단어의 역문헌빈도 기반 단어 임베딩 기법)

  • Lee, Wooin;Song, Gwangho;Shim, Kyuseok
    • Journal of KIISE / v.43 no.8 / pp.902-909 / 2016
  • A question answering system (QA system) finds an actual answer to the question posed by a user, whereas a typical search engine only returns links to relevant documents. Recent work on open-domain QA systems is receiving much attention in the fields of natural language processing, artificial intelligence, and data mining. However, prior QA systems simply replace all words that are not in the training data with a single token, even though such unseen words are likely to play a crucial role in differentiating candidate answers from actual answers. In this paper, we propose a method to compute vectors for such unseen words by taking into account the contexts in which they occur. We also propose a model that uses inverse document frequencies (IDF) to process unseen words efficiently by expanding the system's vocabulary. Finally, we validate through experiments that the proposed method and model improve the performance of a QA system.
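
A minimal sketch of the idea described above, assuming pretrained vectors for known words and document frequencies from a corpus: an unseen word's vector is built from the contexts it occurs in, weighting each context word by its inverse document frequency so that informative neighbors dominate. Details such as the window size and smoothing are illustrative choices, not the paper's.

```python
import math
import numpy as np

def idf(word: str, doc_freq: dict[str, int], num_docs: int) -> float:
    """Smoothed inverse document frequency."""
    return math.log((num_docs + 1) / (doc_freq.get(word, 0) + 1)) + 1.0

def unseen_word_vector(contexts: list[list[str]], vectors: dict[str, np.ndarray],
                       doc_freq: dict[str, int], num_docs: int, dim: int) -> np.ndarray:
    """IDF-weighted average of context-word vectors for a word with no embedding."""
    acc, total = np.zeros(dim), 0.0
    for context in contexts:              # each context: tokens around one occurrence
        for w in context:
            if w in vectors:
                weight = idf(w, doc_freq, num_docs)
                acc += weight * vectors[w]
                total += weight
    return acc / total if total > 0 else acc
```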