• Title/Summary/Keyword: BART

Search results: 81

Comparison of KoBART and KoBERT models for Korean paper summarization (한국어 논문 요약을 위한 KoBART와 KoBERT 모델 비교)

  • Jaesung Jun;Suan Lee
    • Annual Conference on Human and Language Technology / 2022.10a / pp.562-564 / 2022
  • With advances in communication technology, an era has arrived in which the general public can easily find a wide variety of materials on the internet. As the amount of information accessible to individuals has grown exponentially, the need for services that efficiently summarize and organize it has increased. In this paper, we propose a Korean paper summarization model that uses KoBART, a Korean language model based on the natural language processing model BART and pre-trained on more than 40 GB of Korean text, and we compare the Korean paper summarization performance of the KoBART and KoBERT models.


BART with Random Sentence Insertion Noise for Korean Abstractive Summarization (무작위 문장 삽입 노이징을 적용한 BART 기반의 한국어 문서 추상 요약)

  • Park, Juhong;Kwon, Hongseok;Lee, Jong-Hyeok
    • Annual Conference on Human and Language Technology / 2020.10a / pp.455-458 / 2020
  • Document summarization is the process of capturing the core content of an input document and expressing it in short, concise sentences. Recently, several approaches using pre-trained language models for document summarization have been proposed, but these language models have the limitation of using input noising schemes designed without considering the characteristics of document summarization. In this paper, we apply BART, a pre-trained language model, to Korean abstractive document summarization and propose a method that improves the language understanding ability of the summarization model by adding a noising scheme that inserts random sentences into the input document. Experimental results show that the BART-based summarization model generally produced higher-quality output than other summarization models, and that the random sentence insertion noising, when applied at a low insertion ratio, yielded additional performance gains.

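The random sentence insertion noising described in the abstract above can be sketched in plain Python. This is a minimal illustration, not the paper's implementation; the function name, the distractor pool, and the way the insertion ratio is applied are assumptions.

```python
import random

def insert_random_sentences(sentences, distractor_pool, ratio=0.1, rng=None):
    """Return a noised copy of `sentences` with random distractor
    sentences inserted at random positions.

    `ratio` controls how many distractors are added relative to the
    input length (the paper reports gains at low insertion ratios).
    """
    rng = rng or random.Random()
    noised = list(sentences)
    n_insert = max(1, int(len(sentences) * ratio))
    for _ in range(n_insert):
        pos = rng.randrange(len(noised) + 1)
        noised.insert(pos, rng.choice(distractor_pool))
    return noised

doc = ["The model is pre-trained on a large corpus.",
       "It is then fine-tuned for summarization.",
       "Evaluation uses ROUGE scores."]
pool = ["The weather was cold that day.", "He bought a new bicycle."]
noised = insert_random_sentences(doc, pool, ratio=0.3, rng=random.Random(0))
```

The original sentences keep their relative order; the model is then trained to produce the clean summary from the noised input.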

A Study of Pre-trained Language Models for Korean Language Generation (한국어 자연어생성에 적합한 사전훈련 언어모델 특성 연구)

  • Song, Minchae;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems / v.28 no.4 / pp.309-328 / 2022
  • This study empirically analyzed Korean pre-trained language models (PLMs) designed for natural language generation. The performance of two PLMs - BART and GPT - on the task of abstractive text summarization was compared. To investigate how performance depends on the characteristics of the inference data, ten different document types, comprising informational content and creative content, were considered. It was found that BART (which can both generate and understand natural language) performed better than GPT (which can only generate). Upon more detailed examination of the effect of inference data characteristics, the performance of GPT was found to be proportional to the length of the input text. However, even for the longest documents (where GPT performs best), BART still outperformed GPT, suggesting that the greatest influence on downstream performance is not the size of the training data or the number of PLM parameters but the structural suitability of the PLM for the applied downstream task. The performance of the different PLMs was also compared by analyzing parts-of-speech (POS) shares. BART's performance was inversely related to the proportion of prefixes, adjectives, adverbs, and verbs but positively related to that of nouns. This result emphasizes the importance of taking the inference data's characteristics into account when fine-tuning a PLM for its intended downstream task.

Performance Improvement of Topic Modeling using BART based Document Summarization (BART 기반 문서 요약을 통한 토픽 모델링 성능 향상)

  • Eun Su Kim;Hyun Yoo;Kyungyong Chung
    • Journal of Internet Computing and Services / v.25 no.3 / pp.27-33 / 2024
  • The environment of academic research is continuously changing due to the increase of information, which raises the need for an effective way to analyze and organize large numbers of documents. In this paper, we propose improving topic modeling performance using BART (Bidirectional and Auto-Regressive Transformers)-based document summarization. The proposed method uses a BART-based document summarization model to extract the core content and improves topic modeling performance with the LDA (Latent Dirichlet Allocation) algorithm. We suggest an approach to improve the performance and efficiency of LDA topic modeling through document summarization and validate it through experiments. The experimental results show that the BART-based model for summarizing article data captures the important information of the original articles, with F1-scores of 0.5819, 0.4384, and 0.5038 on the ROUGE-1, ROUGE-2, and ROUGE-L evaluations, respectively. In addition, topic modeling using summarized documents performs about 8.08% better than topic modeling using the full text in a comparison using the perplexity metric. This contributes to reducing data throughput and improving efficiency in the topic modeling process.
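The ROUGE-N F1 scores reported above measure n-gram overlap between a generated summary and a reference. A minimal sketch of the computation follows; the simple whitespace tokenization is an assumption (real evaluations use the official ROUGE toolkit with proper tokenization and stemming).

```python
from collections import Counter

def rouge_n_f1(candidate, reference, n=1):
    """F1 over n-gram overlap between a candidate summary and a reference."""
    def ngrams(text, n):
        tokens = text.split()
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    cand, ref = ngrams(candidate, n), ngrams(reference, n)
    overlap = sum((cand & ref).values())  # clipped n-gram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

score = rouge_n_f1("the cat sat on the mat", "the cat lay on the mat", n=1)
```

Here 5 of the 6 candidate unigrams match the reference (with counts clipped via `Counter` intersection), giving precision = recall = 5/6.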

BART for Korean Natural Language Processing: Named Entity Recognition, Sentiment Analysis, Semantic role labelling (BART를 이용한 한국어 자연어처리: 개체명 인식, 감성분석, 의미역 결정)

  • Hong, Seung-Yean;Na, Seung-Hoon;Shin, Jong-Hoon;Kim, Young-Kil
    • Annual Conference on Human and Language Technology / 2020.10a / pp.172-175 / 2020
  • Recently, natural language processing has achieved state-of-the-art results on various tasks by pre-training language models on large corpora and applying fine-tuning. BERT-based language models consist only of a bidirectional Transformer, whereas BART combines a bidirectional Transformer with an auto-regressive Transformer for pre-training. In this paper, we train a Korean BART model on a 540 MB corpus and show performance improvements by applying it to several Korean natural language processing tasks.


Fine-tuning of Attention-based BART Model for Text Summarization (텍스트 요약을 위한 어텐션 기반 BART 모델 미세조정)

  • Ahn, Young-Pill;Park, Hyun-Jun
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.12 / pp.1769-1776 / 2022
  • Automatically summarizing long documents is an important technique. The BART model is one of the most widely used models for the summarization task. In general, to build a summarization model for a specific domain, fine-tuning is performed by re-training a language model trained on a large dataset to fit the domain. Fine-tuning is usually done by changing the number of nodes in the last fully connected layer. In this paper, however, we propose a fine-tuning method that adds an attention layer, an approach recently applied to various models with good results. To evaluate the proposed method, various experiments were conducted, such as stacking layers deeper and fine-tuning without skip connections. As a result, the BART model using two attention layers with skip connections achieved the best score.
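The best-performing configuration above wraps attention layers in residual ("skip") connections. A pure-Python sketch of one such block follows; the single head, identity Q/K/V projections, and toy 2-dimensional vectors are illustrative assumptions, not the paper's architecture.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(seq):
    """Single-head scaled dot-product self-attention with identity
    Q/K/V projections, over a list of d-dimensional vectors."""
    d = len(seq[0])
    out = []
    for q in seq:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in seq]
        weights = softmax(scores)
        out.append([sum(w * v[i] for w, v in zip(weights, seq))
                    for i in range(d)])
    return out

def attention_with_skip(seq):
    """Attention block with a residual (skip) connection: x + Attn(x)."""
    attn = self_attention(seq)
    return [[x + a for x, a in zip(xv, av)] for xv, av in zip(seq, attn)]

hidden = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
out = attention_with_skip(hidden)
```

The skip connection lets the fine-tuned layers learn a correction on top of the pre-trained representation instead of replacing it, which is one common motivation for residual designs.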

News Recommendation Exploiting Document Summarization based on Deep Learning (딥러닝 기반의 문서요약기법을 활용한 뉴스 추천)

  • Heu, Jee-Uk
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.22 no.4 / pp.23-28 / 2022
  • Recently, as smart devices (such as smartphones and tablet PCs) have come to serve as information gateways, accessing web news from portals has become increasingly important for many users. However, the sheer quantity of news created on the web makes it hard to find the information a user wants and confuses users with similar, repetitive content. In this paper, we propose a news recommendation system that uses document summarization based on KoBART to present selected news to users from the candidate news on a news portal. Our proposed system, with KoBART pre-trained and fine-tuned on collected news data, shows higher performance and recommends news efficiently.

Comparison of tree-based ensemble models for regression

  • Park, Sangho;Kim, Chanmin
    • Communications for Statistical Applications and Methods / v.29 no.5 / pp.561-589 / 2022
  • When multiple classification and regression trees are combined, tree-based ensemble models such as random forest (RF) and Bayesian additive regression trees (BART) are produced. We compare the model structures and performance of various ensemble models in regression settings. RF learns from bootstrapped samples and selects a splitting variable from a subset of predictors at each node. The BART model is specified as a sum of trees and is fitted using the Bayesian backfitting algorithm. Through extensive simulation studies, the strengths and drawbacks of the two methods in the presence of missing data, high-dimensional data, or highly correlated data are investigated. In the presence of missing data, BART performs well in general, whereas RF provides adequate coverage. BART outperforms in high-dimensional, highly correlated data. However, in all of the scenarios considered, RF has a shorter computation time. The performance of the two methods is also compared using two real data sets representing the aforementioned situations, and the same conclusion is reached.
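The RF side of the contrast above, fitting trees to bootstrap samples and averaging their predictions, can be illustrated with a toy bagged-stump regressor. This is a deliberately minimal stand-in, assuming one-dimensional inputs and single-split trees, not the statistical BART machinery discussed in the paper.

```python
import random

def fit_stump(xs, ys):
    """Fit the best single-split regression stump to 1-D data."""
    best_t, best_sse, best_means = None, float("inf"), (0.0, 0.0)
    for t in xs:
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((y - lm) ** 2 for y in left)
               + sum((y - rm) ** 2 for y in right))
        if sse < best_sse:
            best_t, best_sse, best_means = t, sse, (lm, rm)
    if best_t is None:  # degenerate sample: no valid split, predict the mean
        m = sum(ys) / len(ys)
        best_means = (m, m)
    return best_t, best_means

def bagged_stumps(xs, ys, n_trees=25, rng=None):
    """Bootstrap-aggregate stumps, RF-style: average the trees' predictions."""
    rng = rng or random.Random()
    n, stumps = len(xs), []
    for _ in range(n_trees):
        idx = [rng.randrange(n) for _ in range(n)]  # bootstrap sample
        stumps.append(fit_stump([xs[i] for i in idx], [ys[i] for i in idx]))
    def predict(x):
        preds = [lm if (t is not None and x <= t) else rm
                 for t, (lm, rm) in stumps]
        return sum(preds) / len(preds)
    return predict

xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0.1, 0.0, 0.2, 1.9, 2.1, 2.0]   # step in the response near x = 2.5
predict = bagged_stumps(xs, ys, n_trees=50, rng=random.Random(0))
```

BART differs in that the trees are summed rather than averaged and are sampled from a posterior via Bayesian backfitting, with each tree fitted to the residual left by the others.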

Intelligent Korean Sentence Summarization Technique Combining KoBART and GSG (KoBART와 GSG를 결합한 지능형 한국어 문장 요약 기법)

  • Hyeonsol Sim;Hyeonbin Park;Jeeyoung Park;Jaewon Sin;Youngjong Kim
    • Proceedings of the Korea Information Processing Society Conference / 2023.05a / pp.698-700 / 2023
  • This paper proposes an approach for achieving strong Korean performance in text summarization through Korean data, modeling, and additional evaluation metrics. We propose a KoBART-GSG model that scales up KoBART and uses PEGASUS's GSG (gap sentence generation). We build Korean data using an ASR model and perform additional training. We also propose a new method that constructs intelligent text by extracting keywords and key sentences from the generated summary and the original text with an attention technique. Using an ASR Open API and the proposed method, we provide a service that converts audio files to text and summarizes them, applicable to lectures, meetings, and similar settings in academia and industry.
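PEGASUS-style GSG, mentioned above, selects "gap" sentences from a document, replaces them with a mask token in the encoder input, and uses the removed sentences as the decoder target. A minimal sketch follows; scoring sentence importance by word overlap with the rest of the document is a simplification of PEGASUS's ROUGE-based selection, and the mask token string is an assumption.

```python
def gsg_example(sentences, k=1, mask_token="[MASK1]"):
    """Build a GSG-style training pair: the k most 'principal' sentences
    are replaced by a mask token in the source and concatenated as the
    target the model must generate."""
    def score(i):
        words = set(sentences[i].lower().split())
        rest = set(w for j, s in enumerate(sentences) if j != i
                   for w in s.lower().split())
        return len(words & rest)  # overlap with the rest of the document
    ranked = sorted(range(len(sentences)), key=score, reverse=True)
    selected = sorted(ranked[:k])
    source = [mask_token if i in selected else s
              for i, s in enumerate(sentences)]
    target = [sentences[i] for i in selected]
    return " ".join(source), " ".join(target)

doc = ["The model summarizes Korean text.",
       "It extends KoBART with gap sentence generation.",
       "The model summarizes lectures and meetings."]
src, tgt = gsg_example(doc, k=1)
```

Because the target is a full sentence rather than scattered tokens, this pre-training objective resembles the downstream summarization task more closely than generic denoising.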

Biogenesis of Epstein-Barr Virus MicroRNAs in B Cells (B 세포에서 Epstein-Barr virus microRNA들의 전사 및 성숙)

  • Kim Do Nyun;Oh Sang Taek;Lee Jae Myun;Lee Won-Keun;Lee Suk Kyeong
    • Journal of Life Science / v.15 no.6 s.73 / pp.909-915 / 2005
  • We investigated microRNA (miRNA) biogenesis of Epstein-Barr virus (EBV), the first virus shown to produce viral miRNAs. As expected, expression of all the reported EBV miRNAs (BHRF1-1, BHRF1-2, BHRF1-3, BART1, and BART2) was detected by Northern blot in an EBV-infected B cell line, B95-8. The putative EBV pri-miRNAs and pre-miRNAs predicted from the known mature EBV miRNA sequences were detected by RT-PCR in B95-8 cells. Many animal miRNA genes exist as clusters of 2-7 genes and are expressed polycistronically. As the EBV miRNAs are clustered in two regions of the EBV genome, we examined whether these clustered EBV miRNA genes are also expressed polycistronically. A long polycistronic transcript of the expected size (1,602 bp) corresponding to BHRF1-1~BHRF1-2~BHRF1-3 was amplified. However, no polycistronic transcript containing both BART1 and BART2 was detectable in B95-8. These results suggest that EBV miRNAs may be processed in a way similar to animal miRNAs and that some of the clustered EBV miRNAs can be transcribed polycistronically.