• Title/Summary/Keyword: BERT model

Multitask Transformer Model-based Fintech Customer Service Chatbot NLU System with DECO-LGG SSP-based Data (DECO-LGG 반자동 증강 학습데이터 활용 멀티태스크 트랜스포머 모델 기반 핀테크 CS 챗봇 NLU 시스템)

  • Yoo, Gwang-Hoon; Hwang, Chang-Hoe; Yoon, Jeong-Woo; Nam, Jee-Sun
    • Annual Conference on Human and Language Technology / 2021.10a / pp.461-466 / 2021
  • In this study, based on the DECO (Dictionnaire Electronique du COreen) Korean electronic dictionary and a semi-automatic language data augmentation method (Semi-automatic Symbolic Propagation: SSP) built on LGG (Local-Grammar Graphs), we effectively generate annotated training data for the NLU (Natural Language Understanding) component of a fintech CS (Customer Service) chatbot, and implement the fintech CS chatbot NLU system using the DIET (Dual Intent and Entity Transformer) architecture provided by the RASA open-source framework. Following the 32 topic types, 38 core events, and 10 discourse components identified from real-world data in the fintech domain, the DECO-LGG data generation module effectively produces high-quality annotated training data for query and complaint speech acts. Training DIET, an end-to-end multitask transformer model that jointly handles intent classification and named entity recognition for slot filling, on this data yielded F1-scores of 0.931 (intent) / 0.865 (slot/entity) for DIET-only and 0.951 (intent) / 0.901 (slot/entity) for DIET+KoBERT, demonstrating both the effectiveness of the DECO-LGG SSP-generated data as training data and the superior performance of the KoBERT-based DIET model.
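
The DIET architecture named above jointly learns intent classification and entity recognition over a shared transformer. As a rough illustration of that multitask setup, here is a minimal PyTorch sketch; the dimensions, label counts, and pooling choice are assumptions, not the RASA implementation.

```python
import torch
import torch.nn as nn

class JointIntentEntityModel(nn.Module):
    """Shared transformer encoder with two task heads (illustrative sketch)."""
    def __init__(self, vocab_size=8000, d_model=128, n_intents=32, n_entity_tags=21):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model, padding_idx=0)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.intent_head = nn.Linear(d_model, n_intents)      # sentence-level
        self.entity_head = nn.Linear(d_model, n_entity_tags)  # token-level (BIO tags)

    def forward(self, token_ids):
        h = self.encoder(self.embed(token_ids))   # (batch, seq, d_model)
        intent_logits = self.intent_head(h[:, 0]) # first token pooled as sentence summary
        entity_logits = self.entity_head(h)       # per-token tag logits
        return intent_logits, entity_logits

model = JointIntentEntityModel()
tokens = torch.randint(1, 8000, (2, 16))          # dummy batch of token ids
intent_logits, entity_logits = model(tokens)

# Joint training objective: sum of the two cross-entropies.
gold_intent = torch.tensor([3, 17])
gold_tags = torch.randint(0, 21, (2, 16))
loss = nn.functional.cross_entropy(intent_logits, gold_intent) \
     + nn.functional.cross_entropy(entity_logits.reshape(-1, 21), gold_tags.reshape(-1))
```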

Speaker classification and prediction with language model (언어모델을 활용한 문서 내 발화자 예측 분류 모델)

  • Kim, Gyeongmin; Han, Seunggyu; Seo, Jaehyung; Lee, Chanhee; Lim, Heuiseok
    • Annual Conference on Human and Language Technology / 2020.10a / pp.317-320 / 2020
  • A speech is a complex form of data with characteristics of both spoken and written language. Because its structure varies with the speaker's sentence expression, arrangement, and combination, the stylistic characteristics of each speaker also differ. The speeches of politicians who handle state affairs address a variety of major issues, including the state of national administration. Given the stylistic characteristics a speaker exhibits within a document, can we then identify which politician's speech the document is? In this study, we build a speaker prediction and classification model by fine-tuning Korean pre-trained language models on speeches collected from the Republic of Korea policy briefing site, and demonstrate its feasibility. We evaluated model performance with 5-fold cross-validation; the KoBERT and KoGPT2 models achieved accuracies of 90.22% and 84.41%, respectively.
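
The method described is standard fine-tuning of a pre-trained Korean classifier evaluated with 5-fold cross-validation. A hedged sketch of that loop with Hugging Face transformers and scikit-learn follows; the checkpoint name and the tiny stand-in corpus are placeholders (the paper fine-tunes KoBERT and KoGPT2 on real speeches).

```python
import torch
from sklearn.model_selection import StratifiedKFold
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Tiny stand-in corpus; the real data would be speeches and speaker labels.
texts = ["speech text %d" % i for i in range(10)]
y = [0] * 5 + [1] * 5                      # two speakers, for illustration

checkpoint = "klue/bert-base"              # assumed checkpoint, not the paper's exact model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

scores = []
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, test_idx in skf.split(texts, y):
    model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
    optim = torch.optim.AdamW(model.parameters(), lr=2e-5)
    model.train()
    for i in train_idx:                    # one pass, batch size 1, to keep the sketch short
        batch = tokenizer(texts[i], return_tensors="pt", truncation=True)
        loss = model(**batch, labels=torch.tensor([y[i]])).loss
        loss.backward(); optim.step(); optim.zero_grad()
    model.eval()
    with torch.no_grad():
        preds = [model(**tokenizer(texts[i], return_tensors="pt")).logits.argmax().item()
                 for i in test_idx]
    scores.append(sum(p == y[i] for p, i in zip(preds, test_idx)) / len(test_idx))
print("mean 5-fold accuracy:", sum(scores) / len(scores))
```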

Stock prediction using combination of BERT sentiment Analysis and Macro economy index

  • Jang, Euna; Choi, HoeRyeon; Lee, HongChul
    • Journal of the Korea Society of Computer and Information / v.25 no.5 / pp.47-56 / 2020
  • The stock index is used not only as an economic indicator for a country but also as a basis for investment decisions, which is why research into predicting it is ongoing. Predicting a stock price index involves technical, fundamental, and psychological factors, and prediction accuracy requires accounting for their complex interaction. It is therefore necessary to build prediction models that select and reflect the technical and auxiliary factors driving stock price fluctuations. Most existing studies either forecast using news information or macroeconomic indicators that create market fluctuations, or reflect only a few combinations of indicators. In this paper, we propose an effective combination of news sentiment analysis and various macroeconomic indicators for predicting the US Dow Jones Index. We crawled more than 93,000 business news articles from the New York Times over two years and analyzed their sentiment with the natural language processing techniques BERT and NLTK. The sentiment results, together with macroeconomic indicators (gold prices, oil prices, and five foreign exchange rates affecting the US economy), were fed to LSTM, the prediction algorithm known to be best suited to combining numeric and text information. Experimenting with various combinations, the combination of DJI, NLTK, BERT, OIL, GOLD, and EURUSD yielded the smallest MSE for DJI index prediction.
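
The described pipeline aligns daily sentiment scores with macroeconomic indicators into one feature sequence and regresses the next index value with an LSTM. A minimal PyTorch sketch under those assumptions (random stand-in data, an illustrative feature layout, and a 30-day window) is shown below.

```python
import torch
import torch.nn as nn

# One feature vector per trading day, e.g. [DJI, BERT_sent, NLTK_sent, OIL, GOLD, EURUSD].
n_days, n_features, window = 500, 6, 30
series = torch.randn(n_days, n_features)           # stand-in for the real aligned data

# Sliding windows: predict the next-day index from the previous 30 days.
X = torch.stack([series[i:i + window] for i in range(n_days - window)])
y = series[window:, 0]                             # column 0 = index value

class IndexLSTM(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(n_features, 64, batch_first=True)
        self.out = nn.Linear(64, 1)
    def forward(self, x):
        h, _ = self.lstm(x)
        return self.out(h[:, -1]).squeeze(-1)      # last hidden state -> scalar forecast

model, loss_fn = IndexLSTM(), nn.MSELoss()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):                                 # a few epochs, enough for the sketch
    optim.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optim.step()
print("train MSE:", loss.item())
```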

Design and Implementation of AI Recommendation Platform for Commercial Services

  • Jong-Eon Lee
    • International journal of advanced smart convergence / v.12 no.4 / pp.202-207 / 2023
  • In this paper, we discuss the design and implementation of a recommendation platform built for actual field use. We survey deep learning-based recommendation models that are effective at reflecting individual user characteristics; recently proposed RNN-based sequential recommendation models reflect these characteristics well. The proposed recommendation platform has an architecture that can collect, store, and process big data from a company's commercial services, and it provides service providers with intuitive tools for evaluating and deploying timely, optimized recommendation models. In our model evaluation, RNN-based sequential recommendation models achieved high scores.
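
As a concrete picture of the RNN-based sequential recommendation models mentioned above, here is a minimal GRU4Rec-style sketch in PyTorch; the catalog size, embedding width, and hidden size are illustrative, not the platform's actual model.

```python
import torch
import torch.nn as nn

class SequentialRecommender(nn.Module):
    """GRU over a user's item-interaction history, scoring the next item."""
    def __init__(self, n_items=10000, d_emb=64, d_hidden=128):
        super().__init__()
        self.item_emb = nn.Embedding(n_items, d_emb, padding_idx=0)
        self.gru = nn.GRU(d_emb, d_hidden, batch_first=True)
        self.score = nn.Linear(d_hidden, n_items)   # logits over the whole catalog

    def forward(self, item_seq):                    # (batch, seq_len) item ids
        h, _ = self.gru(self.item_emb(item_seq))
        return self.score(h[:, -1])                 # next-item logits from last state

model = SequentialRecommender()
history = torch.randint(1, 10000, (4, 20))          # 4 users, 20 interactions each
topk = model(history).topk(10).indices              # top-10 recommended item ids per user
```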

Privacy-Preserving Language Model Fine-Tuning Using Offsite Tuning (프라이버시 보호를 위한 오프사이트 튜닝 기반 언어모델 미세 조정 방법론)

  • Jinmyung Jeong; Namgyu Kim
    • Journal of Intelligence and Information Systems / v.29 no.4 / pp.165-184 / 2023
  • Recently, deep learning analysis of unstructured text data using language models such as Google's BERT and OpenAI's GPT has shown remarkable results in various applications. Most language models learn generalized linguistic information from pre-training data and then update their weights for downstream tasks through a fine-tuning process. However, concerns have been raised that privacy may be violated when these language models are used: data privacy may be violated when the data owner provides large amounts of data to the model owner for fine-tuning, and conversely, when the model owner discloses the entire model to the data owner, its structure and weights are exposed, which may violate the privacy of the model. The concept of offsite tuning was recently proposed to fine-tune language models while protecting privacy in such situations, but that study does not provide a concrete way to apply the methodology to text classification models. In this study, we propose a concrete method for applying offsite tuning with an additional classifier to protect both model and data privacy when performing multi-class classification fine-tuning on Korean documents. To evaluate the proposed methodology, we conducted experiments on about 200,000 Korean documents from five major fields (ICT, electrical, electronic, mechanical, and medical) provided by AIHub, and found that the proposed plug-in model outperforms both the zero-shot model and the offsite model in classification accuracy.
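
In offsite tuning, the model owner keeps the full network and ships the data owner a lossy emulator of the middle layers plus a small set of trainable adapter layers; the data owner fine-tunes only the adapters, here together with a plug-in classification head, and returns them. A simplified PyTorch sketch of that split follows; the layer counts and sizes are assumptions, and the emulator is randomly initialized here rather than distilled, purely to keep the sketch short.

```python
import torch
import torch.nn as nn

d, n_layers = 256, 12
full_layers = nn.ModuleList([nn.TransformerEncoderLayer(d, nhead=4, batch_first=True)
                             for _ in range(n_layers)])

# Model owner's side: the first and last two layers become adapters; the middle
# layers are replaced by a smaller emulator (distilled in the original proposal).
adapters_in, adapters_out = full_layers[:2], full_layers[-2:]
emulator = nn.ModuleList([nn.TransformerEncoderLayer(d, nhead=4, batch_first=True)
                          for _ in range(3)])
for p in emulator.parameters():
    p.requires_grad = False                       # the emulator stays frozen off-site

classifier = nn.Linear(d, 5)                      # data owner's plug-in classification head

def data_owner_forward(x):
    for layer in list(adapters_in) + list(emulator) + list(adapters_out):
        x = layer(x)
    return classifier(x[:, 0])                    # first-token pooling -> 5 classes

# Only adapters + classifier receive gradients; raw data and full weights never travel.
trainable = [p for m in (adapters_in, adapters_out, classifier) for p in m.parameters()]
optim = torch.optim.AdamW(trainable, lr=1e-4)
logits = data_owner_forward(torch.randn(2, 16, d))
loss = nn.functional.cross_entropy(logits, torch.tensor([1, 3]))
loss.backward(); optim.step()
```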

Method of Extracting the Topic Sentence Considering Sentence Importance based on ELMo Embedding (ELMo 임베딩 기반 문장 중요도를 고려한 중심 문장 추출 방법)

  • Kim, Eun Hee; Lim, Myung Jin; Shin, Ju Hyun
    • Smart Media Journal / v.10 no.1 / pp.39-46 / 2021
  • This study concerns a method of extracting a summary from a news article by considering the importance of each sentence in the article. We propose calculating sentence importance from features that affect it: the probability that a sentence is the topic sentence, its similarity to the article title and to the other sentences, and its position. We hypothesize that topic sentences have characteristics distinct from general sentences, and train a deep learning-based classification model to obtain a topic-sentence probability for each input sentence. Using the pre-trained ELMo language model, we also compute inter-sentence similarity from sentence vectors that reflect contextual information and extract it as a sentence feature. The topic-sentence classification performance of the LSTM and BERT models reached 93% accuracy, 96.22% recall, and 89.5% precision. Combining the extracted sentence features to compute each sentence's importance improved topic-sentence extraction performance by about 10% over the existing TextRank algorithm.
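
Sentence importance as described combines a topic-sentence probability, similarity to the title and to the other sentences, and sentence position. A small NumPy sketch of such a scoring function follows; the weights, vectors, and position formula are placeholders rather than the paper's exact definition.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def sentence_importance(sent_vecs, title_vec, topic_probs, w=(0.4, 0.3, 0.2, 0.1)):
    """Combine four cues per sentence; the weights w are illustrative, not the paper's."""
    n = len(sent_vecs)
    scores = []
    for i, v in enumerate(sent_vecs):
        title_sim = cosine(v, title_vec)
        other_sim = np.mean([cosine(v, u) for j, u in enumerate(sent_vecs) if j != i])
        position = 1.0 - i / max(n - 1, 1)        # earlier sentences score higher
        scores.append(w[0] * topic_probs[i] + w[1] * title_sim
                      + w[2] * other_sim + w[3] * position)
    return np.array(scores)

# Toy example: 4 sentence vectors (stand-ins for ELMo embeddings) plus a title vector.
rng = np.random.default_rng(0)
vecs = rng.normal(size=(4, 1024))
imp = sentence_importance(vecs, rng.normal(size=1024), topic_probs=[0.9, 0.2, 0.4, 0.1])
print("topic sentence index:", int(imp.argmax()))
```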

Sentence Filtering Dataset Construction Method about Web Corpus (웹 말뭉치에 대한 문장 필터링 데이터 셋 구축 방법)

  • Nam, Chung-Hyeon; Jang, Kyung-Sik
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.11 / pp.1505-1511 / 2021
  • Pre-trained models, which perform well across natural language processing tasks, have the advantage of learning the linguistic patterns of sentences from a large corpus during training, allowing each token in an input sentence to be represented by an appropriate feature vector. One way to construct the corpus required for pre-training is collection via web crawler. However, because sentences on the web come in many patterns, some or all of a crawled sentence may consist of unnecessary words. In this paper, we propose a method for constructing a dataset for filtering sentences containing unnecessary words, using neural network models, for a corpus collected from the web. As a result, we construct a dataset containing a total of 2,330 sentences. We also evaluated the performance of neural network models on the constructed dataset; the BERT model showed the highest performance, with an accuracy of 93.75%.
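
Once a filtering model has been trained on such a dataset, applying it to crawled sentences is a simple thresholding pass. A hedged sketch using the transformers text-classification pipeline; the checkpoint path, label names, and threshold are all placeholders, not artifacts from the paper.

```python
from transformers import pipeline

# Placeholder path: a BERT checkpoint fine-tuned on the clean/noisy filtering dataset.
clf = pipeline("text-classification", model="path/to/filtering-bert")

crawled = [
    "이 문장은 정상적인 문장입니다.",        # an ordinary sentence
    "무료 쿠폰 지급 클릭 클릭 www...",      # spam-like noise
]
results = clf(crawled)
clean = [s for s, r in zip(crawled, results)
         if r["label"] == "CLEAN" and r["score"] > 0.9]   # label name is an assumption
```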

Deep Learning-based Target Masking Scheme for Understanding Meaning of Newly Coined Words (신조어의 의미 학습을 위한 딥러닝 기반 표적 마스킹 기법)

  • Nam, Gun-Min; Seo, Sumin; Kwahk, Kee-Young; Kim, Namgyu
    • Proceedings of the Korean Society of Computer Information Conference / 2021.07a / pp.391-394 / 2021
  • Recently, a variety of studies have actively used deep learning (Deep Learning) to understand the meaning of words and sentences expressed in text. However, for deep learning to understand the language used in a specific domain, it must be trained for a long time on sufficient data from that domain. To overcome this difficulty, recent deep learning research widely applies pre-trained language models, the result of training on vast amounts of data, to learning in other domains. These approaches learn the general meaning of words through pre-training and then perform additional training to capture the meaning a word carries in a specific domain. The additional training typically reuses the MLM (Masked Language Model) objective of BERT, a representative pre-trained language model, in which the meaning of a masked word is inferred from the meanings of the unmasked words. Accordingly, word meanings are learned more accurately the more the masked tokens are words whose meaning is unknown, such as newly coined words, rather than words already understood through pre-training. However, conventional MLM selects masking targets at random, so words understood through pre-training and newly coined words not covered by pre-training are masked without distinction. In this study, we therefore propose a scheme that concentrates masking exclusively on newly coined words that were not included in pre-training. This allows the meaning of newly coined words to be learned more accurately and is ultimately expected to improve the quality of downstream analyses that use the resulting model. In experiments on 120,000 movie comments collected from the movie information site N, the proposed NTM (Newly Coined Words Target Masking) outperformed conventional random masking in terms of sentiment analysis accuracy.
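
The proposed NTM differs from standard MLM only in how masking positions are chosen: masking concentrates on tokens from a list of newly coined words absent from pre-training, rather than on uniformly random tokens. A hedged sketch of that selection step with a Hugging Face tokenizer; the checkpoint and word list are placeholders, and multi-token words are deliberately ignored to keep the sketch short.

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("klue/bert-base")  # placeholder checkpoint
new_words = {"갓생", "억텐"}                                  # illustrative newly coined words

def target_mask(sentence, mask_prob=0.8):
    """Mask only tokens that belong to newly coined words (NTM-style selection)."""
    enc = tokenizer(sentence, return_tensors="pt")
    ids = enc["input_ids"][0]
    labels = torch.full_like(ids, -100)            # -100 = position ignored by the MLM loss
    tokens = tokenizer.convert_ids_to_tokens(ids.tolist())
    for i, tok in enumerate(tokens):
        # Simplification: assumes each new word survives as one (sub)token;
        # real code would match multi-token spans instead.
        if tok.lstrip("#") in new_words and torch.rand(1).item() < mask_prob:
            labels[i] = ids[i]                     # gold token the model must predict
            ids[i] = tokenizer.mask_token_id       # replace the input token with [MASK]
    return enc, labels
```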

Zero-shot Korean Sentiment Analysis with Large Language Models: Comparison with Pre-trained Language Models

  • Soon-Chan Kwon; Dong-Hee Lee; Beak-Cheol Jang
    • Journal of the Korea Society of Computer and Information / v.29 no.2 / pp.43-50 / 2024
  • This paper evaluates the Korean sentiment analysis performance of large language models such as GPT-3.5 and GPT-4 in a zero-shot setting via the ChatGPT API, comparing them with pre-trained Korean models such as KoBERT. The models are validated through experiments on various Korean sentiment analysis datasets covering movies, gaming, and shopping. The results show that the LMKor-ELECTRA model achieved the highest performance by F1-score, while GPT-4 reached particularly high accuracy and F1-scores on the movie and shopping datasets. This indicates that large language models can perform Korean sentiment analysis effectively without prior training on specific datasets, suggesting their potential for zero-shot learning. However, relatively lower performance on some datasets highlights the limitations of the zero-shot methodology. This study explores the feasibility of using large language models for Korean sentiment analysis and provides significant implications for future research in this area.
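
The zero-shot setup amounts to prompting a chat model for a polarity label with no fine-tuning. A minimal sketch with the OpenAI Python client; the prompt wording and label set are assumptions, not the paper's exact protocol.

```python
from openai import OpenAI

client = OpenAI()   # reads OPENAI_API_KEY from the environment

def zero_shot_sentiment(review: str, model: str = "gpt-4") -> str:
    resp = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system",
             "content": "Classify the sentiment of the Korean review as "
                        "'positive' or 'negative'. Answer with one word."},
            {"role": "user", "content": review},
        ],
        temperature=0,   # deterministic labels for evaluation
    )
    return resp.choices[0].message.content.strip().lower()

print(zero_shot_sentiment("영화가 정말 재미있었어요!"))   # expected: positive
```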

A Protein-Protein Interaction Extraction Approach Based on Large Pre-trained Language Model and Adversarial Training

  • Tang, Zhan; Guo, Xuchao; Bai, Zhao; Diao, Lei; Lu, Shuhan; Li, Lin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.3 / pp.771-791 / 2022
  • Protein-protein interaction (PPI) extraction from raw text is important for revealing the molecular mechanisms of biological processes. With the rapid growth of the biomedical literature, manually extracting PPI has become increasingly time-consuming and laborious, so automatic PPI extraction from raw literature through natural language processing has attracted the attention of many researchers. We propose a PPI extraction model based on a large pre-trained language model and adversarial training. It enhances the learning of semantic and syntactic features using BioBERT pre-trained weights, which are built on large-scale domain corpora, and applies adversarial perturbations to the embedding layer to improve the robustness of the model. Experimental results show that the proposed model achieved the highest F1-scores (83.93% and 90.31%) on the two corpora with large sample sizes, AIMed and BioInfer, respectively, compared with previous methods. It also achieved comparable performance on three corpora with small sample sizes: HPRD50, IEPA, and LLL.
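
Adversarial perturbation of the embedding layer typically means an FGM-style step: perturb the word embeddings along the gradient of the clean loss, then accumulate gradients from the perturbed loss as well. A compact PyTorch sketch of one such step; it assumes a transformers-style model whose outputs expose .logits, and the epsilon value is illustrative.

```python
import torch

def fgm_step(model, loss_fn, batch, labels, epsilon=1.0):
    """One training step with an FGM-style perturbation on the word embeddings."""
    emb = model.get_input_embeddings().weight      # embedding matrix of a transformers model
    loss = loss_fn(model(**batch).logits, labels)
    loss.backward()                                # gradients for the clean loss

    grad = emb.grad.detach()
    norm = grad.norm()
    if norm > 0:
        delta = epsilon * grad / norm              # r_adv = eps * g / ||g||
        emb.data.add_(delta)                       # perturb embeddings in place
        adv_loss = loss_fn(model(**batch).logits, labels)
        adv_loss.backward()                        # accumulate adversarial gradients
        emb.data.sub_(delta)                       # restore the original embeddings
    return loss.item()
    # In a full loop, an optimizer step and zero_grad() would follow each call.
```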