• Title/Summary/Keyword: BERT Model


Proposal of Punctuation Mark Filling Task with BERT-based Model (BERT 기반 문장부호 자동 완성 모델)

  • Han, Seunggyu; Lim, Heuiseok
    • Annual Conference on Human and Language Technology / 2020.10a / pp.263-266 / 2020
  • Despite their importance, punctuation marks have received little attention in natural language processing; they are often simply deleted for the sake of training efficiency. This paper presents a BERT-based fine-tuned model that handles Korean punctuation marks, built on a corpus of speeches officially released by the government of the Republic of Korea. Predicting a label for each token of the BERT-based model, the model achieved an F1-score of 0.81 when predicting only commas and periods, 0.66 when question marks were included, and 0.52 when exclamation marks were included as well.
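
A model like this can be framed as per-token classification: for each input token, predict which punctuation mark (if any) should follow it. The sketch below illustrates that framing with the Hugging Face Transformers API; the multilingual checkpoint and the three-way label set are assumptions for illustration, not the authors' configuration.

```python
# Minimal sketch: punctuation filling as per-token classification (illustrative, not the paper's code).
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["NONE", "COMMA", "PERIOD"]  # assumed label set: no mark, ',', '.'
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=len(labels)
)

# A sentence with punctuation stripped; the model predicts a mark after each token.
text = "안녕하세요 오늘 회의를 시작하겠습니다"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits        # shape: (1, seq_len, num_labels)
predictions = logits.argmax(dim=-1)[0]     # per-token label ids (untrained weights here)
print([labels[i] for i in predictions.tolist()])
```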

Comparison of Korean Classification Models' Korean Essay Score Range Prediction Performance (한국어 학습 모델별 한국어 쓰기 답안지 점수 구간 예측 성능 비교)

  • Cho, Heeryon; Im, Hyeonyeol; Yi, Yumi; Cha, Junwoo
    • KIPS Transactions on Software and Data Engineering / v.11 no.3 / pp.133-140 / 2022
  • We investigate the performance of deep learning-based Korean language models on the task of predicting the score range of Korean essays written by foreign students. We construct a data set containing a total of 304 essays, which discuss the criteria for choosing a job ('job'), the conditions of a happy life ('happ'), the relationship between money and happiness ('econ'), and the definition of success ('succ'). These essays were labeled with four letter grades (A, B, C, and D), and a total of eleven essay score range prediction experiments were conducted (five for predicting the score range of 'job' essays, five for 'happiness' essays, and one for mixed-topic essays). Three deep learning-based Korean language models, KoBERT, KcBERT, and KR-BERT, were fine-tuned using various training data. Two traditional probabilistic machine learning classifiers, naive Bayes and logistic regression, were also evaluated. The results show that the deep learning-based Korean language models performed better than the two traditional classifiers, with KR-BERT performing best at 55.83% overall average prediction accuracy, a close second being KcBERT (55.77%), followed by KoBERT (54.91%). The naive Bayes and logistic regression classifiers reached 52.52% and 50.28%, respectively. Due to the scarcity of training data and the imbalance in class distribution, overall prediction performance was not high for any of the classifiers. Moreover, the classifiers' vocabulary did not explicitly capture the error features that would help in correctly grading the Korean essays. By overcoming these two limitations, we expect the score range prediction performance to improve.
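
The two traditional baselines in this comparison can be outlined in a few lines of scikit-learn; the sketch below assumes a simple TF-IDF bag-of-words representation and placeholder essays, and is only an illustration of the baseline setup, not the authors' pipeline.

```python
# Sketch of the probabilistic baselines: TF-IDF features + naive Bayes / logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder essays and letter-grade labels standing in for the 304-essay data set.
train_texts = ["직업을 선택할 때 중요한 기준은 ...", "행복한 삶의 조건은 ...",
               "돈과 행복의 관계는 ...", "성공이란 무엇인가 ..."]
train_grades = ["A", "B", "C", "B"]

for clf in (MultinomialNB(), LogisticRegression(max_iter=1000)):
    pipe = make_pipeline(TfidfVectorizer(), clf)
    pipe.fit(train_texts, train_grades)
    print(type(clf).__name__, pipe.predict(["성공의 정의에 대한 새로운 답안 ..."]))
```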

Methodology for Deriving Required Quality of Product Using Analysis of Customer Reviews (사용자 리뷰 분석을 통한 제품 요구품질 도출 방법론)

  • Yerin Yu; Jeongeun Byun; Kuk Jin Bae; Sumin Seo; Younha Kim; Namgyu Kim
    • Journal of Information Technology Applications and Management / v.30 no.2 / pp.1-18 / 2023
  • Recently, as technology development has accelerated and product life cycles have shortened, it has become necessary to derive key product features from customers at the R&D planning and evaluation stage. More companies seek differentiated competitiveness by providing consumer-tailored products based on big data and artificial intelligence technology. To achieve this, it is increasingly important to correctly grasp the required quality, that is, what consumers actually demand. However, existing methods are centered on suppliers or domain experts, so there is a gap between them and the actual perspective of consumers: product attributes have been defined by suppliers or field experts, which may not reflect consumers' real point of view. Accordingly, the demand for deriving a product's main attributes from reviews that contain consumers' perspectives has recently increased. We therefore propose a methodology for deriving required quality from review data that reflects customer requirements. Specifically, a pre-trained language model with a good understanding of Korean reviews was established, consumer intent was correctly identified, and key contents were extracted from the reviews through a combination of KeyBERT and topic modeling to derive the required quality for each product. RevBERT, a pre-trained language model specific to the Korean review domain, was built through further pre-training. A comparison with the existing pre-trained language model KcBERT confirmed that RevBERT has a deeper understanding of customer reviews. In addition, every step other than the final selection of required quality was automated, enabling data-driven derivation of required quality.
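
Key-content extraction with KeyBERT follows a simple call pattern; the sketch below uses the public KeyBERT API with a generic multilingual sentence encoder as a stand-in for the authors' RevBERT, whose weights are not assumed to be available.

```python
# Sketch: keyword extraction from a customer review with KeyBERT.
# The embedding model is a generic multilingual stand-in, not the authors' RevBERT.
from keybert import KeyBERT

review = "배송은 빨랐지만 포장이 허술했고 버튼 감도가 떨어집니다."
kw_model = KeyBERT(model="paraphrase-multilingual-MiniLM-L12-v2")
keywords = kw_model.extract_keywords(
    review,
    keyphrase_ngram_range=(1, 2),  # single words and two-word phrases
    top_n=5,
)
print(keywords)  # list of (phrase, similarity-to-document score) pairs
```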

Incorporating BERT-based NLP and Transformer for An Ensemble Model and its Application to Personal Credit Prediction

  • Sophot Ky; Ju-Hong Lee; Kwangtek Na
    • Smart Media Journal / v.13 no.4 / pp.9-15 / 2024
  • Tree-based algorithms have been the dominant methods for building prediction models on tabular data, including personal credit data. However, they are compatible only with categorical and numerical data and do not capture relationships among features. In this work, we propose an ensemble model based on the Transformer architecture that incorporates text features and harnesses the self-attention mechanism to address this limitation. We describe a text formatter module that converts the original tabular data into sentences, which are fed into FinBERT along with other text features. Furthermore, we employ FT-Transformer, which is trained on the original tabular data. We evaluate this multi-modal approach against two popular tree-based algorithms, Random Forest and Extreme Gradient Boosting (XGBoost), as well as TabTransformer. Our proposed method shows superior default recall, F1-score, and AUC on two public datasets. These results can help financial institutions reduce the risk of financial losses from defaulters.
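
The text formatter module mentioned above turns one tabular credit record into a sentence before it is fed to FinBERT. The paper does not give the exact template, so the function below is a hypothetical illustration of the idea, with made-up field names.

```python
# Hypothetical text formatter: render one tabular credit record as a sentence for a BERT-style model.
def format_record(record: dict) -> str:
    """Join feature-value pairs into a readable sentence (template and fields are assumptions)."""
    parts = [f"{name.replace('_', ' ')} is {value}" for name, value in record.items()]
    return "The applicant's " + ", ".join(parts) + "."

row = {"age": 34, "annual_income": 52000, "open_accounts": 3, "past_delinquencies": 0}
print(format_record(row))
# -> The applicant's age is 34, annual income is 52000, open accounts is 3, past delinquencies is 0.
```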

Transformer-based Language model Bert And GPT-2 Performance Comparison Study (Transformer기반의 언어모델 Bert와 GPT-2 성능 비교 연구)

  • Yoo, Yean-Jun; Hong, Seok-Min; Lee, Hyeop-Geon; Kim, Young-Woone
    • Proceedings of the Korea Information Processing Society Conference / 2022.05a / pp.381-383 / 2022
  • Recently, research on Transformer-based language models such as BERT and GPT has been very active in natural language processing. These language models are pre-trained on large corpora with many parameters and show high performance on a variety of NLP tests. This paper therefore evaluates the performance of the Transformer-based language models BERT and GPT-2. The evaluation measures positive/negative classification accuracy and training time on the 'Naver movie review' dataset. In terms of accuracy, GPT-2 was between 4.16% and 5.32% more accurate than BERT, while in terms of training time BERT was between 104 and 116 seconds faster than GPT-2. Future comparisons should be made more concrete, with more data and under a wider range of conditions.
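
A comparison of this kind reduces to fine-tuning a sequence-classification head for each model on the same data and timing the runs. The skeleton below shows that harness with Hugging Face checkpoints chosen for illustration; the training loop itself is elided and the checkpoints are not necessarily those used in the paper.

```python
# Skeleton of the comparison harness: classification heads for BERT and GPT-2, timed separately.
import time
from transformers import AutoTokenizer, AutoModelForSequenceClassification

def load(name):
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)
    if tok.pad_token is None:               # GPT-2 has no pad token by default
        tok.pad_token = tok.eos_token
        model.config.pad_token_id = tok.pad_token_id
    return tok, model

for name in ("bert-base-multilingual-cased", "gpt2"):
    tok, model = load(name)
    start = time.time()
    # ... fine-tune on the Naver movie review training split and evaluate accuracy here ...
    print(name, "elapsed:", round(time.time() - start, 1), "seconds")
```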

A WWMBERT-based Method for Improving Chinese Text Classification Task (중국어 텍스트 분류 작업의 개선을 위한 WWMBERT 기반 방식)

  • Wang, Xinyuan; Joe, Inwhee
    • Proceedings of the Korea Information Processing Society Conference / 2021.05a / pp.408-410 / 2021
  • In the NLP field, the pre-trained model BERT, released by the Google team in 2018, has shown remarkable results on a wide range of tasks. Subsequently, many variant models have been derived from the original BERT, such as RoBERTa and ERNIE. In this paper, the WWMBERT (Whole Word Masking BERT) model, which is well suited to Chinese text tasks, is used as the baseline model of our experiments. The experiments target text-level Chinese text classification and improve the model mainly by combining TAPT (Task-Adaptive Pretraining) with the multi-sample dropout method. The experimental datasets and scoring standard are kept consistent with the official WWMBERT model, with accuracy as the metric. The official WWMBERT model reports the maximum and average of multiple runs as its scores: 97.70% (97.50%) on the development set and 97.70% (97.50%) on the test set of the text-level Chinese text classification task. Compared with these results, our experiments improve the development set by 0.35% (0.5%) and the test set by 0.31% (0.48%), a clear improvement over the original baseline model.
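
Multi-sample dropout, one of the two improvements combined here, applies several independent dropout masks to the same pooled representation and averages the resulting logits. A minimal PyTorch head illustrating the idea, independent of WWMBERT itself:

```python
# Minimal multi-sample dropout classification head (illustrative, not the paper's implementation).
import torch
import torch.nn as nn

class MultiSampleDropoutHead(nn.Module):
    def __init__(self, hidden_size: int, num_labels: int, num_samples: int = 5, p: float = 0.4):
        super().__init__()
        self.dropouts = nn.ModuleList(nn.Dropout(p) for _ in range(num_samples))
        self.classifier = nn.Linear(hidden_size, num_labels)

    def forward(self, pooled: torch.Tensor) -> torch.Tensor:   # pooled: (batch, hidden_size)
        # Average the logits over several independently dropped-out copies of the input.
        logits = [self.classifier(drop(pooled)) for drop in self.dropouts]
        return torch.stack(logits, dim=0).mean(dim=0)

head = MultiSampleDropoutHead(hidden_size=768, num_labels=10)
print(head(torch.randn(2, 768)).shape)   # torch.Size([2, 10])
```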

Optimizing Language Models through Dataset-Specific Post-Training: A Focus on Financial Sentiment Analysis (데이터 세트별 Post-Training을 통한 언어 모델 최적화 연구: 금융 감성 분석을 중심으로)

  • Hui Do Jung; Jae Heon Kim; Beakcheol Jang
    • Journal of Internet Computing and Services / v.25 no.1 / pp.57-67 / 2024
  • This research investigates training methods that help large language models accurately identify sentiment and comprehend information about increasing and decreasing fluctuations in the financial domain. The main goal is to identify datasets that enable these models to effectively understand expressions related to financial increases and decreases. For this purpose, we selected sentences from the Wall Street Journal that included relevant financial terms, and sentences generated by GPT-3.5-turbo-1106, for post-training. We assessed the impact of these datasets on language model performance using Financial PhraseBank, a benchmark dataset for financial sentiment analysis. Our findings demonstrate that post-training FinBERT, a model specialized in finance, outperformed the similarly post-trained BERT, a general-domain model. Moreover, post-training with actual financial news proved more effective than using generated sentences, though in scenarios requiring higher generalization, models trained on generated sentences performed better. This suggests that aligning the model's domain with the target domain, and choosing the right dataset, are crucial for enhancing a language model's understanding and sentiment prediction accuracy. These results offer a methodology for optimizing language model performance in financial sentiment analysis tasks and suggest future research directions for more nuanced language understanding and sentiment analysis in finance. This research provides valuable insights not only for the financial sector but also for language model training across various domains.
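
Post-training here means continuing masked-language-model pretraining on the selected financial sentences before fine-tuning for sentiment. A compressed sketch of that step with the Hugging Face Trainer follows; the checkpoint, corpus, and hyperparameters are placeholders, not those of the study.

```python
# Sketch: continued MLM pretraining ("post-training") on domain sentences before fine-tuning.
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

sentences = ["Shares of the retailer fell 4% after weak guidance.",
             "The bank's quarterly profit rose on higher interest income."]  # placeholder corpus

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")   # placeholder checkpoint
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

dataset = Dataset.from_dict({"text": sentences}).map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128), batched=True
)
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="post_trained", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
    data_collator=collator,
)
trainer.train()  # afterwards, fine-tune the post-trained encoder on Financial PhraseBank
```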

A BERT-Based Deep Learning Approach for Vulnerability Detection (BERT를 이용한 딥러닝 기반 소스코드 취약점 탐지 방법 연구)

  • Jin, Wenhui; Oh, Heekuck
    • Journal of the Korea Institute of Information Security & Cryptology / v.32 no.6 / pp.1139-1150 / 2022
  • With the rapid development of the software industry, software is everywhere in our daily lives, and the number of vulnerabilities is increasing along with the large amount of newly developed code. Vulnerabilities can be exploited by hackers, resulting in the disclosure of private information and threats to the safety of property and life. In particular, with the rapidly growing volume of code, manual analysis by experts is no longer sufficient. Machine learning has shown high performance in identification and classification tasks, and vulnerability detection is also well suited to it; as a result, many studies have tried to use RNN-based models to detect vulnerabilities. However, RNN models have the limitation that, as the code grows longer, the earlier parts of the sequence cannot be learned well. In this paper, we propose a novel method that applies BERT to vulnerability detection. The accuracy was 97.5%, an increase of 1.5% over VulDeePecker, and efficiency also increased by 69%.

Hot Keyword Extraction of Sci-tech Periodicals Based on the Improved BERT Model

  • Liu, Bing; Lv, Zhijun; Zhu, Nan; Chang, Dongyu; Lu, Mengxin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.6 / pp.1800-1817 / 2022
  • With economic development and improving living standards, hot issues within a subject area have become a major research direction, yet mining them currently suffers from problems such as large data volumes and complex algorithm structures. In response, this study proposes a method for extracting hot keywords from scientific journals based on an improved BERT model, which can also serve as a reference for researchers. The method improves the overall ensemble similarity measure by introducing compound keyword word density and combining word segmentation, word-sense set distance, and density clustering to construct an improved BERT (I-BERT) framework, on which a composite keyword heat analysis model is built. Taking the 14,420 articles published in 21 social science management periodicals collected by CNKI (China National Knowledge Infrastructure) in 2017-2019 as experimental data, the superiority of the proposed method is verified in terms of word spacing, class spacing, and the precision and recall of hot keyword extraction. The experiments show that the proposed method extracts hot keywords more accurately than other methods, helping scientific journals capture hot topics in their discipline in a timely and accurate manner by means of information technology.
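
The density-clustering component of such a framework can be illustrated by embedding candidate keywords and grouping them with DBSCAN; the encoder and clustering parameters below are stand-ins, not the paper's I-BERT framework.

```python
# Sketch: embed candidate keywords with a BERT-style encoder and density-cluster them.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import DBSCAN

keywords = ["knowledge management", "knowledge sharing", "supply chain",
            "supply chain finance", "digital transformation"]

encoder = SentenceTransformer("all-MiniLM-L6-v2")        # stand-in for the improved BERT encoder
embeddings = encoder.encode(keywords)                    # shape: (n_keywords, embedding_dim)

clusters = DBSCAN(eps=0.4, min_samples=2, metric="cosine").fit_predict(embeddings)
for keyword, cluster_id in zip(keywords, clusters):
    print(cluster_id, keyword)                           # -1 marks low-density noise points
```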

Exploiting Korean Language Model to Improve Korean Voice Phishing Detection (한국어 언어 모델을 활용한 보이스피싱 탐지 기능 개선)

  • Boussougou, Milandu Keith Moussavou; Park, Dong-Joo
    • KIPS Transactions on Software and Data Engineering / v.11 no.10 / pp.437-446 / 2022
  • Text classification, a Natural Language Processing (NLP) task, combined with state-of-the-art (SOTA) Machine Learning (ML) and Deep Learning (DL) algorithms as the core engine, is widely used to detect and classify voice phishing call transcripts. While numerous studies on the classification of voice phishing call transcripts have been conducted and have demonstrated good performance, the increase in non-face-to-face financial transactions means there is still a need for improvement using the latest NLP technologies. This paper benchmarks the Korean voice phishing detection performance of the pre-trained Korean language model KoBERT against multiple other SOTA algorithms, based on the classification of transcripts from the labeled Korean voice phishing dataset KorCCVi. The experiments reveal that the KoBERT model outperforms all other models on the test set, with a classification accuracy of 99.60%.