• Title/Summary/Keyword: BERT Model


Named entity recognition using transfer learning and small human- and meta-pseudo-labeled datasets

  • Kyoungman Bae;Joon-Ho Lim
    • ETRI Journal / v.46 no.1 / pp.59-70 / 2024
  • We introduce a high-performance named entity recognition (NER) model for written and spoken language. To overcome challenges related to labeled-data scarcity and domain shifts, we use transfer learning, leveraging our previously developed KorBERT as the base model. We also adopt a meta-pseudo-label method that uses a teacher/student framework with labeled and unlabeled data. Our model introduces two modifications. First, the student model is updated with an average loss computed over both human- and pseudo-labeled data. Second, the influence of noisy pseudo-labeled data is mitigated by considering feedback scores and updating the teacher model only when the score is below a threshold (0.0005). We achieve the target NER performance in the spoken-language domain and improve performance in the written-language domain by proposing a straightforward rollback method that reverts to the best model based on scarce human-labeled data. Further improvement is achieved by adjusting the label-vector weights in the named entity dictionary.
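
A minimal PyTorch-style sketch of the two modifications described in the abstract, assuming hypothetical teacher/student models, batches, and optimizers (none of this is the authors' code; only the 0.0005 threshold comes from the paper):

```python
# Threshold from the abstract; everything else here is illustrative.
FEEDBACK_THRESHOLD = 0.0005

def student_step(student, optimizer, human_batch, pseudo_batch, loss_fn):
    """Modification 1: update the student with the average of the
    human-labeled and pseudo-labeled losses."""
    loss_human = loss_fn(student(human_batch["x"]), human_batch["y"])
    loss_pseudo = loss_fn(student(pseudo_batch["x"]), pseudo_batch["y"])
    loss = (loss_human + loss_pseudo) / 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def maybe_update_teacher(teacher_opt, feedback_score, teacher_loss):
    """Modification 2: update the teacher only when the feedback score is
    below the threshold, limiting the influence of noisy pseudo-labels."""
    if feedback_score < FEEDBACK_THRESHOLD:
        teacher_opt.zero_grad()
        teacher_loss.backward()
        teacher_opt.step()
```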

Document Classification Methodology Using Autoencoder-based Keywords Embedding

  • Seobin Yoon;Namgyu Kim
    • Journal of the Korea Society of Computer and Information / v.28 no.9 / pp.35-46 / 2023
  • In this study, we propose a Dual Approach methodology that enhances the accuracy of document classifiers by utilizing both contextual and keyword information. First, contextual information is extracted using Google's BERT, a pre-trained language model known for its outstanding performance in various natural language understanding tasks. Specifically, we employ KoBERT, a model pre-trained on a Korean corpus, and extract contextual information in the form of the [CLS] token representation. Second, keyword information is generated for each document by encoding its set of keywords into a single vector using an autoencoder. We applied the proposed approach to 40,130 documents related to healthcare and medicine from the National R&D Projects database of the National Science and Technology Information Service (NTIS). The experimental results demonstrate that, in terms of classification accuracy, the proposed methodology outperforms existing methods that rely solely on document or word information.
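
A sketch of the dual-approach idea: a [CLS] context vector from a Korean BERT is concatenated with an autoencoder-compressed keyword vector before classification. The model name (klue/bert-base, a stand-in for KoBERT, whose tokenizer needs an extra package), vocabulary size, and dimensions are all assumptions:

```python
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("klue/bert-base")
bert = AutoModel.from_pretrained("klue/bert-base")

class KeywordAutoencoder(nn.Module):
    def __init__(self, vocab_size=10000, hidden=128):
        super().__init__()
        self.encoder = nn.Linear(vocab_size, hidden)  # keyword set -> vector
        self.decoder = nn.Linear(hidden, vocab_size)  # reconstruction target

    def forward(self, bow):
        z = torch.relu(self.encoder(bow))
        return self.decoder(z), z

class DualClassifier(nn.Module):
    def __init__(self, ae, n_classes, hidden=128):
        super().__init__()
        self.ae = ae
        self.fc = nn.Linear(bert.config.hidden_size + hidden, n_classes)

    def forward(self, input_ids, attention_mask, keyword_bow):
        # Contextual view: the [CLS] vector of the document text.
        cls = bert(input_ids=input_ids,
                   attention_mask=attention_mask).last_hidden_state[:, 0]
        # Keyword view: the autoencoder's bottleneck vector.
        _, z = self.ae(keyword_bow)
        return self.fc(torch.cat([cls, z], dim=-1))
```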

Comparison and Analysis of Unsupervised Contrastive Learning Approaches for Korean Sentence Representations (한국어 문장 표현을 위한 비지도 대조 학습 방법론의 비교 및 분석)

  • Young Hyun Yoo;Kyumin Lee;Minjin Jeon;Jii Cha;Kangsan Kim;Taeuk Kim
    • Annual Conference on Human and Language Technology / 2022.10a / pp.360-365 / 2022
  • Sentence representations are among the most useful tools for solving diverse problems and building applications in natural language processing. However, sentence representations derived from the now widely adopted pre-trained language models are known to underperform on tasks such as semantic textual similarity (STS) because of inherent properties such as pronounced anisotropy. To address this problem, applying contrastive learning to pre-trained language models has been actively studied in the literature, and unsupervised contrastive learning methods that exploit unlabeled data have drawn particular attention. Most existing work, however, has focused on improving English sentence representations, and research on Korean sentence representations remains relatively scarce. In this paper, we therefore apply representative unsupervised contrastive learning methods (ConSERT, SimCSE) to various Korean pre-trained language models (KoBERT, KR-BERT, KLUE-BERT) and evaluate them on sentence-similarity tasks (KorSTS, KLUE-STS). The results show that Korean generally follows trends similar to those observed for English, and we additionally make the following observations. First, with both unsupervised contrastive learning methods, KLUE-BERT was more stable and performed better than KoBERT and KR-BERT. Second, among the data augmentation methods introduced in ConSERT, token shuffling generally performed best. Third, both unsupervised contrastive learning methods overfit the KLUE-STS training data used for validation. In conclusion, this study verifies that Korean sentence representations, like English ones, can be improved through unsupervised contrastive learning, and we hope these results serve as a foundation for future research on Korean sentence representations.
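
For orientation, a minimal sketch of unsupervised SimCSE, one of the two methods compared above: the same sentences are encoded twice so that dropout yields two slightly different views, which form positive pairs against in-batch negatives. The checkpoint name is a stand-in, not necessarily one the paper used:

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("klue/bert-base")
model = AutoModel.from_pretrained("klue/bert-base")
model.train()  # keep dropout active; dropout noise is the augmentation

def simcse_loss(sentences, temperature=0.05):
    batch = tokenizer(sentences, padding=True, truncation=True,
                      return_tensors="pt")
    z1 = model(**batch).last_hidden_state[:, 0]  # first dropout view
    z2 = model(**batch).last_hidden_state[:, 0]  # second dropout view
    # Pairwise cosine similarities; matching pairs sit on the diagonal.
    sim = F.cosine_similarity(z1.unsqueeze(1), z2.unsqueeze(0), dim=-1)
    labels = torch.arange(sim.size(0))
    return F.cross_entropy(sim / temperature, labels)
```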


Korean Dependency Parsing Using Various Ensemble Models (다양한 앙상블 알고리즘을 이용한 한국어 의존 구문 분석)

  • Jo, Gyeong-Cheol;Kim, Ju-Wan;Kim, Gyun-Yeop;Park, Seong-Jin;Gang, Sang-U
    • Annual Conference on Human and Language Technology / 2019.10a / pp.543-545 / 2019
  • This paper combines state-of-the-art Korean dependency parsing models with various ensemble models and analyzes their performance. For word representations, we use pre-trained word embedding models, ELMo (Embeddings from Language Models), BERT (Bidirectional Encoder Representations from Transformers), and various additional features. The dependency parsing models used are the Stack-Pointer Network model, the Deep Biaffine Attention parser, and the Left-to-Right Pointer parser. Finally, we combine the outputs of each model using the ensemble methods bagging and XGBoost (Extreme Gradient Boosting) and propose an optimal model.
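
One plausible way to ensemble parser outputs with XGBoost, sketched under assumptions (the paper does not spell out its exact scheme): for each token, use the head indices predicted by the three base parsers as features and train a meta-classifier to decide which parser to trust:

```python
import numpy as np
from xgboost import XGBClassifier

# Toy data, purely illustrative:
# parser_heads[i] = head indices predicted for token i by the 3 parsers;
# best_parser[i]  = which parser matched the gold head for token i.
parser_heads = np.array([[2, 2, 0], [0, 1, 0], [3, 3, 3]])
best_parser = np.array([0, 2, 1])

meta = XGBClassifier(n_estimators=100, max_depth=3)
meta.fit(parser_heads, best_parser)

def ensemble_heads(heads_per_parser):
    """Pick each token's head from the parser the meta-model trusts."""
    choice = meta.predict(heads_per_parser)
    return heads_per_parser[np.arange(len(choice)), choice]
```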


A Study on Automatic Classification of Subject Headings Using BERT Model (BERT 모형을 이용한 주제명 자동 분류 연구)

  • Yong-Gu Lee
    • Journal of the Korean Society for Library and Information Science / v.57 no.2 / pp.435-452 / 2023
  • This study experimented with the automatic classification of subject headings using a BERT-based transfer learning model and analyzed its performance. Classification performance was analyzed according to the main classes of the KDC classification and the category types of the subject headings. Six datasets were constructed from Korean national bibliographies based on the assignment frequency of subject headings, and titles were used as the classification feature. On the dataset containing 3,506 subject headings (1,539,076 records), the model achieved micro F1 and macro F1 scores of 0.6059 and 0.5626, respectively. By KDC main class, performance was good for General works, Natural sciences, Technology, and Language, and low for Religion and Arts. By subject-heading category type, the plant, legal-name, and product-name categories showed high performance, whereas the national treasure/treasure category showed low performance. In larger datasets, the proportion of subject headings that cannot be assigned increases, lowering final performance, so improvements are needed to raise classification performance for low-frequency subject headings.
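
A sketch of the basic setup, assuming a generic Korean BERT checkpoint (the paper does not name one here); the label count of 3,506 headings is taken from the abstract:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("klue/bert-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "klue/bert-base", num_labels=3506)  # one class per subject heading

titles = ["한국 현대 소설의 이해"]  # the classification feature: title text only
batch = tokenizer(titles, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**batch).logits
predicted_heading = logits.argmax(dim=-1)  # index into the heading vocabulary
```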

Detecting Common Weakness Enumeration (CWE) Based on Transfer Learning of the CodeBERT Model (CodeBERT 모델의 전이 학습 기반 코드 공통 취약점 탐색)

  • Chansol Park;So Young Moon;R. Young Chul Kim
    • KIPS Transactions on Software and Data Engineering / v.12 no.10 / pp.431-436 / 2023
  • Recently, the incorporation of artificial intelligence approaches into software engineering has become a major topic. Research worldwide is actively pursued in two directions: 1) software engineering for artificial intelligence and 2) artificial intelligence for software engineering. We attempt to apply artificial intelligence to software engineering to identify and refactor bad-code module areas. To learn the patterns of bad-code elements well, this task requires large datasets in which bad-code elements are labeled correctly. At present, datasets for learning are insufficient, and the accuracy of the datasets we collected cannot be guaranteed. To address this, when collecting code data, we collect bad-code data only from code module areas with high complexity, rather than from the entire code. We propose a method for detecting common weakness enumerations (CWEs) by training on the collected dataset through transfer learning of the CodeBERT model, so that CodeBERT learns more about common weakness patterns in code. With this approach, we expect to identify common weakness patterns more accurately than traditional software engineering methods do.
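
A minimal sketch of the transfer-learning step: load the pretrained CodeBERT encoder with a fresh classification head over CWE categories and fine-tune on (code snippet, CWE label) pairs. The label count and snippet are illustrative assumptions:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/codebert-base", num_labels=25)  # assumed number of CWE classes

snippet = "char buf[8]; strcpy(buf, user_input);"  # classic overflow pattern
batch = tokenizer(snippet, truncation=True, return_tensors="pt")
with torch.no_grad():
    cwe_logits = model(**batch).logits
print(cwe_logits.argmax(dim=-1))  # predicted CWE class index
```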

Sentiment Analysis and Data Visualization of U.S. Public Companies' Disclosures using BERT (BERT를 활용한 미국 기업 공시에 대한 감성 분석 및 시각화)

  • Kim, Hyo Gon;Yoo, Dong Hee
    • The Journal of Information Systems / v.31 no.3 / pp.67-87 / 2022
  • Purpose This study quantified companies' views on the COVID-19 pandemic through sentiment analysis of U.S. public companies' disclosures. It aims to provide timely insights to shareholders, investors, and consumers by analyzing and visualizing sentiment changes over time as well as similarities and differences across industries. Design/methodology/approach From more than fifty thousand Form 10-K and Form 10-Q filings published between 2020 and 2021, we extracted over one million text passages related to the COVID-19 pandemic. Using FinBERT, a language model fine-tuned on the finance domain, we conducted sentiment analysis of the texts and classified them as positive, negative, or neutral. In addition, we illustrated the analysis results using various visualization techniques for easy understanding. Findings The analysis indicated that the overall sentiment of U.S. public companies changed over time as the COVID-19 pandemic progressed. Positive sentiment gradually increased and negative sentiment tended to decrease over time, but neutral sentiment showed no clear trend. When comparing sentiment by industry, the patterns of change in positive and negative sentiment over time were similar across industries, while differences among industries appeared in neutral sentiment.
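
A sketch of the sentiment step, using the public ProsusAI/finbert checkpoint as a stand-in for the authors' fine-tuned FinBERT; the disclosure sentences are invented for illustration:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="ProsusAI/finbert")

sentences = [
    "The COVID-19 pandemic materially disrupted our supply chain.",
    "Demand recovered strongly in the second half of the year.",
]
# FinBERT labels each passage positive, negative, or neutral with a score.
for s, result in zip(sentences, classifier(sentences)):
    print(result["label"], f"{result['score']:.2f}", "|", s)
```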

Comparative Study of Keyword Extraction Models in Biomedical Domain (생의학 분야 키워드 추출 모델에 대한 비교 연구)

  • Donghee Lee;Soonchan Kwon;Beakcheol Jang
    • Journal of Internet Computing and Services / v.24 no.4 / pp.77-84 / 2023
  • Given the growing volume of biomedical papers, the ability to extract keywords efficiently has become crucial for accessing and acting on important information in the literature. In this study, we conduct a comprehensive evaluation of unsupervised learning-based models and BERT-based models for keyword extraction in the biomedical field. Our experiments show that BioBERT, a model trained on biomedical-specific data, achieves the highest performance. By establishing a well-suited experimental framework and thoroughly comparing and analyzing diverse models, this study offers precise and dependable guidance for future research on biomedical keyword extraction. We also anticipate extending these contributions to other domains by providing comparative experiments and practical guidelines for effective keyword extraction.
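
A KeyBERT-style sketch of BERT-based keyword extraction, one common pattern in this space (not necessarily the paper's exact procedure): embed the document and candidate phrases with BioBERT, then rank candidates by cosine similarity to the document. The candidate list is assumed given:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("dmis-lab/biobert-v1.1")
model = AutoModel.from_pretrained("dmis-lab/biobert-v1.1")

def embed(texts):
    batch = tokenizer(texts, padding=True, truncation=True,
                      return_tensors="pt")
    with torch.no_grad():
        return model(**batch).last_hidden_state[:, 0]  # [CLS] vectors

doc = ("Aspirin irreversibly inhibits cyclooxygenase, "
       "reducing platelet aggregation.")
candidates = ["aspirin", "cyclooxygenase", "platelet aggregation", "study design"]
doc_vec, cand_vecs = embed([doc]), embed(candidates)
scores = torch.cosine_similarity(doc_vec, cand_vecs)
# Highest-similarity candidates are the extracted keywords.
print(sorted(zip(candidates, scores.tolist()), key=lambda t: -t[1])[:3])
```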

Dual-scale BERT using multi-trait representations for holistic and trait-specific essay grading

  • Minsoo Cho;Jin-Xia Huang;Oh-Woog Kwon
    • ETRI Journal / v.46 no.1 / pp.82-95 / 2024
  • As automated essay scoring (AES) has progressed from handcrafted techniques to deep learning, holistic scoring capabilities have emerged. However, specific trait assessment remains a challenge because earlier methods lacked depth in modeling dual assessments for holistic and multi-trait tasks. To overcome this challenge, we explore providing comprehensive feedback while modeling the interconnections between holistic and trait representations. We introduce the DualBERT-Trans-CNN model, which combines transformer-based representations with a novel dual-scale bidirectional encoder representations from transformers (BERT) encoding approach at the document level. By explicitly leveraging multi-trait representations in a multi-task learning (MTL) framework, DualBERT-Trans-CNN emphasizes the interrelation between holistic and trait-based score predictions, aiming for improved accuracy. For validation, we conducted extensive tests on the ASAP++ and TOEFL11 datasets. Against models in the same MTL setting, ours showed a 2.0% increase in holistic score. Additionally, compared with single-task learning (STL) models, ours demonstrated a 3.6% improvement in average multi-trait performance on the ASAP++ dataset.
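
A sketch of the underlying multi-task idea only (not the DualBERT-Trans-CNN architecture itself): a shared BERT encoder feeds one holistic-score head plus one head per trait, and both losses are optimized jointly. Model name and trait count are assumptions:

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class MultiTraitScorer(nn.Module):
    def __init__(self, n_traits=4, model_name="bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.holistic_head = nn.Linear(hidden, 1)
        self.trait_heads = nn.ModuleList(
            [nn.Linear(hidden, 1) for _ in range(n_traits)])

    def forward(self, input_ids, attention_mask):
        cls = self.encoder(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state[:, 0]
        holistic = self.holistic_head(cls).squeeze(-1)
        traits = torch.stack(
            [h(cls).squeeze(-1) for h in self.trait_heads], dim=-1)
        return holistic, traits

# MTL training then minimizes a joint objective, e.g.:
# loss = mse(holistic, y_holistic) + mse(traits, y_traits)
```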

Tourist Attraction Classification using Sentence Generation Model and Review Data (문장 생성 모델 학습 및 관광지 리뷰 데이터를 활용한 관광지 분류 기법)

  • Jun-Hyeong Moon;In-Whee Joe
    • Proceedings of the Korea Information Processing Society Conference / 2023.11a / pp.745-747 / 2023
  • Recommendation methods using artificial intelligence models are widely used in many fields. In this paper, to provide popular and accurate tourist-attraction recommendations, we trained a KoBERT model on synthetic review sentences produced by a generative model such as GPT-3. Trained on the generated data, KoBERT achieved a training accuracy of 0.98 and a test accuracy of 0.81, and we classified tourist attractions using real per-attraction review data.
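
A brief sketch of the idea: fine-tune a Korean BERT classifier on synthetic attraction reviews, then apply it to real reviews. The checkpoint (klue/bert-base as a stand-in for KoBERT), label count, and example reviews are all assumptions:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("klue/bert-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "klue/bert-base", num_labels=5)  # assumed number of attraction classes

synthetic_reviews = [
    ("야경이 아름답고 산책하기 좋은 곳이었어요.", 0),    # (generated review, label)
    ("전시가 알차서 아이들과 함께 보기 좋았습니다.", 1),
]
batch = tokenizer([r for r, _ in synthetic_reviews],
                  padding=True, truncation=True, return_tensors="pt")
# From here, a standard fine-tuning loop over (batch, labels) follows,
# after which the classifier is applied to real per-attraction reviews.
```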