• Title/Summary/Keyword: BERT Model


SARS-CoV-2 Variant Prediction Algorithm Using the Protein-Protein Interaction Model with BERT Mask-Filling (BERT Mask-Filling과 단백질-단백질 상호작용 모델을 이용한 SARS-CoV-2 변이 예측 알고리즘)

  • Kong, Hyunseung
    • Proceedings of the Korean Society of Computer Information Conference / 2021.07a / pp.283-284 / 2021
  • As vaccination with SARS-CoV-2 vaccines progresses, the end of the COVID-19 pandemic is anticipated. However, continuously emerging variant viruses remain a risk factor for ending the pandemic. In this paper, we propose a mutation prediction and analysis algorithm for the SARS-CoV-2 spike protein that uses a pretrained protein BERT model and a protein-protein interaction model. The proposed technique consists of predicting mutated protein sequences and applying natural selection based on the affinity between each mutated protein and the human ACE2 receptor. This makes it possible to simulate variant viruses that may appear over time, which is expected to contribute to addressing variant viruses.
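
The predict-then-select loop described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: `mask_fill_candidates` and `ace2_affinity` are hypothetical stand-ins for the protein-BERT mask-filling model and the protein-protein interaction (affinity) model.

```python
import random

def mask_fill_candidates(seq, pos, top_k=3):
    """Hypothetical stand-in for protein-BERT mask-filling:
    propose top-k amino acids for the masked position."""
    amino_acids = "ACDEFGHIKLMNPQRSTVWY"
    return random.sample(amino_acids, top_k)

def ace2_affinity(seq):
    """Hypothetical stand-in for the protein-protein interaction
    model scoring spike/ACE2 binding affinity (placeholder score)."""
    return (sum(ord(c) for c in seq) % 100) / 100

def simulate_variants(seq, generations=5, keep=2):
    population = [seq]
    for _ in range(generations):
        candidates = []
        for s in population:
            pos = random.randrange(len(s))           # mask one residue
            for aa in mask_fill_candidates(s, pos):  # BERT proposals
                candidates.append(s[:pos] + aa + s[pos + 1:])
        # natural selection: keep the variants with the highest affinity
        population = sorted(candidates, key=ace2_affinity, reverse=True)[:keep]
    return population

variants = simulate_variants("MFVFLVLLPLVSSQ")
```

Each generation masks one residue per surviving sequence, fills it with model proposals, and retains only the highest-affinity variants, mirroring the predict/select structure the abstract describes.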


A Development of Sentiment Analysis Model for Pet Feed Products using BERT (BERT를 활용한 반려동물 사료제품의 감성분석 모델 개발)

  • Kim, Young Woong;Kang, Da Eun;Lee, Dong Kyu;Kim, Geonho;Yoon, Ji Seong;Kim, Geon Woo;Gil, Joon-Min
    • Proceedings of the Korea Information Processing Society Conference / 2022.11a / pp.609-611 / 2022
  • In this paper, we design and implement a sentiment analysis model for pet feed products based on KoBERT, a recent natural language processing model, to support personalized pet feed product recommendation. The implemented sentiment analysis model showed comparatively good performance in accuracy evaluation, yielding an accuracy of 0.93 or higher on new pet feed products not seen during training.
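
A classifier of this kind typically adds a small head on top of the encoder's pooled [CLS] representation. The sketch below shows only that head, with a random vector standing in for the KoBERT embedding of a review; it is an assumption about the architecture, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for KoBERT's pooled [CLS] embedding of one review (768-dim).
cls_embedding = rng.standard_normal(768)

# Binary sentiment head: one linear layer followed by a sigmoid.
W = rng.standard_normal((1, 768)) * 0.02
b = np.zeros(1)

def sentiment_score(embedding):
    """Probability that the review is positive."""
    logit = W @ embedding + b
    return 1.0 / (1.0 + np.exp(-logit))

score = float(sentiment_score(cls_embedding))
label = "positive" if score >= 0.5 else "negative"
```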

Design of Category Classification Model for Food Posts using KoBERT (KoBERT를 활용한 식품 게시글 카테고리 분류 모델의 설계)

  • Tae Min Hyeon;Hui Jin Kim;Eun Zi Lim;Joon-Min Gil
    • Proceedings of the Korea Information Processing Society Conference / 2023.05a / pp.572-573 / 2023
  • In this paper, we design and implement a category classification model for food sales posts based on KoBERT, a natural language processing model. The implemented category classification model yielded comparatively good performance in accuracy evaluation.

Implementation of Git's Commit Message Complex Classification Model for Software Maintenance

  • Choi, Ji-Hoon;Kim, Joon-Yong;Park, Seong-Hyun
    • Journal of the Korea Society of Computer and Information / v.27 no.11 / pp.131-138 / 2022
  • Git commit messages are closely tied to the project life cycle; by exploiting this property, they can contribute to cost reduction and improved work efficiency by revealing risk factors and the status of project operation activities. Many studies in this area classify commit messages into software maintenance types, with a maximum reported accuracy of 87%. In this paper, we design and implement a composite classification model that combines several models to raise the accuracy of previously published models and increase reliability. We constructed a dataset by automated labeling and extraction of source changes, and trained it with the DistilBERT model. Verification produced an F1 score of 95%, 8 percentage points higher than the 87% maximum reported in prior studies. These results are expected to increase model reliability and enable application to solutions such as software and project management.
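
The abstract does not spell out how the component models are combined; one common composite scheme is majority voting over several classifiers' predictions. The sketch below illustrates that scheme with hypothetical per-model outputs (the maintenance-type labels follow the usual corrective/adaptive/perfective taxonomy).

```python
from collections import Counter

# Hypothetical predictions from three commit-message classifiers
# (e.g. DistilBERT fine-tuned on message text, on source changes, and
# on both), one maintenance-type label per commit.
predictions = [
    ["corrective", "adaptive", "perfective"],    # model A
    ["corrective", "adaptive", "adaptive"],      # model B
    ["corrective", "perfective", "perfective"],  # model C
]

def majority_vote(per_model_preds):
    """Combine per-model predictions commit by commit."""
    combined = []
    for votes in zip(*per_model_preds):
        combined.append(Counter(votes).most_common(1)[0][0])
    return combined

labels = majority_vote(predictions)  # one final label per commit
```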

Simple and effective neural coreference resolution for Korean language

  • Park, Cheoneum;Lim, Joonho;Ryu, Jihee;Kim, Hyunki;Lee, Changki
    • ETRI Journal / v.43 no.6 / pp.1038-1048 / 2021
  • We propose an end-to-end neural coreference resolution for the Korean language that uses an attention mechanism to point to the same entity. Because Korean is a head-final language, we focused on a method that uses a pointer network based on the head. The key idea is to consider all nouns in the document as candidates based on the head-final characteristics of the Korean language and learn distributions over the referenced entity positions for each noun. Given the recent success of applications using bidirectional encoder representation from transformer (BERT) in natural language-processing tasks, we employed BERT in the proposed model to create word representations based on contextual information. The experimental results indicated that the proposed model achieved state-of-the-art performance in Korean language coreference resolution.
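
The pointer idea above, learning a distribution over candidate noun (head) positions for each mention, can be sketched as a single attention step over candidate representations. The vectors here are random stand-ins for BERT contextual embeddings, not trained parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 64

# Stand-ins for BERT contextual embeddings of the candidate head nouns
# in the document and of the mention to resolve.
candidates = rng.standard_normal((5, dim))  # 5 candidate antecedents
mention = rng.standard_normal(dim)

def point_to_antecedent(mention_vec, candidate_vecs):
    """Scaled dot-product attention over candidates; the argmax of the
    resulting distribution is the predicted antecedent position."""
    scores = candidate_vecs @ mention_vec / np.sqrt(dim)
    probs = np.exp(scores - scores.max())
    return probs / probs.sum()

probs = point_to_antecedent(mention, candidates)
antecedent = int(np.argmax(probs))
```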

Analyzing Effective Poll Prediction Model Using Social Media (SNS) Data Augmentation (소셜 미디어(SNS) 데이터 증강을 활용한 효과적인 여론조사 예측 모델 분석)

  • Hwang, Sunik;Oh, Hayoung
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.12 / pp.1800-1808 / 2022
  • During election periods, many polling agencies survey and publish approval ratings for each candidate. In the past, approval ratings could only be measured through opinion polls, but public opinion is now also expressed on the Internet, mobile SNS, and online communities. If the opinions expressed online are understood through natural language analysis, a candidate's approval rating can be estimated as accurately as an opinion poll. This paper therefore proposes a method of inferring candidate approval ratings during election periods by aggregating users' political comments from Internet community posts. To analyze approval ratings from these posts, we compare the KoBERT, KcBERT, and KoELECTRA models and propose a method for selecting the model with the highest correlation to actual opinion polls.

An Intelligent Chatbot Utilizing BERT Model and Knowledge Graph (BERT 모델과 지식 그래프를 활용한 지능형 챗봇)

  • Yoo, SoYeop;Jeong, OkRan
    • The Journal of Society for e-Business Studies / v.24 no.3 / pp.87-98 / 2019
  • As artificial intelligence is actively studied, it is being applied to fields such as image, video, and natural language processing. Natural language processing, which enables computers to understand the language people speak and write, is considered one of the most important areas of artificial intelligence. A complex but important problem in natural language processing is teaching computers human common sense and generating results based on it. Knowledge graphs, which link words by their relationships, make it easy for computers to learn common sense. However, existing knowledge graphs are built around specific languages and domains and cannot respond to neologisms. In this paper, we propose an intelligent chatbot system that collects and analyzes data in real time to build an automatically growing knowledge graph and uses it as base data. In particular, a BERT model fine-tuned for relation extraction is applied to the auto-growing graph to improve performance. We developed a chatbot that learns human common sense using the auto-growing knowledge graph, and we verified the graph's availability and performance.
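
The auto-growing graph can be sketched as triples extracted from incoming text and inserted as they arrive. Here a trivial "X is a Y" regex stands in for the fine-tuned BERT relation extractor; the graph representation (a dict of adjacency lists) is likewise an illustrative assumption.

```python
import re

graph = {}  # head entity -> list of (relation, tail entity)

def extract_triples(sentence):
    """Toy relation extractor: matches only 'X is a Y' (a stand-in
    for the BERT-based relation extraction model)."""
    m = re.match(r"(\w+) is a (\w+)", sentence)
    return [(m.group(1), "is_a", m.group(2))] if m else []

def grow(sentence):
    """Add every extracted triple to the knowledge graph."""
    for head, rel, tail in extract_triples(sentence):
        graph.setdefault(head, []).append((rel, tail))

# The graph grows as new text (including neologisms) streams in.
grow("BERT is a model")
grow("KoBERT is a model")
answer = graph["BERT"]  # the chatbot can now answer "what is BERT?"
```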

An Accurate Log Object Recognition Technique

  • Ju, Jiho;Tak, Byungchul
    • Journal of the Korea Society of Computer and Information / v.28 no.2 / pp.89-97 / 2023
  • In this paper, we identify the factors that make log analysis difficult and design a technique for detecting the various objects embedded in logs, which aids subsequent analysis. In today's IT systems, logs have become critical source data for many advanced AI analysis techniques. Although logs contain a wealth of useful information, their semi-structured nature makes it difficult to apply such techniques directly. The factors that interfere with log analysis are objects such as file paths, identifiers, and JSON documents. We designed a BERT-based object pattern recognition algorithm for these objects and performed object identification. The algorithm builds on object definitions, GROK patterns, and regular expressions. We find that simple pattern matching based on known patterns and regular expressions alone is ineffective; the proposed approach shows significantly better accuracy than patterns and regular expressions only, and the BERT model classifies objects with accuracy as high as 99%.
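
The regex/GROK-style baseline that the paper finds insufficient on its own can be sketched as follows; the patterns are simplified illustrations, not the paper's actual pattern set.

```python
import re

# Simplified GROK-style patterns for a few object types commonly
# embedded in log lines.
PATTERNS = {
    "file_path": re.compile(r"(?:/[\w.-]+)+"),
    "ipv4":      re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "hex_id":    re.compile(r"\b0x[0-9a-fA-F]+\b"),
}

def tag_objects(log_line):
    """Return (object_type, matched_text) pairs found in one log line."""
    found = []
    for name, pattern in PATTERNS.items():
        for m in pattern.finditer(log_line):
            found.append((name, m.group()))
    return found

line = "open /var/log/app.log failed from 10.0.0.5 handle 0x7f3a"
objects = tag_objects(line)
```

Fixed patterns like these break on ambiguous or novel identifier formats, which is why the paper layers a BERT-based recognizer on top of them.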

Structured Pruning for Efficient Transformer Model compression (효율적인 Transformer 모델 경량화를 위한 구조화된 프루닝)

  • Eunji Yoo;Youngjoo Lee
    • Transactions on Semiconductor Engineering / v.1 no.1 / pp.23-30 / 2023
  • With the recent development of generative AI by major IT companies, transformer models are growing exponentially beyond the trillion-parameter scale. To sustain these AI services, reducing model size is essential. In this paper, we find a hardware-friendly structured pruning pattern and propose a method of compressing transformer models. Because compression exploits the characteristics of the model's algorithm, model size can be reduced while performance is preserved as much as possible. Experiments show that when pruning the GPT-2 and BERT language models, the proposed structured pruning performs almost on par with fine-grained pruning even in highly sparse regions. The approach reduces model parameters by 80% and enables hardware acceleration in structured form with 0.003% accuracy loss compared to fine-grained pruning.
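
The difference between structured and fine-grained pruning is that structured pruning removes whole rows or columns, which hardware can skip efficiently. A minimal sketch of row-wise magnitude pruning at the 80% sparsity the abstract mentions (the L2-norm criterion here is a common choice, not necessarily the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))  # stand-in for a transformer weight matrix

def prune_rows(weights, sparsity):
    """Structured pruning: zero out entire rows (hardware-friendly),
    keeping the rows with the largest L2 norm."""
    n_rows = weights.shape[0]
    n_prune = int(n_rows * sparsity)
    norms = np.linalg.norm(weights, axis=1)
    prune_idx = np.argsort(norms)[:n_prune]  # weakest rows first
    pruned = weights.copy()
    pruned[prune_idx, :] = 0.0
    return pruned

W_pruned = prune_rows(W, sparsity=0.8)
zero_rows = int((np.abs(W_pruned).sum(axis=1) == 0).sum())
```

Fine-grained pruning would instead zero the smallest individual weights anywhere in `W`, which gives the same sparsity but an irregular pattern that is hard to accelerate.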

Efficient Emotion Classification Method Based on Multimodal Approach Using Limited Speech and Text Data (적은 양의 음성 및 텍스트 데이터를 활용한 멀티 모달 기반의 효율적인 감정 분류 기법)

  • Mirr Shin;Youhyun Shin
    • The Transactions of the Korea Information Processing Society / v.13 no.4 / pp.174-180 / 2024
  • In this paper, we explore an emotion classification method based on multimodal learning with the wav2vec 2.0 and KcELECTRA models. Multimodal learning that leverages both speech and text data is known to significantly improve emotion classification performance over methods that rely on speech data alone. To select the text processing model, we conduct a comparative analysis of BERT and its derivative models, known for their strong performance in natural language processing, for effective feature extraction from text data. The results confirm that the KcELECTRA model performs best on emotion classification tasks. Furthermore, experiments on datasets provided by AI-Hub show that including text data achieves better performance with less data than using speech data alone. The KcELECTRA model achieved the highest accuracy, 96.57%. This indicates that multimodal learning can offer meaningful performance improvements in complex natural language processing tasks such as emotion classification.
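
A multimodal pipeline of this kind commonly fuses the two encoders' outputs by concatenation before a classification head. The sketch below shows that late-fusion step with random vectors standing in for the wav2vec 2.0 and KcELECTRA embeddings; the fusion scheme and head are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for encoder outputs: a wav2vec 2.0 speech embedding and a
# KcELECTRA text embedding for the same utterance.
speech_emb = rng.standard_normal(768)
text_emb = rng.standard_normal(768)

def fuse_and_classify(speech, text, n_classes=5):
    """Late fusion: concatenate modality embeddings, then apply a
    linear emotion head (weights here are random stand-ins)."""
    fused = np.concatenate([speech, text])            # (1536,)
    W = rng.standard_normal((n_classes, fused.size)) * 0.01
    logits = W @ fused
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()                             # class probabilities

probs = fuse_and_classify(speech_emb, text_emb)
emotion = int(np.argmax(probs))
```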