• Title/Summary/Keyword: BERT Embedding Model


A Study on the Performance Analysis of Entity Name Recognition Techniques Using Korean Patent Literature

  • Gim, Jangwon
    • Journal of Advanced Information Technology and Convergence / v.10 no.2 / pp.139-151 / 2020
  • Entity name recognition is a part of information extraction that extracts entity names from documents and classifies the types of the extracted names. Entity name recognition technology is widely used in natural language processing applications such as information retrieval, machine translation, and question answering systems. Various deep learning-based models exist to improve entity name recognition performance, but studies comparing and analyzing these models on Korean data are insufficient. In this paper, we compare and analyze the performance of CRF, LSTM-CRF, BiLSTM-CRF, and BERT, which are actively used to recognize entity names, on Korean data. We also evaluate whether the embedding models widely used in recent natural language processing tasks affect the entity name recognition models' performance. Experiments on patent data and a Korean corpus confirmed that the BiLSTM-CRF model with FastText embeddings showed the highest performance.
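
The best configuration reported above, a BiLSTM-CRF over FastText embeddings, can be sketched roughly as follows; this is a minimal illustration (assuming the gensim and pytorch-crf packages, with a toy corpus and tag set), not the authors' code.

```python
# Minimal BiLSTM-CRF tagger over FastText embeddings (sketch, not the authors' code).
# Assumes the gensim and pytorch-crf packages; corpus and tag set are toy placeholders.
import torch
import torch.nn as nn
from gensim.models import FastText
from torchcrf import CRF

sentences = [["특허", "출원", "삼성전자"], ["엘지", "디스플레이", "공개"]]
tags      = [["O", "O", "B-ORG"],          ["B-ORG", "I-ORG", "O"]]
tag2id = {"O": 0, "B-ORG": 1, "I-ORG": 2}

# 1) Train (or load) FastText word vectors; subword n-grams help with Korean morphology.
ft = FastText(sentences=sentences, vector_size=100, min_count=1, epochs=10)

class BiLSTMCRF(nn.Module):
    def __init__(self, emb_dim=100, hidden=128, num_tags=3):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, emb, labels=None, mask=None):
        emissions = self.proj(self.lstm(emb)[0])
        if labels is not None:                          # training: negative log-likelihood
            return -self.crf(emissions, labels, mask=mask)
        return self.crf.decode(emissions, mask=mask)    # inference: best tag paths

model = BiLSTMCRF()
emb = torch.tensor([[ft.wv[w] for w in sentences[0]]])      # (1, seq_len, emb_dim)
labels = torch.tensor([[tag2id[t] for t in tags[0]]])
loss = model(emb, labels=labels)
loss.backward()
print(model(emb))                                           # decoded tag ids
```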

Hierarchical Automated Essay Evaluation Model Using Korean Sentence-Bert Embedding (한국어 Sentence-BERT 임베딩을 활용한 자동 쓰기 평가 계층적 구조 모델)

  • Minsoo Cho; Oh Woog Kwon; Young Kil Kim
    • Annual Conference on Human and Language Technology / 2022.10a / pp.526-530 / 2022
  • Automated essay scoring has attracted strong interest in education because it can reduce the time and cost of grading written answers. The purpose of this study is to evaluate written answers by effectively learning their document structure and to provide sentence-level feedback. To this end, each sentence is embedded at the sentence level with a Korean Sentence-BERT model, and the embedded sentences are modeled at the document level with an LSTM attention model. The model was evaluated on the 'Korean writing text-score range' dataset, and comparative evaluation against various KoBERT-based models demonstrated that the proposed method is effective.
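
A rough sketch of the hierarchical structure described in the abstract: sentence-level embeddings from a Korean Sentence-BERT model, pooled at the document level by an LSTM with attention. The sentence-transformers usage, the checkpoint placeholder, and the regression head are assumptions, not the authors' configuration.

```python
# Hierarchical essay scorer sketch: sentence-level SBERT embeddings -> document-level
# LSTM with attention pooling -> scalar score. Checkpoint id and dimensions are assumptions.
import torch
import torch.nn as nn
from sentence_transformers import SentenceTransformer

sbert = SentenceTransformer("korean-sbert-checkpoint")   # placeholder: use a real Korean SBERT id

class EssayScorer(nn.Module):
    def __init__(self, emb_dim=768, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)      # additive attention over sentences
        self.head = nn.Linear(2 * hidden, 1)      # regression head for the essay score

    def forward(self, sent_embs):                 # (batch, num_sentences, emb_dim)
        h, _ = self.lstm(sent_embs)
        weights = torch.softmax(self.attn(h), dim=1)
        doc = (weights * h).sum(dim=1)            # attention-weighted document vector
        return self.head(doc).squeeze(-1), weights.squeeze(-1)

essay = ["첫 번째 문장입니다.", "두 번째 문장입니다.", "마지막 문장입니다."]
sent_embs = torch.tensor(sbert.encode(essay)).unsqueeze(0)   # (1, 3, emb_dim)
score, sent_weights = EssayScorer()(sent_embs)
print(score, sent_weights)   # sentence weights can drive sentence-level feedback
```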


Supervised Learning for Sentence Embedding Model using BERT (BERT를 이용한 지도학습 기반 문장 임베딩 모델)

  • Choi, Gihyeon; Kim, Sihyung; Kim, Harksoo; Kim, Kwanwoo; An, Jaeyoung; Choi, Doojin
    • Annual Conference on Human and Language Technology / 2019.10a / pp.225-228 / 2019
  • Sentence embedding is the task of vectorizing a sentence so that its meaning is well represented. In natural language processing tasks that take sentence-level input, sentence embedding plays a very important role. Sentence embedding models trained on natural language inference, which infers the semantic relation between two sentences, have shown higher performance than existing unsupervised sentence embedding models. Therefore, this paper proposes a sentence-embedding-based natural language inference model that uses a pre-trained BERT model to improve sentence embedding performance. Natural language inference performance was used as the evaluation measure for sentence embedding, and in experiments on the SNLI (Stanford Natural Language Inference) corpus the proposed model achieved an accuracy of 0.8603.
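
The supervised setup described above, encoding each sentence with a pre-trained BERT and classifying the relation of a sentence pair, is sketched below in the common siamese style; the mean pooling and the [u; v; |u-v|] feature combination are typical choices assumed here, not necessarily the paper's exact ones.

```python
# Sentence-embedding NLI sketch: BERT encodes premise and hypothesis separately,
# mean-pooled vectors u, v are combined as [u; v; |u-v|] and classified into
# entailment / contradiction / neutral. The feature combination is an assumption.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

tok = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")

def embed(sentences):
    enc = tok(sentences, padding=True, truncation=True, return_tensors="pt")
    out = bert(**enc).last_hidden_state                 # (batch, seq, 768)
    mask = enc["attention_mask"].unsqueeze(-1)
    return (out * mask).sum(1) / mask.sum(1)            # mean pooling over tokens

classifier = nn.Linear(3 * 768, 3)                      # 3 NLI labels

premise = ["A man is playing a guitar on stage."]
hypothesis = ["A person is performing music."]
u, v = embed(premise), embed(hypothesis)
logits = classifier(torch.cat([u, v, (u - v).abs()], dim=-1))
print(logits.softmax(-1))     # trained with cross-entropy on SNLI in the paper
```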


A Deep Learning Model for Disaster Alerts Classification

  • Park, Soonwook; Jun, Hyeyoon; Kim, Yoonsoo; Lee, Soowon
    • Journal of the Korea Society of Computer and Information / v.26 no.12 / pp.1-9 / 2021
  • Disaster alerts are text messages sent by the government to people in an affected area when a disaster occurs. As the number of disaster alerts has grown, more people are blocking them because many of the alerts they receive are unnecessary. To solve this problem, this study proposes a deep learning model that automatically classifies disaster alerts by disaster type so that recipients receive only the alerts they need. The proposed model embeds disaster alerts with KoBERT and classifies them by disaster type with an LSTM. Classifying disaster alerts with three part-of-speech combinations ([Noun], [Noun + Adjective + Verb], and [All parts of speech]) and four classification models (the proposed model, keyword classification, Word2Vec + 1D-CNN, and KoBERT + FFNN), the proposed model achieved the highest performance with an accuracy of 0.988954.
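
A minimal sketch of the KoBERT-embedding + LSTM pipeline, assuming Hugging Face transformers; the checkpoint id is a placeholder and the label set and dimensions are illustrative, not the authors' configuration.

```python
# Disaster alert classifier sketch: KoBERT token embeddings -> LSTM -> disaster type.
# The checkpoint id and label set are placeholders.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "kobert-checkpoint"            # placeholder: use a real KoBERT checkpoint id
tok = AutoTokenizer.from_pretrained(MODEL_ID)
kobert = AutoModel.from_pretrained(MODEL_ID)

labels = ["호우", "태풍", "지진", "감염병"]   # example disaster types (assumed)

class AlertClassifier(nn.Module):
    def __init__(self, hidden=256, num_labels=len(labels)):
        super().__init__()
        self.lstm = nn.LSTM(kobert.config.hidden_size, hidden, batch_first=True)
        self.out = nn.Linear(hidden, num_labels)

    def forward(self, **enc):
        with torch.no_grad():                   # KoBERT used here as an embedder (sketch)
            emb = kobert(**enc).last_hidden_state
        _, (h, _) = self.lstm(emb)
        return self.out(h[-1])

enc = tok(["오늘 14시 호우경보 발령, 하천변 접근 자제 바랍니다."],
          return_tensors="pt", padding=True, truncation=True)
print(AlertClassifier()(**enc).softmax(-1))
```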

Weibo Disaster Rumor Recognition Method Based on Adversarial Training and Stacked Structure

  • Diao, Lei; Tang, Zhan; Guo, Xuchao; Bai, Zhao; Lu, Shuhan; Li, Lin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.10 / pp.3211-3229 / 2022
  • To address the problems in Weibo disaster rumor recognition, such as the lack of corpora, poorly standardized text, difficulty in learning semantic information, and the simple semantic features of disaster rumor text, this paper takes Sina Weibo as the data source, constructs a dataset for Weibo disaster rumor recognition, and proposes a deep learning model, BERT_AT_Stacked LSTM, for the task. First, adversarial perturbations are added to the embedding vector of each word to generate adversarial samples that enrich the features of rumor text, and adversarial training is carried out to address the relatively simple text features of disaster rumors. Second, the BERT component obtains word-level semantic information for each Weibo text and generates a hidden vector containing sentence-level feature information. Finally, a Stacked Long Short-Term Memory (Stacked LSTM) structure learns the hidden, complex semantic information of poorly standardized Weibo texts. The experimental results show that, compared with the baseline models, the proposed model is better at recognizing disaster rumors on Weibo, with an F1_Score of 97.48%; on an open general-domain dataset it achieved an F1_Score of 94.59%, indicating good generalization.
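
The adversarial-training step, perturbing word embedding vectors to create adversarial samples, is commonly implemented as an FGM-style update on the embedding matrix; the sketch below shows only that step for a generic BERT classifier and is an assumption about the details, not the paper's BERT_AT_Stacked LSTM implementation.

```python
# FGM-style adversarial training sketch: perturb the word-embedding weights along the
# gradient direction, compute a second (adversarial) loss, then restore the weights.
# A generic BERT classifier stands in for the paper's BERT_AT_Stacked LSTM model.
import torch
from transformers import BertForSequenceClassification, BertTokenizer

tok = BertTokenizer.from_pretrained("bert-base-chinese")     # Weibo text is Chinese
model = BertForSequenceClassification.from_pretrained("bert-base-chinese", num_labels=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
epsilon = 1.0

batch = tok(["地震谣言示例文本"], return_tensors="pt")
labels = torch.tensor([1])                                   # 1 = rumor (toy label)

emb = model.get_input_embeddings().weight                    # shared word-embedding matrix

loss = model(**batch, labels=labels).loss                    # 1) clean forward/backward
loss.backward()

grad = emb.grad.detach()
backup = emb.data.clone()
norm = grad.norm()
if norm > 0:                                                 # 2) add r_adv = eps * g / ||g||
    emb.data.add_(epsilon * grad / norm)

adv_loss = model(**batch, labels=labels).loss                # 3) adversarial forward/backward
adv_loss.backward()

emb.data.copy_(backup)                                       # 4) restore embeddings, then step
optimizer.step()
optimizer.zero_grad()
```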

Korean Head-Tail Tokenization and Part-of-Speech Tagging by using Deep Learning (딥러닝을 이용한 한국어 Head-Tail 토큰화 기법과 품사 태깅)

  • Kim, Jungmin; Kang, Seungshik; Kim, Hyeokman
    • IEMEK Journal of Embedded Systems and Applications / v.17 no.4 / pp.199-208 / 2022
  • Korean is an agglutinative language in which one or more morphemes combine to form a single word. Part-of-speech tagging separates each morpheme from a word and attaches a part-of-speech tag. In this study, we propose a new Korean part-of-speech tagging method based on Head-Tail tokenization, which divides a word into a lexical morpheme part and a grammatical morpheme part without decomposing compound words. In this method, the head and tail are divided at a syllable boundary without restoring irregularly deformed or abbreviated syllables. A Korean part-of-speech tagger was implemented using Head-Tail tokenization and deep learning. To address the problem that segmentation produces a large number of complex tags and lowers tagging accuracy, we reduced the tag set to complex tags composed of coarse-grained tags, which improved tagging accuracy. The performance of the Head-Tail part-of-speech tagger was evaluated with BERT, syllable bigram, and subword bigram embeddings, and both the syllable bigram and subword bigram embeddings improved performance over plain BERT. Part-of-speech tagging performed by integrating the Head-Tail tokenization model and the simplified part-of-speech tagging model achieved 98.99% word-unit accuracy and 99.08% token-unit accuracy. The experiments also showed that tagging performance improved when the maximum token length was limited to twice the number of words.
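
A toy illustration of the Head-Tail output format: each eojeol is split at a syllable boundary into a lexical head and a grammatical tail. The split positions below are hand-given, whereas the paper predicts them with a deep learning model.

```python
# Toy illustration of Head-Tail tokenization: each word (eojeol) is split at a syllable
# boundary into a lexical head and a grammatical tail, without restoring irregular or
# abbreviated forms. Split indices are hand-given; the paper predicts them neurally.
def head_tail_split(eojeol: str, split: int):
    head, tail = eojeol[:split], eojeol[split:]
    tokens = [(head, "HEAD")]
    if tail:                       # a word may have no grammatical tail
        tokens.append((tail, "TAIL"))
    return tokens

sentence = [("학교에서", 2), ("공부를", 2), ("했다", 1)]   # (eojeol, split index) pairs
for eojeol, split in sentence:
    print(head_tail_split(eojeol, split))
# [('학교', 'HEAD'), ('에서', 'TAIL')]  -> the head gets a lexical tag, the tail a grammatical one
```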

A BERT-Based Deep Learning Approach for Vulnerability Detection (BERT를 이용한 딥러닝 기반 소스코드 취약점 탐지 방법 연구)

  • Jin, Wenhui; Oh, Heekuck
    • Journal of the Korea Institute of Information Security & Cryptology / v.32 no.6 / pp.1139-1150 / 2022
  • With the rapid development of the software industry, software is everywhere in our daily lives. The number of vulnerabilities is also increasing with the large amount of newly developed code. Vulnerabilities can be exploited by hackers, resulting in privacy disclosure and threats to the safety of property and life. In particular, given the rapidly growing amount of code, manual analysis by experts is no longer sufficient. Machine learning has shown high performance in identification and classification tasks, and vulnerability detection is also well suited to it; as a result, many studies have tried RNN-based models for vulnerability detection. However, RNN models have the limitation that, as the code gets longer, the earlier parts cannot be learned well. In this paper, we propose a novel method that applies BERT to vulnerability detection. The accuracy was 97.5%, an increase of 1.5%, and the efficiency also increased by 69% compared with VulDeePecker.
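
A minimal fine-tuning sketch for the binary vulnerable / not-vulnerable decision, assuming Hugging Face transformers and a generic BERT checkpoint; the paper's code preprocessing and its exact checkpoint are not reproduced here.

```python
# Vulnerability detection sketch: a code snippet is tokenized as text and classified
# as vulnerable (1) or safe (0) with a fine-tuned BERT. The checkpoint and preprocessing
# are generic placeholders, not the paper's setup.
import torch
from transformers import BertForSequenceClassification, BertTokenizer

tok = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

snippet = "char buf[8]; strcpy(buf, user_input);"        # toy vulnerable gadget
batch = tok([snippet], return_tensors="pt", truncation=True, max_length=256)
labels = torch.tensor([1])

loss = model(**batch, labels=labels).loss                # one fine-tuning step
loss.backward()
optimizer.step()
print(model(**batch).logits.softmax(-1))
```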

Method of Extracting the Topic Sentence Considering Sentence Importance based on ELMo Embedding (ELMo 임베딩 기반 문장 중요도를 고려한 중심 문장 추출 방법)

  • Kim, Eun Hee; Lim, Myung Jin; Shin, Ju Hyun
    • Smart Media Journal / v.10 no.1 / pp.39-46 / 2021
  • This study concerns a method of extracting a summary from a news article by considering the importance of each sentence in the article. We propose calculating sentence importance from features that affect it: the topic-sentence probability, the similarity to the article title and to the other sentences, and the sentence position. We hypothesize that a topic sentence has characteristics that distinguish it from general sentences, and train a deep learning-based classification model to obtain a topic-sentence probability for each input sentence. In addition, using a pre-trained ELMo language model, the similarity between sentences is computed from sentence vectors that reflect contextual information and is extracted as a sentence feature. The topic-sentence classification performance of the LSTM and BERT models was 93% accuracy, 96.22% recall, and 89.5% precision. Combining the extracted sentence features to compute each sentence's importance improved topic-sentence extraction performance by about 10% over the existing TextRank algorithm.
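
The importance score combines a topic-sentence probability, similarity to the title and to the other sentences from contextual sentence vectors, and a position feature; a sketch of that combination follows. The encode and topic_prob functions and the weights are placeholders for the paper's ELMo embedder and trained classifier.

```python
# Sentence-importance sketch: combine (a) topic-sentence probability, (b) cosine similarity
# to the title and to the other sentences, and (c) a position feature. encode() and
# topic_prob() stand in for the paper's ELMo embedder and trained classifier;
# the weights w are illustrative.
import numpy as np

def encode(sentence: str) -> np.ndarray:
    # Placeholder for an ELMo (or other contextual) sentence vector.
    rng = np.random.default_rng(abs(hash(sentence)) % (2**32))
    return rng.standard_normal(256)

def topic_prob(sentence: str) -> float:
    return 0.5        # placeholder for the LSTM/BERT topic-sentence classifier output

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def importance(sentences, title, w=(0.4, 0.3, 0.2, 0.1)):
    vecs, tvec, scores = [encode(s) for s in sentences], encode(title), []
    for i, s in enumerate(sentences):
        title_sim = cos(vecs[i], tvec)
        other_sim = np.mean([cos(vecs[i], v) for j, v in enumerate(vecs) if j != i])
        position = 1.0 - i / max(len(sentences) - 1, 1)     # earlier sentences score higher
        scores.append(w[0] * topic_prob(s) + w[1] * title_sim
                      + w[2] * other_sim + w[3] * position)
    return scores

article = ["문장 1입니다.", "문장 2입니다.", "문장 3입니다."]
print(importance(article, "기사 제목"))     # highest score -> extracted topic sentence
```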

A Study on Efficient Natural Language Processing Method based on Transformer (트랜스포머 기반 효율적인 자연어 처리 방안 연구)

  • Seung-Cheol Lim; Sung-Gu Youn
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.23 no.4 / pp.115-119 / 2023
  • The natural language processing models used in current artificial intelligence are huge, which causes various difficulties in processing and analyzing data in real time. To address these difficulties, we propose a method that improves processing efficiency by using less memory, and we measure the performance of the proposed model. The technique evaluated in this paper shrinks the BERT [1] model by reducing its number of attention heads and its embedding size, splits the large corpus into segments, and averages the output of each forward pass. In this process, a random offset was applied to the sentences at every epoch to provide diversity in the input data. The model was then fine-tuned for classification. We found that the split-processing model was about 12% less accurate than the unsplit model, while the number of parameters was reduced by 56%.
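
A sketch of the split-processing idea under stated assumptions: a BERT configuration with reduced embedding size and attention heads, the input split into segments whose outputs are averaged, and a random offset applied to the sentences each epoch. The sizes, the segment length, and the offset scheme are illustrative.

```python
# Split-processing sketch: a small BERT (reduced hidden size / attention heads, built from
# a fresh config here) scores each segment of a long input and the segment outputs are
# averaged; a random offset shifts the segmentation each epoch. Sizes are illustrative.
import random
import torch
from transformers import BertConfig, BertForSequenceClassification, BertTokenizer

tok = BertTokenizer.from_pretrained("bert-base-uncased")
config = BertConfig(hidden_size=256, num_attention_heads=4,       # reduced model
                    num_hidden_layers=4, intermediate_size=512, num_labels=2)
model = BertForSequenceClassification(config)

def classify_long_text(sentences, seg_len=4, offset=0):
    # Shift the segmentation start by `offset` sentences, then split into segments.
    shifted = sentences[offset:] + sentences[:offset]
    segments = [shifted[i:i + seg_len] for i in range(0, len(shifted), seg_len)]
    logits = []
    for seg in segments:
        enc = tok(" ".join(seg), return_tensors="pt", truncation=True, max_length=128)
        logits.append(model(**enc).logits)
    return torch.stack(logits).mean(dim=0)       # average the per-segment outputs

doc = [f"sentence {i} of a long document." for i in range(10)]
for epoch in range(2):
    offset = random.randrange(len(doc))          # random offset per epoch for input diversity
    print(classify_long_text(doc, offset=offset).softmax(-1))
```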

A Protein-Protein Interaction Extraction Approach Based on Large Pre-trained Language Model and Adversarial Training

  • Tang, Zhan; Guo, Xuchao; Bai, Zhao; Diao, Lei; Lu, Shuhan; Li, Lin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.3 / pp.771-791 / 2022
  • Protein-protein interaction (PPI) extraction from raw text is important for revealing the molecular mechanisms of biological processes. With the rapid growth of the biomedical literature, manually extracting PPIs has become increasingly time-consuming and laborious, so automatic PPI extraction from the literature with natural language processing has attracted the attention of many researchers. We propose a PPI extraction model based on a large pre-trained language model and adversarial training. It enhances the learning of semantic and syntactic features using BioBERT pre-trained weights, which are built on large-scale domain corpora, and adversarial perturbations are applied to the embedding layer to improve the robustness of the model. Experimental results showed that, compared with previous methods, the proposed model achieved the highest F1 scores (83.93% and 90.31%) on the two corpora with large sample sizes, AIMed and BioInfer, and comparable performance on the three corpora with small sample sizes, HPRD50, IEPA, and LLL.
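
The embedding-layer perturbation can also be applied directly through inputs_embeds, as in the sketch below; the BioBERT checkpoint id, the entity masking, and epsilon are assumptions rather than the paper's exact setup.

```python
# PPI extraction sketch: perturb the input embeddings directly (via inputs_embeds)
# and train on the clean + adversarial loss. Checkpoint id, entity markers, and epsilon
# are assumptions, not the paper's exact configuration.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_ID = "dmis-lab/biobert-base-cased-v1.1"      # a public BioBERT checkpoint (assumed)
tok = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID, num_labels=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
epsilon = 0.5

text = "PROT1 binds to PROT2 and regulates its activity."   # entities pre-masked (assumed)
enc = tok(text, return_tensors="pt")
labels = torch.tensor([1])                                   # 1 = interaction

embeds = model.get_input_embeddings()(enc["input_ids"]).detach().requires_grad_(True)
loss = model(inputs_embeds=embeds, attention_mask=enc["attention_mask"], labels=labels).loss
loss.backward()                                              # clean pass

r_adv = epsilon * embeds.grad / (embeds.grad.norm() + 1e-12)  # perturbation on the embeddings
adv_loss = model(inputs_embeds=(embeds + r_adv).detach(),
                 attention_mask=enc["attention_mask"], labels=labels).loss
adv_loss.backward()                                          # adversarial pass
optimizer.step()
optimizer.zero_grad()
```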