• Title/Summary/Keyword: BERT model


KorPatELECTRA: A Pre-trained Language Model for Korean Patent Literature to Improve Performance in the Field of Natural Language Processing (Korean Patent ELECTRA)

  • Jang, Ji-Mo;Min, Jae-Ok;Noh, Han-Sung
    • Journal of the Korea Society of Computer and Information / v.27 no.2 / pp.15-23 / 2022
  • In the patent field, NLP (Natural Language Processing) is a challenging task because of the linguistic specificity of patent literature, so a language model optimized for Korean patent documents is urgently needed. Recently, there have been continuous attempts to build pre-trained language models for specific domains to improve performance on downstream tasks. Among them, ELECTRA is a pre-trained language model released by Google after BERT that improves training efficiency with a new objective called RTD (Replaced Token Detection). This paper proposes KorPatELECTRA, pre-trained on a large corpus of Korean patent literature. Optimal pre-training was conducted by preprocessing the training corpus according to the characteristics of patent documents and applying a patent-specific vocabulary and tokenizer. To confirm its performance, KorPatELECTRA was evaluated on NER (Named Entity Recognition), MRC (Machine Reading Comprehension), and patent classification tasks using real patent data, and it achieved the best performance on all three tasks compared to general-purpose language models.
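The RTD objective mentioned in this abstract trains a discriminator to decide, for each position, whether a token was replaced by a small generator network. A minimal sketch of how the per-token labels are derived (token lists are hypothetical, not from the paper):

```python
def rtd_labels(original_tokens, corrupted_tokens):
    """For ELECTRA-style Replaced Token Detection, the discriminator
    predicts per position whether the token was replaced (1) or kept (0)."""
    return [int(o != c) for o, c in zip(original_tokens, corrupted_tokens)]

original  = ["the", "patent", "claims", "a", "novel", "method"]
corrupted = ["the", "patent", "describes", "a", "new", "method"]  # generator output
print(rtd_labels(original, corrupted))  # [0, 0, 1, 0, 1, 0]
```

Because every position yields a training signal (not just the ~15% masked positions of BERT's MLM objective), this setup is more sample-efficient, which is the efficiency gain the abstract refers to.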

Text summarization of dialogue based on BERT

  • Nam, Wongyung;Lee, Jisoo;Jang, Beakcheol
    • Journal of the Korea Society of Computer and Information / v.27 no.8 / pp.41-47 / 2022
  • In this paper, we propose a way to summarize colloquial data that is not clearly organized. The SAMSum dataset, which consists of colloquial dialogue, was used, and the BERTSumExtAbs model proposed in previous work on automatic summarization was applied. More than 70% of the SAMSum dataset consists of conversations between two people, and the remaining 30% of conversations among three or more people. By applying the automatic text summarization model to this colloquial data, a ROUGE-1 score of 42.43 or higher was obtained. In addition, fine-tuning the previously proposed BERTSum model yielded a higher score of 45.81. This study demonstrates the feasibility of abstractive summarization of colloquial text, and we hope it serves as groundwork toward computers understanding human natural language as it is.
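The ROUGE-1 score this abstract reports measures unigram overlap between a candidate summary and a reference. A minimal sketch of the computation (whitespace tokenization is a simplification; real evaluations use stemming and proper tokenizers):

```python
from collections import Counter

def rouge1(candidate, reference):
    """ROUGE-1: unigram overlap, with candidate counts clipped by
    reference counts, reported as precision/recall/F1."""
    cand, ref = Counter(candidate.split()), Counter(reference.split())
    overlap = sum(min(n, ref[w]) for w, n in cand.items())
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    f1 = 2 * precision * recall / (precision + recall) if overlap else 0.0
    return {"p": precision, "r": recall, "f1": f1}

scores = rouge1("amanda baked cookies for jerry",
                "amanda baked cookies and will bring jerry some")
print(round(scores["r"], 3))  # 0.5
```

Reported "ROUGE-1" figures such as the 42.43 above are conventionally the F1 value times 100.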

Study Comparing the Performance of Linear and Non-linear Models in Recommendation Systems (추천 시스템에서의 선형 모델과 비선형 모델의 성능 비교 연구)

  • Da-Hun Seong;Yujin Lim
    • The Transactions of the Korea Information Processing Society / v.13 no.8 / pp.388-394 / 2024
  • Since recommendation systems play a key role in increasing the revenue of companies, various approaches and models have been studied. However, this diversity also makes the landscape of recommendation systems complex and model selection difficult. Therefore, this study aims to ease that selection by providing a unified criterion for categorizing recommendation models and by comparing their performance in a unified environment. The experiments used the MovieLens and Coursera datasets, and the performance of linear models (ADMM-SLIM, EASER, LightGCN) and non-linear models (Caser, BERT4Rec) was evaluated using the HR@10 and NDCG@10 metrics. This study provides researchers and practitioners with useful information for selecting the best model based on dataset characteristics and recommendation context.
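The HR@10 and NDCG@10 metrics used in this comparison can be sketched for the common leave-one-out setup, where each user has a single held-out target item (item names here are hypothetical):

```python
import math

def hr_at_k(ranked_items, target, k=10):
    """Hit Ratio@k: 1 if the held-out item appears in the top-k list."""
    return int(target in ranked_items[:k])

def ndcg_at_k(ranked_items, target, k=10):
    """NDCG@k with one relevant item: 1/log2(rank+1) if hit, else 0,
    where rank is 1-based; the ideal DCG is 1, so no normalization term."""
    topk = ranked_items[:k]
    if target not in topk:
        return 0.0
    return 1.0 / math.log2(topk.index(target) + 2)

ranked = [f"movie{i}" for i in range(1, 21)]  # model's ranked recommendations
print(hr_at_k(ranked, "movie3"))              # 1 (hit within top 10)
print(ndcg_at_k(ranked, "movie3"))            # 0.5 (rank 3: 1/log2(4))
```

HR@10 only asks whether the target made the list; NDCG@10 additionally rewards ranking it higher, which is why the two metrics can disagree on which model is best.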

An Automated Industry and Occupation Coding System using Deep Learning (딥러닝 기법을 활용한 산업/직업 자동코딩 시스템)

  • Lim, Jungwoo;Moon, Hyeonseok;Lee, Chanhee;Woo, Chankyun;Lim, Heuiseok
    • Journal of the Korea Convergence Society / v.12 no.4 / pp.23-30 / 2021
  • An automated industry and occupation coding system assigns statistical classification codes to the enormous amount of natural-language data collected from people describing their industry and occupation. Unlike previous studies that applied information retrieval, we propose a system that needs no index database and assigns the proper code regardless of the level of classification. We also show that our model, which leverages KoBERT, a model that achieves high performance on natural language downstream tasks, outperforms the baseline. Our method achieves 95.65% and 91.51% on occupation and industry code classification for the Population and Housing Census, and 97.66% on industry code classification for the Census on Basic Characteristics of Establishments. Moreover, we suggest future improvements through error analysis with respect to both data and modeling.

Analysis of interest in non-face-to-face medical counseling of modern people in the medical industry (의료 산업에 있어 현대인의 비대면 의학 상담에 대한 관심도 분석 기법)

  • Kang, Yooseong;Park, Jong Hoon;Oh, Hayoung;Lee, Se Uk
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.11 / pp.1571-1576 / 2022
  • This study analyzes modern people's interest in non-face-to-face medical counseling in the medical industry. Big data was collected from two social platforms: 지식인 (a Korean Q&A platform where users can receive medical counseling from experts) and YouTube. In addition to the top five telephone-counseling keywords ("internal medicine", "general medicine", "department of neurology", "department of mental health", and "pediatrics"), a dataset was built from each platform with three further search terms ("specialist", "medical counseling", and "health information"), eight in total. Afterwards, preprocessing steps such as morpheme classification, disease extraction, and normalization were performed on the crawled data. The data was visualized with word clouds, line graphs, quarterly graphs, and per-disease bar graphs based on word frequency. A sentiment classification model was built on the YouTube data only, and the performance of GRU- and BERT-based models was compared.
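The disease-frequency step that feeds those visualizations boils down to counting lexicon hits in the crawled text. A toy sketch (the posts and disease lexicon are invented for illustration; the paper works on Korean morphemes, not whitespace tokens):

```python
from collections import Counter

# Hypothetical post titles standing in for crawled 지식인/YouTube data.
posts = [
    "question about migraine and neurology consultation",
    "pediatrics advice for child fever",
    "migraine specialist recommendation",
    "mental health counseling online",
    "child fever pediatrics question",
]

diseases = {"migraine", "fever"}  # toy disease lexicon after normalization

tokens = [w for post in posts for w in post.split()]
freq = Counter(w for w in tokens if w in diseases)
print(freq["migraine"], freq["fever"])  # 2 2
```

The resulting counts are what a word cloud or per-disease bar graph visualizes; the quarterly graphs add a timestamp grouping on top of the same tally.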

Improving Accuracy of Noise Review Filtering for Places with Insufficient Training Data

  • Hyeon Gyu Kim
    • Journal of the Korea Society of Computer and Information / v.28 no.7 / pp.19-27 / 2023
  • In the process of collecting social reviews, a number of noise reviews irrelevant to a given search keyword can be included in the search results. Machine learning can be used to filter out such reviews. However, if the number of reviews for a target place is insufficient, filtering accuracy can degrade due to the lack of training data. To resolve this issue, we propose a supervised learning method that improves noise review filtering for places with insufficient reviews. In the proposed method, training is performed not per individual place but per group of places with similar characteristics. The classifier obtained through this training can then filter noise reviews for any place belonging to the group, resolving the problem of insufficient training data. To verify the proposed method, a noise review filtering model was implemented using LSTM and BERT, and filtering accuracy was checked through experiments on real data collected online. The experimental results show that the proposed method achieved 92.4% accuracy on average, and 87.5% accuracy for places with fewer than 100 reviews.
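The core idea of group-level training can be sketched as a data-pooling step: labeled reviews from places with similar characteristics are merged so one classifier serves every place in the group. All names and the grouping key below are hypothetical stand-ins for the paper's place groups:

```python
from collections import defaultdict

# (place, category, review_text, is_noise) -- hypothetical labeled samples.
samples = [
    ("cafe_a", "cafe", "great latte and dessert", 0),
    ("cafe_a", "cafe", "selling my used bike",    1),  # noise: off-topic
    ("cafe_b", "cafe", "cozy place for coffee",   0),
    ("gym_c",  "gym",  "good trainers here",      0),
]

def pool_by_group(samples):
    """Merge training data per group (here: place category) instead of
    per place, so a place with few reviews still gets enough examples."""
    groups = defaultdict(list)
    for place, category, text, label in samples:
        groups[category].append((text, label))
    return groups

pooled = pool_by_group(samples)
print(len(pooled["cafe"]))  # 3 -- cafe_a and cafe_b reviews combined
```

The LSTM or BERT classifier is then trained once per group on the pooled set, and a new place with few reviews simply reuses its group's classifier.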

A Study on Auto-Classification of Aviation Safety Data using NLP Algorithm (자연어처리 알고리즘을 이용한 위험기반 항공안전데이터 자동분류 방안 연구)

  • Sung-Hoon Yang;Young Choi;So-young Jung;Joo-hyun Ahn
    • Journal of Advanced Navigation Technology / v.26 no.6 / pp.528-535 / 2022
  • Although the domestic aviation industry has made rapid progress with the development of aircraft manufacturing and transportation technologies, aviation safety accidents continue to occur. The supervisory agency classifies hazards and risks based on risk-based aviation safety data, identifies safety trends for each air transport operator, and conducts pre-inspections to prevent events and accidents. However, human classification of data written in natural language yields different results depending on the classifier's knowledge, experience, and disposition, and it takes considerable time to understand and classify the content. Therefore, in this paper, a fine-tuned KoBERT model was trained on over 5,000 records to predict the classification values of new data, achieving 79.2% accuracy. In addition, some of the identical predictions and misclassifications for similar events turned out to stem from human labeling errors.

A Study on Book Recovery Method Depending on Book Damage Levels Using Book Scan (북스캔을 이용한 도서 손상 단계에 따른 딥 러닝 기반 도서 복구 방법에 관한 연구)

  • Kyungho Seok;Johui Lee;Byeongchan Park;Seok-Yoon Kim;Youngmo Kim
    • Journal of the Semiconductor & Display Technology / v.22 no.4 / pp.154-160 / 2023
  • Recently, with the growth of eBook services, books are published simultaneously as physical books and digitized eBooks. Paper books are more expensive than eBooks due to printing and distribution costs, so demand for relatively inexpensive eBooks is increasing. In some cases, previously published physical books cannot be digitized due to the circumstances of the publisher or author, so individual users are moving to digitize books published long ago themselves. However, existing research has only studied improvements to the pre-processing steps that raise text recognition rates before applying OCR, and digitization remains limited by the condition of the book. A book digitization service that accounts for the condition of the physical book is therefore needed. In this paper, we propose a method to support digitization services according to the condition of physical books held by their owners. Images are created by scanning books, and text information is extracted from the images through OCR. We then propose a method that uses BERT, a deep learning model for natural language processing, to recover text that could not be extracted due to the condition of the book. As a result, we confirmed that the BERT-based recovery method is superior to a widely used RNN-based approach.
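The recovery step can be framed as masked-token prediction: positions the OCR stage could not read are replaced with BERT's [MASK] token, and a fill-mask model proposes the missing words. A sketch of building that masked input (the unreadable marker and word-level tokenization are simplifying assumptions, not the paper's exact pipeline):

```python
def build_masked_input(ocr_tokens, unreadable="<?>", mask_token="[MASK]"):
    """Replace tokens the OCR stage flagged as unreadable with [MASK],
    producing input for a BERT-style fill-mask model, and record which
    positions need to be recovered."""
    masked = [mask_token if t == unreadable else t for t in ocr_tokens]
    positions = [i for i, t in enumerate(ocr_tokens) if t == unreadable]
    return masked, positions

tokens = ["the", "old", "<?>", "was", "printed", "in", "<?>"]
masked, positions = build_masked_input(tokens)
print(masked)     # ['the', 'old', '[MASK]', 'was', 'printed', 'in', '[MASK]']
print(positions)  # [2, 6]
```

A pretrained masked-language model then scores vocabulary candidates at each recorded position, using the surrounding legible text as context, which is what lets it outperform a purely sequential RNN on damaged pages.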


Research on Recent Quality Estimation (최신 기계번역 품질 예측 연구)

  • Eo, Sugyeong;Park, Chanjun;Moon, Hyeonseok;Seo, Jaehyung;Lim, Heuiseok
    • Journal of the Korea Convergence Society / v.12 no.7 / pp.37-44 / 2021
  • Quality estimation (QE) can evaluate the quality of machine translation output even for those who do not know the target language, and its high utility highlights the need for QE research. A QE shared task is held every year at the Conference on Machine Translation (WMT), and recent work mainly applies pretrained language models (PLMs). In this paper, we survey the QE task and its research trends, and summarize the features of PLMs. In addition, we applied a multilingual BART model, which had not yet been utilized for QE, and performed a comparative analysis against existing studies using XLM, multilingual BERT, and XLM-RoBERTa. As a result of the experiments, we confirmed which PLM is most effective when applied to QE, and showed the applicability of the multilingual BART model to the QE task.

A study on the aspect-based sentiment analysis of multilingual customer reviews (다국어 사용자 후기에 대한 속성기반 감성분석 연구)

  • Sungyoung Ji;Siyoon Lee;Daewoo Choi;Kee-Hoon Kang
    • The Korean Journal of Applied Statistics / v.36 no.6 / pp.515-528 / 2023
  • With the growth of the e-commerce market, consumers increasingly rely on user reviews to make purchasing decisions, and researchers are actively studying how to analyze these reviews effectively. Among the various methods of sentiment analysis, aspect-based sentiment analysis, which examines user reviews from multiple angles rather than relying on a single positive or negative label, is gaining widespread attention. One line of work performs aspect-based sentiment analysis with transformer-based models, the latest natural language processing technology. In this paper, we conduct aspect-based sentiment analysis on multilingual user reviews using two real datasets: restaurant data from the SemEval 2016 public dataset and multilingual user review data from the cosmetics domain. We compare the performance of transformer-based models for aspect-based sentiment analysis and apply various methodologies to improve it. Models trained on multilingual data are expected to be highly useful in that they can analyze multiple languages in one model without building separate models for each language.
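The difference between aspect-based and review-level sentiment is easiest to see in the output structure: each review yields per-aspect polarity labels rather than one overall label. A toy sketch of that structure and a simple aggregation over it (all predictions are invented for illustration):

```python
from collections import Counter, defaultdict

# Hypothetical ABSA predictions: one (aspect, polarity) pair per opinion,
# pooled across reviews in different languages handled by one model.
predictions = [
    ("food", "positive"), ("service", "negative"),  # review 1 (English)
    ("food", "positive"), ("price", "negative"),    # review 2 (Korean)
    ("service", "positive"),                        # review 3 (Spanish)
]

def aggregate(preds):
    """Tally polarity counts per aspect across all reviews, the kind of
    summary a per-aspect dashboard would be built from."""
    table = defaultdict(Counter)
    for aspect, polarity in preds:
        table[aspect][polarity] += 1
    return table

agg = aggregate(predictions)
print(agg["food"]["positive"])  # 2
```

A single review-level label would collapse review 1 to one sentiment; the aspect-level pairs preserve that food was praised while service was criticized, which is the information the abstract's approach is designed to extract.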