• Title/Summary/Keyword: BERT Model (BERT 모형)

9 search results

Multicriteria Movie Recommendation Model Combining Aspect-based Sentiment Classification Using BERT

  • Lee, Yurin; Ahn, Hyunchul
    • Journal of the Korea Society of Computer and Information / v.27 no.3 / pp.201-207 / 2022
  • In this paper, we propose a movie recommendation model that uses users' reviews as well as their ratings. To capture user preferences from multicriteria perspectives, the proposed model applies aspect-based sentiment analysis to the reviews: it divides the reviews left by customers into multicriteria components according to their implicit attributes and applies BERT-based sentiment analysis to each of them. The model then selectively combines the attributes that each user considers important with collaborative filtering (CF) to generate recommendations. To validate its usefulness, we applied the proposed model to a real-world movie recommendation case, and the experimental results showed improved accuracy compared to traditional CF. This study has academic and practical significance in that it presents a new approach that selects and combines models according to individual characteristics and derives multiple attributes from a single review rather than requiring users to rate each attribute separately.
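
To make the combination concrete, here is a minimal sketch, assuming per-(user, item, aspect) BERT sentiment scores and per-user aspect weights; the names, the 0-5 rescaling, and the plain user-based CF step are illustrative assumptions, not the paper's published algorithm.

```python
import numpy as np

def aspect_weighted_ratings(aspect_sentiment, user_weights):
    """aspect_sentiment: (n_users, n_items, n_aspects) BERT sentiment scores in [0, 1],
    NaN where a user left no review; user_weights: (n_users, n_aspects) importance weights."""
    w = user_weights / user_weights.sum(axis=1, keepdims=True)   # normalize weights per user
    scores = np.nansum(aspect_sentiment * w[:, None, :], axis=2) # missing aspects are simply skipped
    scores[np.isnan(aspect_sentiment).all(axis=2)] = np.nan      # keep unreviewed items missing
    return scores * 5.0                                          # map [0, 1] sentiment onto a 0-5 scale

def user_based_cf(r, target_user, target_item, k=2):
    """Predict r[target_user, target_item] from the k most similar users (cosine similarity)."""
    sims = []
    for u in range(r.shape[0]):
        if u == target_user or np.isnan(r[u, target_item]):
            continue
        common = ~np.isnan(r[target_user]) & ~np.isnan(r[u])
        if not common.any():
            continue
        a, b = r[target_user, common], r[u, common]
        sims.append((u, float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)))
    top = sorted(sims, key=lambda x: -x[1])[:k]
    if not top:
        return np.nan
    return sum(s * r[u, target_item] for u, s in top) / (sum(abs(s) for _, s in top) + 1e-9)
```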

A Study on Automatic Classification of Subject Headings Using BERT Model (BERT 모형을 이용한 주제명 자동 분류 연구)

  • Yong-Gu Lee
    • Journal of the Korean Society for Library and Information Science / v.57 no.2 / pp.435-452 / 2023
  • This study experimented with automatic classification of subject headings using a BERT-based transfer learning model and analyzed its performance according to the main classes of the KDC classification and the category types of the subject headings. Six datasets were constructed from Korean national bibliographies based on how frequently subject headings were assigned, and titles were used as classification features. Classification performance reached micro F1 and macro F1 scores of 0.6059 and 0.5626, respectively, on the dataset of 1,539,076 records containing 3,506 subject headings. By KDC main class, performance was good for General works, Natural science, Technology, and Language, and low for Religion and Arts. By category type of subject heading, the plant, legal name, and product name categories showed high performance, whereas the national treasure/treasure category showed low performance. In large datasets, the proportion of subject headings that cannot be assigned increases, reducing final performance, and improvement is needed to raise classification performance for low-frequency subject headings.
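
As a rough illustration of the setup described in this abstract (fine-tuning a BERT classifier on titles and scoring with micro/macro F1), the sketch below uses Hugging Face Transformers and scikit-learn; the multilingual checkpoint is a stand-in, not the Korean model the study actually used.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from sklearn.metrics import f1_score

checkpoint = "bert-base-multilingual-cased"   # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=3506)

def encode(titles):
    # Titles are the only classification feature, as in the abstract.
    return tokenizer(titles, truncation=True, padding=True, max_length=64, return_tensors="pt")

# ... fine-tune on (title, subject-heading id) pairs, then evaluate held-out predictions:
def evaluate(y_true, y_pred):
    return {"micro_f1": f1_score(y_true, y_pred, average="micro"),
            "macro_f1": f1_score(y_true, y_pred, average="macro")}
```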

Korean Q&A Chatbot for COVID-19 News Domains Using Machine Reading Comprehension (기계 독해를 이용한 COVID-19 뉴스 도메인의 한국어 질의응답 챗봇)

  • Lee, Taemin; Park, Kinam; Park, Jeongbae; Jeong, Younghee; Chae, Jeongmin; Lim, Heuiseok
    • Annual Conference on Human and Language Technology / 2020.10a / pp.540-542 / 2020
  • To meet the demand for checking various kinds of information related to COVID-19, we designed and implemented a question-answering chatbot based on Korean news data. The system is built around three modules: a BM25-based document retriever, a document reader based on the pre-trained language model KoBERT, and an answer generator. News articles, Wikipedia documents, and statistical information were collected, and question answering was made available through a web-based chatbot interface. The implemented system can be accessed and used at http://demo.tmkor.com:36200/mrcv2.
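
A minimal sketch of the retriever-reader part of the pipeline described above, assuming rank_bm25 for BM25 retrieval and a Transformers question-answering pipeline for the reader; the documents and the reader checkpoint name are placeholders, not the paper's resources.

```python
from rank_bm25 import BM25Okapi
from transformers import pipeline

# Toy corpus standing in for the collected news / wiki / statistics documents.
docs = [
    "질병관리청은 오늘 0시 기준 코로나19 신규 확진자 수를 발표했다.",
    "정부는 사회적 거리두기 단계를 조정한다고 밝혔다.",
]
bm25 = BM25Okapi([d.split() for d in docs])            # whitespace tokens only, for brevity

question = "코로나19 신규 확진자 수는 어디에서 발표했나?"
scores = bm25.get_scores(question.split())
best_doc = docs[max(range(len(docs)), key=lambda i: scores[i])]   # top-1 retrieval

# Placeholder checkpoint: the paper's KoBERT reader fine-tuned on news QA is not public here.
reader = pipeline("question-answering", model="your-org/kobert-mrc-checkpoint")
print(reader(question=question, context=best_doc))      # {'answer': ..., 'score': ..., ...}
```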

Recommender system using BERT sentiment analysis (BERT 기반 감성분석을 이용한 추천시스템)

  • Park, Ho-yeon; Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems / v.27 no.2 / pp.1-15 / 2021
  • When it is difficult for us to make decisions, we ask friends or people around us for advice, and when we buy products online, we read anonymous reviews before purchasing. With the advent of the data-driven era, the development of IT is producing large amounts of data about both individuals and objects. Companies and individuals have accumulated, processed, and analyzed so much data that decisions that once depended on experts can now be made or executed directly from data. Today recommender systems play a vital role in determining users' preferences for purchasing goods, and web services such as Facebook, Amazon, Netflix, and YouTube use them to induce clicks. For example, YouTube's recommender system, used by a billion people worldwide every month, draws on the videos users have liked and watched. Recommender system research is closely tied to practical business, so many researchers are interested in building better solutions. Recommender systems generate recommendations from the information obtained from their users, because building one requires information about the items each user is likely to prefer. Through recommender systems, we have come to trust patterns and rules derived from data rather than empirical intuition, and the growing volume of data has carried machine learning toward deep learning. However, recommender systems are not a universal solution: they require sufficient data without scarcity as well as detailed information about individuals, and they work correctly only when these conditions are met. When the interaction log is insufficient, recommendation becomes a difficult problem for both consumers and sellers, since the seller needs to make recommendations at a personal level and the consumer needs to receive appropriate recommendations based on reliable data. In this paper, to improve the accuracy of "appropriate recommendations" for consumers, we propose a recommender system combined with context-based deep learning. The aim is to combine user data with deep learning to create a hybrid recommender system: not a purely collaborative approach, but a collaborative extension that integrates user data with deep learning. Customer review data were used as the dataset. Consumers buy products in online shopping malls and then write product reviews, and ratings from buyers who have already purchased give other users confidence before they buy. However, recommender systems mainly use scores or ratings, rather than reviews, to suggest items purchased by many users, even though consumer reviews contain product opinions and user sentiment relevant to evaluation. By incorporating this information, this paper aims to improve the recommender system, which is intended for situations in which individuals have difficulty selecting an item; consumer reviews and record patterns make it possible to rely on its recommendations appropriately. The algorithm implements a recommender system through collaborative filtering, and predictive accuracy is measured by Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE).
Netflix has strategically used its recommender system through annual competitions to reduce RMSE, making practical use of predictive accuracy. Research on hybrid recommender systems that combine NLP approaches and deep learning for personalized recommendation has been increasing. Among NLP studies, sentiment analysis began to take shape in the mid-2000s as user review data increased. Sentiment analysis is a text classification task based on machine learning, but classical machine learning-based sentiment analysis has the disadvantage that it struggles to capture the information expressed in a review because the characteristics of the text are hard to take into account. In this study, we propose a deep learning recommender system that uses BERT-based sentiment analysis to minimize these disadvantages. The comparison models were recommender systems based on Naive-CF (collaborative filtering), SVD (singular value decomposition)-CF, MF (matrix factorization)-CF, BPR-MF (Bayesian personalized ranking matrix factorization)-CF, LSTM, CNN-LSTM, and GRU (Gated Recurrent Units). In the experiments, the BERT-based recommender system performed best.
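
For reference, the two accuracy metrics reported in this abstract can be computed as below; the rating values are invented for illustration.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root Mean Squared Error between actual and predicted ratings."""
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

def mae(y_true, y_pred):
    """Mean Absolute Error between actual and predicted ratings."""
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

actual    = [5, 3, 4, 1]       # made-up ratings
predicted = [4.8, 3.4, 3.5, 1.9]
print(rmse(actual, predicted), mae(actual, predicted))
```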

Analysis of Resident's Satisfaction and Its Determining Factors on Residential Environment: Using Zigbang's Apartment Review Bigdata and Deeplearning-based BERT Model (주거환경에 대한 거주민의 만족도와 영향요인 분석 - 직방 아파트 리뷰 빅데이터와 딥러닝 기반 BERT 모형을 활용하여 - )

  • Kweon, Junhyeon; Lee, Sugie
    • Journal of the Korean Regional Science Association / v.39 no.2 / pp.47-61 / 2023
  • Satisfaction with the residential environment is a major factor influencing the choice of residence and migration, and it is directly related to the quality of life in a city. As online real estate services grow, people's evaluations of the residential environment can be checked easily, making it possible to analyze satisfaction and its determining factors from those evaluations and to use a far larger amount of evaluation data more efficiently than with previously used methods such as surveys. This study analyzed the residential environment reviews of about 30,000 apartment residents collected from 'Zigbang', an online real estate service, in Seoul. A Zigbang apartment review consists of a grade on a 5-point scale and free-text content written directly by the dweller. First, the study labeled apartment reviews as positive or negative based on the scores of recommended reviews, which contain a comprehensive evaluation of the apartment. Next, to classify reviews automatically, it developed a model using Bidirectional Encoder Representations from Transformers (BERT), a deep learning-based natural language processing model. It then used SHapley Additive exPlanations (SHAP) to extract the word tokens that play an important role in classifying reviews and thereby derive the determining factors of residential environment evaluation. Furthermore, by analyzing related keywords with Word2Vec, priority considerations for improving satisfaction with the residential environment were suggested. This study is meaningful in that it proposes a model that automatically classifies residents' qualitative evaluations, in the form of apartment review big data, into positive and negative using deep learning, and derives the determining factors of satisfaction. The results can serve as basic data for improving satisfaction with the residential environment and can be used in future evaluations of the environment around apartment complexes and in the design and evaluation of new complexes and infrastructure.
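
The Word2Vec step mentioned above (finding keywords related to the tokens that SHAP highlights) can be sketched with gensim as follows; the toy reviews and the query token are illustrative, not the study's data.

```python
from gensim.models import Word2Vec

# In practice: morpheme-tokenized Zigbang review sentences; these three are toy examples.
tokenized_reviews = [
    ["주차", "공간", "부족"],
    ["역", "가깝다", "교통", "편리"],
    ["소음", "층간", "심하다"],
]

# Skip-gram Word2Vec over the tokenized reviews (tiny corpus, so min_count=1).
w2v = Word2Vec(sentences=tokenized_reviews, vector_size=100, window=5, min_count=1, sg=1)

# Keywords most related to "교통" (transit), e.g. a token flagged as important by SHAP.
print(w2v.wv.most_similar("교통", topn=5))
```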

A Survey on Deep Learning-based Pre-Trained Language Models (딥러닝 기반 사전학습 언어모델에 대한 이해와 현황)

  • Sangun Park
    • The Journal of Bigdata / v.7 no.2 / pp.11-29 / 2022
  • Pre-trained language models are the most important and widely used tools in natural language processing tasks. Because they have been pre-trained on a large corpus, high performance can be expected even when they are fine-tuned with a small amount of data. Since the elements necessary for implementation, such as a pre-trained tokenizer and a deep learning model including pre-trained weights, are distributed together, the cost and time of natural language processing have been greatly reduced. Transformer variants are the most representative pre-trained language models providing these advantages, and they are also being actively used in other fields such as computer vision and audio. To make it easier for researchers to understand pre-trained language models and apply them to natural language processing tasks, this paper describes the definitions of the language model and the pre-trained language model, and discusses the development of pre-trained language models, focusing on representative Transformer variants.
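
A small example of the point made above, that a pre-trained tokenizer and pre-trained weights are distributed together so fine-tuning can start from a working pipeline; the checkpoint, label count, and sample sentence are arbitrary choices for illustration.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(name)                                  # pre-trained tokenizer
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)  # pre-trained weights + new head

batch = tokenizer(["사전학습 언어모델은 유용하다."], return_tensors="pt", padding=True)
outputs = model(**batch)        # logits from the (not yet fine-tuned) classification head
print(outputs.logits.shape)     # torch.Size([1, 2])
```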

Table Question Answering based on Pre-trained Language Model using TAPAS (TAPAS를 이용한 사전학습 언어 모델 기반의 표 질의응답)

  • Cho, Sanghyun; Kim, Minho; Kwon, Hyuk-Chul
    • Annual Conference on Human and Language Technology / 2020.10a / pp.87-90 / 2020
  • Table question answering is the task of finding the answer to a question in semi-structured table data. This study proposes a TAPAS-based pre-training method for a language model suited to table data for Korean table question answering, together with a table question-answering model that predicts the cell containing the answer and then the exact answer span within the selected cell. About 100,000 tables were used for table pre-training, and the model that pre-trained TAPAS starting from a BERT model already pre-trained on text data showed the best performance. When a machine reading comprehension model was applied, it achieved EM 46.8% and F1 63.8%, an improvement of EM 6.7%p and F1 12.9%p over fine-tuning a model pre-trained only on text. For the table question-answering model, extracting row and column embeddings from the embeddings produced by TAPAS and combining them with the TAPAS embeddings before applying the machine reading comprehension model yielded EM 63.6% and F1 76.0%.
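
For orientation, the Hugging Face TAPAS implementation can be used as sketched below; the English WTQ checkpoint and toy table stand in for the Korean TAPAS model the paper pre-trains, so this only illustrates the interface, not the paper's system.

```python
import pandas as pd
from transformers import TapasTokenizer, TapasForQuestionAnswering

name = "google/tapas-base-finetuned-wtq"      # placeholder for the paper's Korean TAPAS model
tokenizer = TapasTokenizer.from_pretrained(name)
model = TapasForQuestionAnswering.from_pretrained(name)

# TAPAS expects all table cells as strings.
table = pd.DataFrame({"city": ["Seoul", "Busan"], "population": ["9,400,000", "3,300,000"]})
queries = ["Which city has the larger population?"]

inputs = tokenizer(table=table, queries=queries, padding="max_length", return_tensors="pt")
outputs = model(**inputs)
coords, agg = tokenizer.convert_logits_to_predictions(
    inputs, outputs.logits.detach(), outputs.logits_aggregation.detach()
)
print(coords)   # predicted answer-cell coordinates per query
```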

Loss-adjusted Regularization based on Prediction for Improving Robustness in Less Reliable FAQ Datasets (신뢰성이 부족한 FAQ 데이터셋에서의 강건성 개선을 위한 모델의 예측 강도 기반 손실 조정 정규화)

  • Park, Yewon; Yang, Dongil; Kim, Soofeel; Lee, Kangwook
    • Annual Conference on Human and Language Technology / 2019.10a / pp.18-22 / 2019
  • FAQ classification categorizes frequently asked questions and infers the class most similar to a user query. Because FAQ datasets contain many classes, inclusion and association relationships exist between classes, and a given example can belong to several classes at once. Recent work on FAQ classification, however, has only applied multi-class classification methods, and little research has reflected these characteristics of FAQ datasets in the model. Because current classification approaches do not take them into account, predictions that could be interpreted as correct are sometimes counted as errors. This paper introduces a regularization technique that adjusts the loss function so that classification remains accurate even on FAQ datasets with limited reliability. To reflect the inclusion and association relationships between classes, the technique reduces the loss in proportion to the prediction strength even when the model predicts a wrong answer: the more confidently the model predicts a "wrong" answer, the more likely the data itself is unreliable, so training on that example is weakened. BERT, which shows the best performance on multi-class classification, was used for the experiments, and label smoothing, a commonly used regularization method, was adopted for comparison. The results confirmed that the proposed method improves performance over the existing method, trains more stably, and classifies effectively when data reliability is low.
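
One possible reading of the described regularization, sketched in PyTorch: when the model's own prediction disagrees with the label, that example's cross-entropy is down-weighted in proportion to the prediction strength. This is an interpretation for illustration only, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def loss_adjusted_ce(logits, labels):
    """Cross-entropy whose per-example weight shrinks in proportion to how confidently
    the model predicts a class other than the label (treated as a sign of an unreliable label)."""
    probs = F.softmax(logits, dim=-1)
    pred_conf, pred_class = probs.max(dim=-1)              # model's own top prediction
    ce = F.cross_entropy(logits, labels, reduction="none") # per-example loss
    wrong = (pred_class != labels).float()
    weights = (1.0 - wrong * pred_conf).detach()           # treat the weight as a constant
    return (weights * ce).mean()

logits = torch.tensor([[2.0, 0.1, -1.0], [0.2, 0.1, 2.5]], requires_grad=True)
labels = torch.tensor([0, 0])   # the second example contradicts the model's confident prediction
print(loss_adjusted_ce(logits, labels))
```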

Exploration on Tokenization Method of Language Model for Korean Machine Reading Comprehension (한국어 기계 독해를 위한 언어 모델의 효과적 토큰화 방법 탐구)

  • Lee, Kangwook; Lee, Haejun; Kim, Jaewon; Yun, Huiwon; Ryu, Wonho
    • Annual Conference on Human and Language Technology / 2019.10a / pp.197-202 / 2019
  • Tokenization is the preprocessing step of segmenting input text into smaller units, performed mainly to make machine learning more efficient. Although various tokenization methods have been proposed for natural language processing tasks, most work has focused on segmenting text efficiently, and there has been little research into which tokenization method is most suitable when applying recent machine learning techniques to Korean data. This paper explores which tokenization method is most appropriate for applying transfer-learning-based NLP methods to Korean. For the experiments we used BERT, a representative transfer learning model with state-of-the-art performance, and for the final comparison we chose machine reading comprehension, a task whose performance depends heavily on tokenization. In addition to the commonly used syllable, eojeol (word), and morpheme units, Byte Pair Encoding (BPE), a tokenization scheme that has recently gained attention, was included in the comparison, and a new hybrid tokenization method that applies BPE on top of morpheme segmentation was proposed and compared as well. The results show that syllable-level tokenization performed well in terms of vocabulary reduction and language model perplexity, whereas morpheme-level tokenization performed better on machine reading comprehension, where the semantic content of each token matters. BPE tokenization performed well overall, and the hybrid method newly proposed in this study, which combines morpheme segmentation with BPE, showed the best performance.
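
A hedged sketch of the proposed hybrid scheme (morpheme segmentation first, then BPE learned over the morpheme-separated text); Okt stands in for whichever morpheme analyzer the authors used, and the corpus and vocabulary size are toy values.

```python
from konlpy.tag import Okt
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import BpeTrainer

okt = Okt()
raw_sentences = ["한국어 기계 독해를 위한 토큰화 방법을 비교한다."]   # toy corpus
morph_corpus = [" ".join(okt.morphs(s)) for s in raw_sentences]     # step 1: morpheme segmentation

# Step 2: learn BPE over the morpheme-separated text.
bpe = Tokenizer(BPE(unk_token="[UNK]"))
bpe.pre_tokenizer = Whitespace()
bpe.train_from_iterator(morph_corpus, BpeTrainer(vocab_size=8000, special_tokens=["[UNK]"]))

# Tokenize new text the same way: morphemes first, then BPE.
print(bpe.encode(" ".join(okt.morphs("기계 독해 과업"))).tokens)
```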
