• Title/Summary/Keyword: BERT

Search results: 395 (processing time: 0.029 seconds)

Exploration on Tokenization Method of Language Model for Korean Machine Reading Comprehension (한국어 기계 독해를 위한 언어 모델의 효과적 토큰화 방법 탐구)

  • Lee, Kangwook;Lee, Haejun;Kim, Jaewon;Yun, Huiwon;Ryu, Wonho
    • Annual Conference on Human and Language Technology
    • /
    • 2019.10a
    • /
    • pp.197-202
    • /
    • 2019
  • Tokenization is a preprocessing step that segments input text into smaller units, performed mainly to make the machine learning process more efficient. Many tokenization methods have been proposed for natural language processing tasks, but prior work has focused chiefly on segmenting text efficiently; little research has explored which tokenization method is best suited to applying recent machine learning techniques to Korean data. This paper investigates which tokenization method is most appropriate when applying transfer-learning-based NLP methods, a recent machine learning technique, to Korean data. The experiments use BERT, a representative and currently best-performing transfer learning model, and adopt machine reading comprehension as the evaluation task, since its performance depends heavily on tokenization. The compared methods include the conventional syllable, word (eojeol), and morpheme units as well as Byte Pair Encoding (BPE), which has recently gained attention; in addition, we propose a new hybrid tokenization method that applies BPE on top of morpheme segmentation. Experiments showed that syllable-level tokenization was superior in terms of vocabulary reduction and language model perplexity, while morpheme-level tokenization performed best on machine reading comprehension, where the semantic content of individual tokens matters. BPE tokenization performed well overall, and the proposed hybrid method combining morpheme segmentation with BPE achieved the best performance.

  • PDF
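As a rough illustration of the hybrid scheme the paper proposes (morpheme segmentation first, then BPE within each morpheme), here is a minimal sketch. The toy lexicon and merge table are hypothetical stand-ins for a real Korean morphological analyzer and a learned BPE vocabulary; an English word is used purely for readability.

```python
# Minimal sketch of hybrid tokenization: segment into morpheme-like units
# first, then apply BPE merges *within* each unit, so merges never cross a
# morpheme boundary. Lexicon and merge table are hypothetical toy stand-ins.

def morpheme_split(text):
    lexicon = {"playing": ["play", "ing"]}  # hypothetical analyzer output
    return [m for word in text.split() for m in lexicon.get(word, [word])]

def bpe(unit, merges):
    # greedily merge adjacent symbol pairs found in the merge table
    symbols = list(unit)
    changed = True
    while changed and len(symbols) > 1:
        changed = False
        for i in range(len(symbols) - 1):
            if symbols[i] + symbols[i + 1] in merges:
                symbols[i:i + 2] = [symbols[i] + symbols[i + 1]]
                changed = True
                break
    return symbols

def hybrid_tokenize(text, merges):
    return [tok for m in morpheme_split(text) for tok in bpe(m, merges)]

print(hybrid_tokenize("playing", {"pl", "ay", "play", "in"}))
```

Because BPE runs per morpheme, a merge like "yi" spanning the "play"/"ing" boundary can never occur, which is the point of the hybrid scheme.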

A Survey on Deep Learning-based Pre-Trained Language Models (딥러닝 기반 사전학습 언어모델에 대한 이해와 현황)

  • Sangun Park
    • The Journal of Bigdata
    • /
    • v.7 no.2
    • /
    • pp.11-29
    • /
    • 2022
  • Pre-trained language models are the most important and widely used tools in natural language processing. Because they are pre-trained on large corpora, high performance can be expected even when fine-tuning with a small amount of data. Since the elements necessary for implementation, such as a pre-trained tokenizer and a deep learning model with pre-trained weights, are distributed together, the cost and development time of natural language processing have been greatly reduced. Transformer variants are the most representative pre-trained language models providing these advantages, and they are also being actively used in other fields such as computer vision and audio applications. To make it easier for researchers to understand pre-trained language models and apply them to natural language processing tasks, this paper describes the definitions of the language model and the pre-trained language model, and discusses the development of pre-trained language models, especially the representative Transformer variants.

Long-Term Wildfire Reconstruction: In Need of Focused and Dedicated Pre-Planning Efforts

  • Harris, William S.;Choi, Jin Ouk;Lim, Jaewon;Lee, Yong-Cheol
    • International conference on construction engineering and project management
    • /
    • 2022.06a
    • /
    • pp.923-928
    • /
    • 2022
  • Wildfire disasters in the United States impact lives and livelihoods by destroying private homes, businesses, community facilities, and infrastructure. Disaster victims suffer from damaged houses, inadequate shelters, inoperable civil infrastructure, and homelessness coupled with long-term recovery and reconstruction processes. Cities and their neighboring communities require an enormous commitment to achieve full recovery for as long as disaster recovery processes last. State, county, and municipal governments inherently have the responsibility to establish and provide governance and public services for the benefit and well-being of community members. Municipal governments' comprehensive and emergency response plans are the artifacts of planning efforts that guide those duties; typically, these plans cover preparation for and response to natural disasters, including wildfires. Standard wildfire planning outlines (1) a wildfire hazard assessment, (2) response approaches to prevent human injury and minimize damage to physical property, and (3) near- and long-term recovery and reconstruction efforts. There is often a high level of detail in the assessment section, but the level of detail and specificity lessens to general approaches in the long-term recovery subsection. This paper aims to document the extent of wildfire preparedness at the county level in general, focusing on the long-term recovery subsections of municipal plans. Based on the identified challenges, the researchers provide recommendations for better long-term recovery and reconstruction opportunities: 1) building permit requirements, 2) exploration of the use of modular construction, 3) relief from legislative requirements, and 4) early and simple funding and aid application processes.

  • PDF

Manufacturing and Characteristic Evaluation of Free space Optical Communication Devices in 5G Mobile Base Stations for Emergency Disaster Response (긴급재난 대응용 5G 이동 기지국을 위한 대기공간 광통신 장치의 제작과 특성평가)

  • Jin-Hyeon Chang
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.23 no.5
    • /
    • pp.131-138
    • /
    • 2023
  • In this paper, a free-space optical communication device usable in mobile base stations over link distances of several kilometers or less was fabricated and its characteristics were investigated. To overcome losses due to atmospheric transmission, an optical fiber amplifier (EDFA) with an output of 23 dBm or more was used. To increase the focusing speed and miniaturize the laser beam optics, an optical lens was manufactured, and the transmission lens was designed for a beam divergence in the range of 1.5 to 1.8 mrad. A PT module controlling pan/tilt was fabricated to reduce pointing errors and enable effective automatic alignment between transceivers. Reed-Solomon (RS) coding was used to keep the transmission quality above a fixed level. The device was built to communicate over a 300 m wireless link in weather with 300 m visibility. Performance was measured with a BERT (bit error rate tester) and an eye-pattern analyzer, and it was confirmed that the BER could be maintained at 2.5 Gbps.
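A quick sanity check on the stated beam geometry (my own arithmetic, not from the paper): under the small-angle approximation, the beam spreads by roughly divergence times distance, so over the 300 m link the 1.5 to 1.8 mrad divergence gives a spot on the order of half a meter.

```python
# Small-angle approximation: beam spread ~ divergence (rad) x distance (m).
# Illustrative arithmetic only; ignores the initial aperture diameter and
# atmospheric effects such as scintillation.
def beam_spread_m(divergence_mrad, distance_m):
    return divergence_mrad * 1e-3 * distance_m

for div in (1.5, 1.8):
    print(f"{div} mrad over 300 m -> {beam_spread_m(div, 300):.2f} m")
```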

Evaluation of communication effectiveness of cruelty-free fashion brands - A comparative study of brand-led and consumer-perceived images - (크루얼티 프리 패션 브랜드의 커뮤니케이션 성과 분석 - 브랜드 주도적 이미지와 소비자 지각 이미지에 대한 비교 -)

  • Yeong-Hyeon Choi;Sangyung Lee
    • The Research Journal of the Costume Culture
    • /
    • v.32 no.2
    • /
    • pp.247-259
    • /
    • 2024
  • This study assessed the effectiveness of brand image communication on consumer perceptions of cruelty-free fashion brands. Brand messaging data were gathered from postings on the official Instagram accounts of three cruelty-free fashion brands, and consumer perception data were gathered from tweets containing keywords related to each brand. Web crawling and natural language processing were performed using Python, and sentiment analysis was conducted using the BERT model. By analyzing Instagram content from Stella McCartney, Patagonia, and Freitag from their inception until 2021, this study found that these brands all emphasize environmental aspects but with differing focuses: Stella McCartney on ecological conservation, Patagonia on an active outdoor image, and Freitag on upcycled products. Keyword analysis further indicated that consumers perceive these brands in line with their messaging: Stella McCartney as high-end and eco-friendly, Patagonia as active and environmentally conscious, and Freitag as centered on recycling. Assessing the alignment between brand-driven and consumer-perceived images, together with sentiment evaluation of each brand, confirmed the brands' communication performance. The study revealed a correlation between brand image and positive consumer evaluations, indicating that closer alignment of ethical values leads to more positive consumer assessments. Given that consumers tend to prioritize search keywords over brand concepts, it is important for brands to use visual imagery and promotions to convey brand communication information effectively. These findings highlight the importance of brand communication by emphasizing the connection between ethical brand images and consumer perceptions.
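The study's alignment comparison relies on BERT-based sentiment analysis; as a dependency-free illustration of the underlying idea (comparing a brand-led image with a consumer-perceived image), a keyword-set overlap such as Jaccard similarity can serve as a toy proxy. The keyword sets below are invented examples, not the study's data.

```python
# Illustrative only: a simple proxy for brand-led vs. consumer-perceived
# image alignment is keyword-set overlap (Jaccard similarity).
# The keyword sets are hypothetical, not taken from the study.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

brand_led = {"patagonia": {"outdoor", "active", "environment"}}
perceived = {"patagonia": {"active", "environment", "recycling"}}

score = jaccard(brand_led["patagonia"], perceived["patagonia"])
print(f"alignment score: {score:.2f}")
```

A higher score would indicate that consumers perceive the brand much as the brand presents itself, mirroring the alignment assessment described in the abstract.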

User Playlist-Based Music Recommendation Using Music Metadata Embedding (음원 메타데이터 임베딩을 활용한 사용자 플레이리스트 기반 음악 추천)

  • Kyoung Min Nam;Yu Rim Park;Ji Young Jung;Do Hyun Kim;Hyon Hee Kim
    • The Transactions of the Korea Information Processing Society
    • /
    • v.13 no.8
    • /
    • pp.367-373
    • /
    • 2024
  • The growth of mobile devices and network infrastructure has brought significant changes to the music industry. Online streaming services have allowed music consumption without constraints of time and space, leading to increased consumer engagement in music creation and sharing and a vast accumulation of music data. In this study, we define a track's metadata as a "song sentence" built from a user's playlist. To calculate similarity, we embedded these sentences into a high-dimensional vector space using the skip-gram with negative sampling algorithm. Performance evaluation showed that the recommendation algorithm utilizing singers, genres, composers, lyricists, arrangers, eras, seasons, emotions, and tag lists exhibited the highest performance. Unlike conventional recommendation methods based on users' behavioral data, our approach relies on the inherent information of the tracks themselves, potentially addressing the cold-start problem and minimizing filter-bubble phenomena, thus providing a more convenient music listening experience.
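The "song sentence" representation can be sketched as follows. The paper embeds these sentences with skip-gram and negative sampling; as a lightweight, dependency-free stand-in, this sketch compares two tracks by cosine similarity of their binary metadata-token vectors. Field names and values are hypothetical.

```python
# Sketch of the "song sentence" idea: each track becomes a sentence of
# metadata tokens (singer, genre, season, ...). The paper embeds these with
# skip-gram + negative sampling; here, as a toy stand-in, tracks are compared
# by cosine similarity of binary token vectors. All values are hypothetical.
import math

def song_sentence(meta):
    # deterministic "sentence" of field:value tokens
    return [f"{k}:{v}" for k, v in sorted(meta.items())]

def cosine(a, b):
    a, b = set(a), set(b)
    if not a or not b:
        return 0.0
    return len(a & b) / math.sqrt(len(a) * len(b))

t1 = song_sentence({"singer": "A", "genre": "ballad", "season": "winter"})
t2 = song_sentence({"singer": "B", "genre": "ballad", "season": "winter"})
print(f"similarity: {cosine(t1, t2):.3f}")
```

Two tracks sharing genre and season but differing in singer still score well, which is how metadata-only similarity can recommend tracks without any user behavior data.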

Application of Domain-specific Thesaurus to Construction Documents based on Flow Margin of Semantic Similarity

  • Youmin PARK;Seonghyeon MOON;Jinwoo KIM;Seokho CHI
    • International conference on construction engineering and project management
    • /
    • 2024.07a
    • /
    • pp.375-382
    • /
    • 2024
  • Large Language Models (LLMs) still encounter challenges in comprehending domain-specific expressions within construction documents. Just as humans acquire unfamiliar expressions from dictionaries, language models could assimilate domain-specific expressions through a thesaurus. Numerous prior studies have developed construction thesauri; however, a practical issue arises in leveraging these resources effectively to instruct language models. Because a thesaurus primarily records relationships between terms without indicating their relative importance, language models may struggle to discern which terms to retain or replace. This research aims to establish a robust framework for guiding language models using the information in a thesaurus. For instance, a term can be associated with a list of similar terms while also appearing in the lists of other related terms. The relative significance among terms can be ascertained by employing similarity scores normalized according to relevance ranks. Consequently, a term exhibiting a positive margin of normalized similarity scores (termed a pivot term) can semantically replace other related terms, enabling LLMs to comprehend domain-specific terms through these pivot terms. The outcome of this research is a practical methodology for utilizing domain-specific thesauri to train LLMs and analyze construction documents. Ongoing evaluation involves validating the accuracy of the thesaurus-applied LLM (e.g., S-BERT) in identifying similarities within construction specification provisions. This outcome holds potential for the construction industry by enhancing LLMs' understanding of construction documents and thereby improving text mining performance and project management efficiency.
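One possible reading of the margin computation — an assumption on my part, since the abstract gives no formulas — is: normalize each term's outgoing similarity scores so they sum to one, then compare the normalized mass a term receives from related terms with the mass it sends out; a positive margin marks a pivot term that can stand in for its neighbors. The toy thesaurus below is invented.

```python
# Sketch of one possible "flow margin" computation (an assumption; the
# abstract gives no formulas). Outgoing scores are normalized per term, and a
# term receiving more normalized mass than it sends out (positive margin) is
# treated as a pivot that can replace its related terms.
def normalized_scores(neighbors):
    # neighbors: list of (term, raw_similarity) pairs for one thesaurus entry
    total = sum(score for _, score in neighbors)
    return {term: score / total for term, score in neighbors} if total else {}

def flow_margin(term, thesaurus):
    out_mass = sum(normalized_scores(thesaurus.get(term, [])).values())
    in_mass = sum(
        normalized_scores(nbrs).get(term, 0.0)
        for other, nbrs in thesaurus.items()
        if other != term
    )
    return in_mass - out_mass

# hypothetical mini-thesaurus of construction terms
thesaurus = {
    "rebar": [("reinforcing bar", 0.9), ("steel bar", 0.6)],
    "reinforcing bar": [("rebar", 0.9)],
    "steel bar": [("rebar", 0.8), ("reinforcing bar", 0.2)],
}
pivots = [t for t in thesaurus if flow_margin(t, thesaurus) > 0]
print(pivots)
```

In this toy graph "rebar" attracts the most normalized similarity mass, so it becomes the pivot term and could replace "reinforcing bar" and "steel bar" when preparing text for the language model.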

Topic Modeling Insomnia Social Media Corpus using BERTopic and Building Automatic Deep Learning Classification Model (BERTopic을 활용한 불면증 소셜 데이터 토픽 모델링 및 불면증 경향 문헌 딥러닝 자동분류 모델 구축)

  • Ko, Young Soo;Lee, Soobin;Cha, Minjung;Kim, Seongdeok;Lee, Juhee;Han, Ji Yeong;Song, Min
    • Journal of the Korean Society for information Management
    • /
    • v.39 no.2
    • /
    • pp.111-129
    • /
    • 2022
  • Insomnia is a chronic disease in modern society, with the number of new patients increasing by more than 20% in the last five years. Insomnia requires diagnosis and treatment because the individual and social problems caused by lack of sleep are serious and its triggers are complex. This study collected 5,699 posts from 'insomnia', a community on Reddit, a social media platform where opinions are freely expressed. Based on the International Classification of Sleep Disorders (ICSD-3) standard and guidelines developed with expert help, an insomnia corpus was constructed by tagging posts as insomnia-tendency or non-insomnia-tendency documents. Five deep learning language models (BERT, RoBERTa, ALBERT, ELECTRA, XLNet) were trained on the constructed corpus. In the performance evaluation, RoBERTa showed the highest performance with an accuracy of 81.33%. For an in-depth analysis of the insomnia social data, topic modeling was performed using BERTopic, a recently introduced method that supplements the weaknesses of the widely used LDA. The analysis identified eight topic groups ('Negative emotions', 'Advice, help, and gratitude', 'Insomnia-related diseases', 'Sleeping pills', 'Exercise and eating habits', 'Physical characteristics', 'Activity characteristics', 'Environmental characteristics'). Users expressed negative emotions and sought help and advice from the Reddit insomnia community. They also mentioned diseases related to insomnia, shared discourse on the use of sleeping pills, and expressed interest in exercise and eating habits. Among insomnia-related characteristics, we found physical characteristics such as breathing, pregnancy, and the heart; activity characteristics such as feeling like a zombie, hypnic jerks, and grogginess; and environmental characteristics such as sunlight, blankets, temperature, and naps.

Research on ITB Contract Terms Classification Model for Risk Management in EPC Projects: Deep Learning-Based PLM Ensemble Techniques (EPC 프로젝트의 위험 관리를 위한 ITB 문서 조항 분류 모델 연구: 딥러닝 기반 PLM 앙상블 기법 활용)

  • Hyunsang Lee;Wonseok Lee;Bogeun Jo;Heejun Lee;Sangjin Oh;Sangwoo You;Maru Nam;Hyunsik Lee
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.11
    • /
    • pp.471-480
    • /
    • 2023
  • Construction order volume in South Korea grew significantly from 91.3 trillion won in public orders in 2013 to a total of 212 trillion won in 2021, driven particularly by the private sector. As the domestic and overseas markets grew, the scale and complexity of EPC (Engineering, Procurement, Construction) projects increased, and risk management of project management and ITB (Invitation to Bid) documents became a critical issue. The time granted to construction companies in the bidding process following an EPC project award is limited, and reviewing all the risk terms in an ITB document is extremely challenging due to manpower and cost constraints. Previous research attempted to categorize the risk terms in EPC contract documents and detect them with AI, but limits on labeled data and class imbalance restricted practical use. This study therefore aims to develop an AI model that categorizes contract terms in detail based on the FIDIC (Fédération Internationale des Ingénieurs-Conseils) Yellow Book 2017 contract terms, rather than defining and classifying risk terms as in previous research. A multi-text classification function is necessary because the contract terms that need detailed review vary with the scale and type of project. To enhance the performance of the multi-text classification model, we developed an ELECTRA PLM (Pre-trained Language Model) capable of efficiently learning the context of text data from the pre-training stage, and conducted a four-step experiment to validate its performance. As a result, an ensemble of the self-developed ITB-ELECTRA model and Legal-BERT achieved the best performance, with a weighted-average F1-score of 76% in the classification of 57 contract terms.
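The abstract does not spell out the ensembling scheme; a common choice for combining two classifiers such as ITB-ELECTRA and Legal-BERT (assumed here, not confirmed by the paper) is soft voting: average the two models' class-probability vectors and predict the argmax. The probability vectors below are made up for illustration.

```python
# Soft-voting ensemble sketch (an assumed scheme, not confirmed by the
# paper): average the two models' class-probability vectors per clause and
# predict the highest-scoring contract-term class.
def soft_vote(probs_a, probs_b, weight_a=0.5):
    return [weight_a * a + (1 - weight_a) * b for a, b in zip(probs_a, probs_b)]

def predict(probs_a, probs_b):
    avg = soft_vote(probs_a, probs_b)
    return max(range(len(avg)), key=avg.__getitem__)

# hypothetical per-clause probabilities over three contract-term classes
electra_probs = [0.1, 0.7, 0.2]
legal_bert_probs = [0.4, 0.3, 0.3]
print(predict(electra_probs, legal_bert_probs))
```

Soft voting lets a confident model outvote an uncertain one, which often explains why an ensemble of a domain-adapted model and a legal-domain model beats either alone.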

Target-Aspect-Sentiment Joint Detection with CNN Auxiliary Loss for Aspect-Based Sentiment Analysis (CNN 보조 손실을 이용한 차원 기반 감성 분석)

  • Jeon, Min Jin;Hwang, Ji Won;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.4
    • /
    • pp.1-22
    • /
    • 2021
  • Aspect-Based Sentiment Analysis (ABSA), which analyzes sentiment based on the aspects that appear in a text, is drawing attention because it can be used across business industries. ABSA analyzes sentiment per aspect for the multiple aspects a text contains, and is studied in various forms depending on the purpose, such as analyzing all targets or only aspects and sentiments. Here, an aspect refers to a property of a target, and a target refers to the text that causes the sentiment. For restaurant reviews, for example, the aspects could be food taste, food price, quality of service, mood of the restaurant, and so on. And in a review that says, "The pasta was delicious, but the salad was not," the words "pasta" and "salad," which are directly mentioned in the sentence, are the targets. So far, most ABSA studies have analyzed sentiment based only on aspects or targets. However, even with the same aspects or targets, sentiment analysis may be inaccurate when aspects or sentiments are divided or when sentiment exists without a target. Consider "Pizza and the salad were good, but the steak was disappointing": although the aspect is limited to "food," conflicting sentiments coexist. Likewise, in "Shrimp was delicious, but the price was extravagant," the target is "shrimp," yet opposite sentiments coexist depending on the aspect. Finally, in "The food arrived too late and is cold now," there is no target (NULL), but the sentence conveys a negative sentiment toward the aspect "service." Failure to consider both aspects and targets in such cases — when sentiment or aspect is divided or when sentiment exists without a target — creates a dual dependency problem.
To address this problem, this research analyzes sentiment by considering both aspects and targets (Target-Aspect-Sentiment Detection, hereafter TASD). This study identified limitations of existing TASD research: local contexts are not fully captured, and small epoch counts and batch sizes dramatically lower the F1-score. Existing models excel at capturing overall context and the relations between words, but struggle with phrases in the local context and learn relatively slowly. This study therefore tries to improve the model's performance. To that end, we added an auxiliary loss for aspect-sentiment classification by constructing CNN (Convolutional Neural Network) layers in parallel to the existing model. Where existing models analyze aspect-sentiment through BERT encoding, pooler, and linear layers, this research adds CNN layers with adaptive average pooling, and training proceeds with the additional aspect-sentiment loss added to the existing loss. In other words, during training, the auxiliary loss computed through the CNN layers allows the local context to be captured more accurately; after training, the model performs aspect-sentiment analysis through the existing method. To evaluate the model, two datasets, SemEval-2015 Task 12 and SemEval-2016 Task 5, were used, and the F1-score increased compared to existing models. With a batch size of 8 and 5 epochs, the gap was largest: F1-scores of 29 for existing models versus 45 for this study. Even when batch size and epochs were adjusted, the F1-scores remained higher than those of existing models, showing that the model learns effectively even with small batch and epoch numbers and can thus be useful when resources are limited. Through this study, aspect-based sentiments can be analyzed more accurately.
Through various business uses, such as product development or establishing marketing strategies, both consumers and sellers will be able to make efficient decisions. In addition, because the approach uses a pre-trained model and recorded a relatively high F1-score even with limited resources, it is believed that the model can be fully trained and utilized by small businesses that do not have much data.
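The auxiliary-loss idea can be sketched framework-free: a CNN branch ending in adaptive average pooling yields an extra aspect-sentiment loss that is simply added, with a weight, to the existing loss during training. The pooling function and the weight value below are simplified stand-ins for the paper's actual layers, not its implementation.

```python
# Framework-free sketch of the two pieces the abstract describes:
# (1) 1D adaptive average pooling, which reduces any sequence length to a
#     fixed output length by averaging roughly equal chunks, and
# (2) the training objective: existing loss plus a weighted auxiliary
#     aspect-sentiment loss from the CNN branch (weight is hypothetical).
def adaptive_avg_pool1d(xs, out_len):
    n = len(xs)
    pooled = []
    for i in range(out_len):
        start, end = i * n // out_len, (i + 1) * n // out_len
        chunk = xs[start:end] or [0.0]
        pooled.append(sum(chunk) / len(chunk))
    return pooled

def total_loss(main_loss, aux_loss, aux_weight=0.5):
    # auxiliary CNN loss is only added during training; inference uses the
    # existing aspect-sentiment head alone
    return main_loss + aux_weight * aux_loss

print(adaptive_avg_pool1d([1.0, 2.0, 3.0, 4.0], 2))
print(total_loss(1.0, 0.4))
```

Adaptive pooling is what lets the CNN branch accept variable-length token sequences while emitting a fixed-size feature for the auxiliary classifier.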