• Title/Summary/Keyword: Language Processing


A Study on the Health Index Based on Degradation Patterns in Time Series Data Using ProphetNet Model (ProphetNet 모델을 활용한 시계열 데이터의 열화 패턴 기반 Health Index 연구)

  • Sun-Ju Won;Yong Soo Kim
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.46 no.3
    • /
    • pp.123-138
    • /
    • 2023
  • The Fourth Industrial Revolution and sensor technology have led to increased utilization of sensor data. In our modern society, data complexity is rising, and the extraction of valuable information has become crucial with the rapid changes in information technology (IT). Recurrent neural networks (RNN) and long short-term memory (LSTM) models have shown remarkable performance in natural language processing (NLP) and time series prediction. Consequently, there is a strong expectation that models excelling in NLP will also excel in time series prediction. However, current research on Transformer models for time series prediction remains limited. Traditional RNN and LSTM models have demonstrated superior performance compared to Transformers in big data analysis. Nevertheless, with continuous advancements in Transformer models such as GPT-2 (Generative Pre-trained Transformer 2) and ProphetNet, these architectures have gained attention in the field of time series prediction. This study aims to evaluate the classification performance and interval prediction of remaining useful life (RUL) using an advanced Transformer model. The performance of each model will be utilized to establish a health index (HI) for cutting blades, enabling real-time monitoring of machine health. The results are expected to provide valuable insights for machine monitoring, evaluation, and management, confirming the effectiveness of advanced Transformer models in time series analysis when applied in industrial settings.
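A health index maps a degradation signal onto a bounded health scale. The sketch below is a common linear normalization, not the paper's model-based formulation (which derives the HI from ProphetNet predictions); the thresholds are illustrative placeholders.

```python
def health_index(signal, healthy, failed):
    """Map a degradation measurement onto [0, 1]: 1.0 = healthy, 0.0 = failed.

    A linear normalization sketch; the paper's actual HI is model-based,
    so treat `healthy` and `failed` as placeholder thresholds.
    """
    hi = (failed - signal) / (failed - healthy)
    return max(0.0, min(1.0, hi))  # clamp to the valid range

# A blade measured at 0.3 between a healthy baseline of 0.1 and a
# failure threshold of 0.9 sits at HI = 0.75.
print(health_index(0.3, healthy=0.1, failed=0.9))
```

In practice the HI curve is tracked over time and an alarm or RUL estimate is derived from how quickly it approaches zero.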

Recent Research Trend Analysis for the Journal of Society of Korea Industrial and Systems Engineering Using Topic Modeling (토픽모델링을 활용한 한국산업경영시스템학회지의 최근 연구주제 분석)

  • Dong Joon Park;Pyung Hoi Koo;Hyung Sool Oh;Min Yoon
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.46 no.3
    • /
    • pp.170-185
    • /
    • 2023
  • The advent of big data has brought about the need for analytics. Natural language processing (NLP), a field of big data analytics, has received a lot of attention. Topic modeling, a technique within NLP, is widely applied to identify key topics in various academic journals. The Korean Society of Industrial and Systems Engineering (KSIE) has published academic journals since 1978. To enhance its status, it is imperative to recognize the diversity of its research domains. We previously identified eight major research topics for papers published by KSIE from 1978 to 1999. As a follow-up study, we aim to identify major topics of research papers published in KSIE from 2000 to 2022. We performed topic modeling on 1,742 research papers from this period using LDA and BERTopic, the latter of which has recently attracted attention. BERTopic outperformed LDA by providing coherent sets of topic keywords that effectively distinguish the 36 topics identified in this study. In terms of visualization techniques, pyLDAvis presented better two-dimensional scatter plots for the intertopic distance map than BERTopic. However, BERTopic provided far more diverse visualization methods for exploring the relevance of the 36 topics. BERTopic was also able to classify hot and cold topics by presenting 'topics over time' graphs that identify topic trends over time.

Verification on stock return predictability of text in analyst reports (애널리스트 보고서 텍스트의 주가예측력에 대한 검증)

  • Young-Sun Lee;Akihiko Yamada;Cheol-Won Yang;Hohsuk Noh
    • The Korean Journal of Applied Statistics
    • /
    • v.36 no.5
    • /
    • pp.489-499
    • /
    • 2023
  • As analyst reports have become widely shared, they have become a useful tool for reducing the information gap between market participants. The quantitative information in analyst reports has been used in many ways to predict stock returns. However, relatively few domestic studies have examined the power of the text in analyst reports to predict stock returns. We test the stock return predictability of the text in analyst reports by creating variables representing its TONE. To overcome the limitations of approaches based on a linear-model assumption, we use a random-forest-based F-test.
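A TONE variable of this kind is typically built from counts of opinion words, as in sentiment-dictionary approaches to financial text. A minimal sketch, assuming a dictionary-count formulation; the word lists below are illustrative placeholders, not the paper's actual dictionary.

```python
POSITIVE = {"growth", "improve", "exceed", "strong"}   # illustrative word lists,
NEGATIVE = {"decline", "risk", "loss", "weak"}         # not the paper's dictionary

def tone(text):
    """TONE = (pos - neg) / (pos + neg); 0.0 when no opinion words appear."""
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / (pos + neg) if pos + neg else 0.0

# Two positive cues against one negative cue yields TONE = 1/3.
print(tone("strong growth expected despite risk"))
```

The resulting TONE score per report can then serve as a predictor in a return-forecasting model, linear or otherwise.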

Analyzing employment trends in response to AI exposure: K-shaped labor polarization in Korea (인공지능 노출 정도에 따른 고용 추세 분석: K자형 고용 양극화)

  • Lee, Yeseul;Hwang, Hyeonjun
    • Informatization Policy
    • /
    • v.30 no.3
    • /
    • pp.69-91
    • /
    • 2023
  • The impact of technological advancements on employment is a matter of ongoing debate, and discussions of the effects of AI technology development on employment are particularly scarce. This study employs a natural language processing technique (SBERT) and patent data to calculate an occupation-based AI exposure score and to analyze employment trends by group. It proposes a method for calculating the AI exposure score based on the similarity between Korean patent information and US job descriptions, linking the SOC (U.S.) and KSCO (Korea) occupation classifications. The analysis of domestic AI patent applications and regional employment data in the KOSIS Database since 2013 reveals a K-shaped polarization pattern in Korean employment trends between groups with above- and below-average levels of AI exposure.
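Similarity-based exposure scores of this kind rest on cosine similarity between dense sentence embeddings (the kind SBERT produces). A minimal sketch with toy vectors; the averaging over patent embeddings is an assumed simplification, not the paper's exact scoring formula.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two dense vectors (e.g., SBERT embeddings)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def exposure_score(patent_vecs, job_vec):
    """AI exposure of an occupation as the mean similarity of its job-description
    embedding to a set of AI patent embeddings (a toy formulation)."""
    return sum(cosine_similarity(p, job_vec) for p in patent_vecs) / len(patent_vecs)

# A job description aligned with one of two patents scores 0.5 on average.
print(exposure_score([[1.0, 0.0], [0.0, 1.0]], [1.0, 0.0]))
```

With real SBERT embeddings the vectors would be a few hundred dimensions, but the scoring logic is the same.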

Development of a Fake News Detection Model Using Text Mining and Deep Learning Algorithms (텍스트 마이닝과 딥러닝 알고리즘을 이용한 가짜 뉴스 탐지 모델 개발)

  • Dong-Hoon Lim;Gunwoo Kim;Keunho Choi
    • Information Systems Review
    • /
    • v.23 no.4
    • /
    • pp.127-146
    • /
    • 2021
  • Fake news spreads and is reproduced rapidly, regardless of its authenticity, owing to the characteristics of modern society, the so-called information age. Assuming that 1% of all news is fake, the economic cost is estimated at about 30 trillion Korean won. This shows that fake news is a serious social and economic issue. Therefore, this study aims to develop an automated detection model to quickly and accurately verify the authenticity of news. To this end, we crawled news data whose authenticity had been verified and developed fake news prediction models using word embeddings (Word2Vec, FastText) and deep learning algorithms (LSTM, BiLSTM). Experimental results show that the prediction model combining BiLSTM with Word2Vec achieved the best accuracy, 84%.

Study on the Implementation of SBOM(Software Bill Of Materials) in Operational Nuclear Facilities (가동 중 원자력시설의 SBOM(Software Bill Of Materials)구현방안 연구)

  • Do-yeon Kim;Seong-su Yoon;Ieck-chae Euom
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.34 no.2
    • /
    • pp.229-244
    • /
    • 2024
  • Recently, supply chain attacks against nuclear facilities, such as "Evil PLC", have been increasing as digital technology is applied in nuclear power plants such as the APR1400 reactor. Given the nature of the industry, nuclear supply chain security requires an asset management system that can systematically manage a large number of suppliers. However, because of the long lifecycle of control system software assets, their attribute information is managed inconsistently. In addition, because operational technology must remain available, automated configuration management has seen little adoption, and limitations such as input errors persist. This study proposes a systematic asset management system using an SBOM (Software Bill Of Materials) and a method to reduce input errors using natural language processing techniques.
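An SBOM records each software component with its supplier and version so that long-lived assets can be tracked consistently. A minimal sketch of a component record, loosely following CycloneDX field names; the component names and values are illustrative, and a real nuclear-facility SBOM would carry vendor-verified attributes and hashes.

```python
import json

# Minimal SBOM document, loosely following CycloneDX field names.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {
            "type": "firmware",
            "name": "plc-runtime",          # illustrative component
            "version": "4.2.1",
            "supplier": {"name": "ExampleVendor"},
        }
    ],
}

def component_names(doc):
    """List component names so long-lifecycle assets can be cross-checked
    against supplier advisories."""
    return [c["name"] for c in doc.get("components", [])]

print(json.dumps(sbom, indent=2))
```

Keeping such records machine-readable is what makes automated cross-checking, and NLP-based cleanup of manually entered attributes, feasible.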

Research on analysis of articleable advertisements and design of extraction method for articleable advertisements using deep learning

  • Seoksoo Kim;Jae-Young Jung
    • Journal of the Korea Society of Computer and Information
    • /
    • v.29 no.6
    • /
    • pp.13-22
    • /
    • 2024
  • Article-style advertising has a legitimate role and positive aspects, but because some indiscriminate 'articleable advertisements' deliver exaggerated or disguised information, readers have difficulty distinguishing between general articles and articleable advertisements, leading to widespread misinterpretation and confusion. Since readers continually acquire new information and derive value from applying it at the right time and place, distinguishing accurate general articles from articleable advertisements is all the more important. Accordingly, information that differentiates general articles from articleable advertisements is needed. For readers of Internet newspapers who struggle to identify accurate information amid such indiscriminate articleable advertisements, this paper presents a system-level solution incorporating IT and AI technologies: a method designed to extract articleable advertisements using a knowledge-based natural language processing approach that finds and refines advertising keywords, combined with deep learning technology.
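The knowledge-based first stage of such a method amounts to matching a refined advertising-keyword list against article text. A minimal sketch, assuming a simple cue-fraction score; the keyword set and threshold are illustrative placeholders, not the paper's refined list.

```python
# Illustrative advertising-cue keywords; the paper builds and refines such a
# list via knowledge-based NLP, so treat this set as a placeholder.
AD_KEYWORDS = {"discount", "limited offer", "buy now", "event", "coupon"}

def ad_score(article_text):
    """Fraction of known advertising cues present in the article."""
    text = article_text.lower()
    return sum(kw in text for kw in AD_KEYWORDS) / len(AD_KEYWORDS)

def is_articleable_ad(article_text, threshold=0.4):
    """Rule-based first pass; the paper pairs keyword matching with a
    deep learning model for the final extraction."""
    return ad_score(article_text) >= threshold
```

A deep learning classifier would then handle the cases this keyword pass cannot separate, such as ads that avoid obvious promotional wording.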

F_MixBERT: Sentiment Analysis Model using Focal Loss for Imbalanced E-commerce Reviews

  • Fengqian Pang;Xi Chen;Letong Li;Xin Xu;Zhiqiang Xing
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.18 no.2
    • /
    • pp.263-283
    • /
    • 2024
  • Users' comments after online shopping are critical to product reputation and business improvement. These comments, sometimes known as e-commerce reviews, influence other customers' purchasing decisions. To handle large volumes of e-commerce reviews, automatic analysis based on machine learning and deep learning has drawn increasing attention. A core task therein is sentiment analysis. However, e-commerce reviews exhibit the following characteristics: (1) inconsistency between comment content and the star rating; (2) a large amount of unlabeled data, i.e., comments without a star rating; and (3) data imbalance caused by sparse negative comments. This paper employs Bidirectional Encoder Representations from Transformers (BERT), one of the best natural language processing models, as the base model. Given the above data characteristics, we propose the F_MixBERT framework to make more effective use of inconsistent, low-quality, and unlabeled data and to resolve the problem of data imbalance. In the framework, the proposed MixBERT incorporates the MixMatch approach into BERT's high-dimensional vectors to train on the unlabeled and low-quality data with generated pseudo labels. Meanwhile, data imbalance is resolved by focal loss, which penalizes the contribution of large-scale and easily identifiable data to the total loss. Comparative experiments demonstrate that the proposed framework outperforms BERT and MixBERT for sentiment analysis of e-commerce comments.
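Focal loss down-weights easy examples so that sparse, hard ones (here, negative reviews) dominate training. A minimal per-example sketch of the standard formulation, FL(p_t) = -α(1 - p_t)^γ log(p_t); the default α and γ below are the commonly used values, not necessarily the paper's settings.

```python
import math

def focal_loss(p_t, alpha=0.25, gamma=2.0):
    """Focal loss for one example, where p_t is the model's predicted
    probability for the true class. With gamma = 0 it reduces to
    alpha-weighted cross-entropy; larger gamma shrinks the loss of
    well-classified examples (p_t near 1) toward zero."""
    return -alpha * (1 - p_t) ** gamma * math.log(p_t)

# An easy example (p_t = 0.9) contributes far less loss than a hard one
# (p_t = 0.1), which is what rebalances training toward rare classes.
print(focal_loss(0.9), focal_loss(0.1))
```

Summing this over a batch replaces the usual cross-entropy term in BERT fine-tuning.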

A Method for Measuring Inter-Utterance Similarity Considering Various Linguistic Features (다양한 언어적 자질을 고려한 발화간 유사도 측정 방법)

  • Lee, Yeon-Su;Shin, Joong-Hwi;Hong, Gum-Won;Song, Young-In;Lee, Do-Gil;Rim, Hae-Chang
    • The Journal of the Acoustical Society of Korea
    • /
    • v.28 no.1
    • /
    • pp.61-69
    • /
    • 2009
  • This paper presents an improved method for measuring inter-utterance similarity in an example-based dialogue system, which searches for the most similar utterance in a dialogue database to generate a response to a given user utterance. Unlike general inter-sentence similarity measures, an inter-utterance similarity measure for an example-based dialogue system should consider not only word distribution but also various linguistic features, such as affirmation/negation, tense, modality, and sentence type, which affect natural conversation. However, previous approaches do not sufficiently reflect these features. This paper proposes a new utterance similarity measure that analyzes and reflects various linguistic features to improve accuracy. Also, by considering the substitutability of the features, the proposed method can make effective use of a limited number of examples. Experimental results show that the proposed method achieves a 10%p improvement in accuracy over the previous method.
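A measure of this kind blends word-distribution similarity with agreement on linguistic features. A minimal sketch assuming Jaccard word overlap and an equal weighting; the paper's actual measure and weights differ, and the feature dictionaries here are illustrative.

```python
def lexical_similarity(u1, u2):
    """Word-overlap (Jaccard) similarity between two utterances."""
    w1, w2 = set(u1.split()), set(u2.split())
    return len(w1 & w2) / len(w1 | w2) if w1 | w2 else 0.0

def utterance_similarity(u1, u2, feats1, feats2, weight=0.5):
    """Blend word distribution with linguistic-feature agreement
    (affirmation/negation, tense, modality, sentence type).
    The 50/50 weighting is illustrative, not the paper's tuned value."""
    matched = sum(feats1[k] == feats2[k] for k in feats1)
    feature_sim = matched / len(feats1)
    return weight * lexical_similarity(u1, u2) + (1 - weight) * feature_sim

f1 = {"negation": False, "tense": "past", "type": "statement"}
f2 = {"negation": True, "tense": "past", "type": "statement"}
# Identical words but a negation mismatch: similarity drops below 1.0.
print(utterance_similarity("i went home", "i went home", f1, f2))
```

This is why "I went home" and "I didn't go home" should not be treated as near-identical even though their word overlap is high.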

The new frontier: utilizing ChatGPT to expand craniofacial research

  • Andi Zhang;Ethan Dimock;Rohun Gupta;Kevin Chen
    • Archives of Craniofacial Surgery
    • /
    • v.25 no.3
    • /
    • pp.116-122
    • /
    • 2024
  • Background: Due to the importance of evidence-based research in plastic surgery, the authors of this study aimed to assess the accuracy of ChatGPT in generating novel systematic review ideas within the field of craniofacial surgery. Methods: ChatGPT was prompted to generate 20 novel systematic review ideas for each of 10 subcategories within the field of craniofacial surgery. For each topic, the chatbot was told to give 10 "general" and 10 "specific" ideas related to the concept. To determine the accuracy of ChatGPT, a literature review was conducted using PubMed, CINAHL, Embase, and Cochrane. Results: In total, 200 systematic review research ideas were generated by ChatGPT. We found that the algorithm had an overall 57.5% accuracy at identifying novel systematic review ideas. ChatGPT was found to be 39% accurate for general topics and 76% accurate for specific topics. Conclusion: Craniofacial surgeons should use ChatGPT as a tool. We found that ChatGPT provided more precise answers to specific research questions than to general ones and helped narrow the search scope, leading to more relevant and accurate responses. Beyond research purposes, ChatGPT can augment patient consultations, improve healthcare equity, and assist in clinical decision-making. With rapid advancements in artificial intelligence (AI), it is important for plastic surgeons to consider using AI in their clinical practice to improve patient-centered outcomes.