• Title/Summary/Keyword: topic modeling techniques


A Comparative Study on the Types and its Importance of Trade Claims between China and the United States: Using Text Mining Techniques (중국과 미국의 무역클레임 유형과 중요도 비교 연구 : 텍스트 마이닝 기법을 활용하여)

  • Cheon Yu;Yun-Seop Hwang
    • Korea Trade Review
    • /
    • v.47 no.3
    • /
    • pp.177-190
    • /
    • 2022
  • This study identifies differences in the types and importance of trade claims at the national level. For analysis data, abstracts of arbitration and court judgments published on the website of the United Nations Commission on International Trade Law were collected: 102 cases from China and 59 cases from the United States. By applying topic modeling techniques to the collected decisions of the two countries, trade claims are categorized, and the importance of each type is identified using network centrality indices derived through semantic network analysis. The results are as follows. The main types of trade claims were the same for both countries: product nonconformity, delivery issues, and payments. However, in China the order of importance was product nonconformity > delivery issues > payments, whereas in the United States it was payments > product nonconformity > delivery issues. This study is significant in that it presents a strategic trade claim management plan using a quantitative methodology.
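
To make the two-step pipeline concrete, here is a minimal sketch of the general approach: LDA topic modeling over case abstracts, then centrality scoring on a word co-occurrence network. The toy corpus, topic count, and parameters are illustrative assumptions, not the authors' data or settings.

```python
from itertools import combinations

import networkx as nx
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Toy abstracts standing in for the UNCITRAL case summaries (illustrative only).
docs = [
    "buyer rejected goods for product nonconformity and quality defects",
    "seller delayed delivery and buyer claimed damages for late shipment",
    "buyer failed to make payment and seller claimed the contract price",
    "goods did not conform to contract specifications and were returned",
]

# Step 1: categorize claims with LDA topic modeling.
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-4:][::-1]]
    print(f"topic {k}: {top}")

# Step 2: rank claim terms by degree centrality on a word co-occurrence network,
# a simple proxy for the semantic network analysis the paper describes.
G = nx.Graph()
for doc in docs:
    for u, v in combinations(set(doc.split()), 2):
        G.add_edge(u, v)
print(sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1])[:5])
```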

Twitter Issue Tracking System by Topic Modeling Techniques (토픽 모델링을 이용한 트위터 이슈 트래킹 시스템)

  • Bae, Jung-Hwan;Han, Nam-Gi;Song, Min
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.109-122
    • /
    • 2014
  • People nowadays create a tremendous amount of data on Social Network Services (SNS). In particular, the incorporation of SNS into mobile devices has resulted in massive amounts of data generation, thereby greatly influencing society. This is an unmatched phenomenon in history, and we now live in the Age of Big Data. SNS data qualifies as Big Data in that it satisfies the three conditions of volume (amount of data), velocity (data input and output speed), and variety (diversity of data types). Trends of issues discovered in SNS Big Data can serve as an important new source of value creation because they cover the whole of society. In this study, a Twitter Issue Tracking System (TITS) is designed and built to meet the need for analyzing SNS Big Data. TITS extracts issues from Twitter texts and visualizes them on the web. The proposed system provides four functions: (1) provide the topic keyword set corresponding to the daily ranking; (2) visualize the daily time-series graph of a topic for the duration of a month; (3) show the importance of a topic through a treemap based on a score system and frequency; (4) visualize the daily time-series graph of a searched keyword. The present study analyzes the Big Data generated by SNS in real time. SNS Big Data analysis requires various natural language processing techniques, including stop-word removal and noun extraction, for processing various unrefined forms of unstructured data. Such analysis also requires the latest big data technology to rapidly process large amounts of real-time data, such as the Hadoop distributed system or NoSQL, an alternative to relational databases. We built TITS on Hadoop to optimize big data processing, because Hadoop is designed to scale from single-node computing to thousands of machines. Furthermore, we use MongoDB, an open-source, document-oriented NoSQL database that provides high performance, high availability, and automatic scaling. Unlike relational databases, MongoDB has no schemas or tables, and its primary goals are data accessibility and data-processing performance. In the Age of Big Data, visualization is attractive to the Big Data community because it helps analysts examine data easily and clearly. TITS therefore uses the d3.js library as a visualization tool. This library is designed to create Data-Driven Documents that bind the document object model (DOM) to arbitrary data, making interaction with the data easy and well suited to managing real-time data streams with smooth animation. In addition, TITS uses Bootstrap, a set of pre-configured style sheets and JavaScript plug-ins, to build the web system. The TITS Graphical User Interface (GUI) is designed using these libraries and is capable of detecting issues on Twitter in an easy and intuitive manner. The proposed work demonstrates the superiority of our issue detection techniques by matching detected issues with corresponding online news articles. The contributions of the present study are threefold. First, we suggest an alternative approach to real-time big data analysis, which has become an extremely important issue. Second, we apply a topic modeling technique used in various research areas, including Library and Information Science (LIS), and confirm the utility of storytelling and time-series analysis. Third, we develop a web-based system and make it available for the real-time discovery of topics. The experiments were conducted with nearly 150 million tweets collected in Korea during March 2013.
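
A minimal sketch of function (1), the daily topic keyword set, assuming a tiny per-day LDA over toy tweets; the paper's actual pipeline runs on Hadoop and MongoDB at a vastly larger scale, which this sketch does not attempt to reproduce.

```python
from collections import defaultdict

from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Toy dated tweets standing in for the 150M-tweet Korean stream (illustrative).
tweets = [
    ("2013-03-01", "subway fare hike announced by the city"),
    ("2013-03-01", "protest against the subway fare hike downtown"),
    ("2013-03-02", "new phone launch draws long lines"),
    ("2013-03-02", "crowds gather for the phone launch event"),
]

# Group tweets by day, then extract a small topic keyword set per day.
by_day = defaultdict(list)
for day, text in tweets:
    by_day[day].append(text)

for day, docs in sorted(by_day.items()):
    vec = CountVectorizer(stop_words="english")
    X = vec.fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=1, random_state=0).fit(X)
    terms = vec.get_feature_names_out()
    top = [terms[i] for i in lda.components_[0].argsort()[-3:][::-1]]
    print(day, top)
```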

Fake News Detection for Korean News Using Text Mining and Machine Learning Techniques (텍스트 마이닝과 기계 학습을 이용한 국내 가짜뉴스 예측)

  • Yun, Tae-Uk;Ahn, Hyunchul
    • Journal of Information Technology Applications and Management
    • /
    • v.25 no.1
    • /
    • pp.19-32
    • /
    • 2018
  • Fake news is defined as news articles that are intentionally and verifiably false and could mislead readers. The spread of fake news may provoke anxiety, chaos, fear, or irrational decisions among the public. Thus, detecting fake news and preventing its spread has become a very important issue in our society. However, due to the huge amount of fake news produced every day, it is almost impossible to identify it manually. In this context, researchers have tried to develop automated fake news detection methods using Artificial Intelligence techniques over the past years. Unfortunately, however, no prior study has proposed an automated fake news detection method for Korean news. In this study, we aim to detect Korean fake news using text mining and machine learning techniques. Our proposed method consists of two steps. In the first step, the news content to be analyzed is converted into quantified values using various text mining techniques (topic modeling, TF-IDF, and so on). In the second step, classifiers are trained on the values produced in step 1. As classifiers, machine learning techniques such as multiple discriminant analysis, case-based reasoning, artificial neural networks, and support vector machines can be applied. To validate the effectiveness of the proposed method, we collected 200 Korean news articles from Seoul National University's FactCheck (http://factcheck.snu.ac.kr), which provides detailed analysis reports from about 20 media outlets and links to source documents for each case. Using this dataset, we identify which text features are important as well as which classifiers are effective in detecting Korean fake news.
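
A minimal sketch of the two-step method under stated assumptions: toy labeled articles stand in for the FactCheck dataset, TF-IDF quantifies the text, and a support vector machine (one of the classifiers the abstract lists) is trained on the result.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy labeled articles standing in for the 200 FactCheck news items (illustrative).
texts = [
    "miracle cure discovered doctors hate this secret trick",
    "shocking secret the government does not want you to know",
    "celebrity endorses unproven miracle weight loss cure",
    "anonymous source claims aliens built the pyramids",
    "parliament passed the revised budget bill on tuesday",
    "central bank held its policy rate steady this quarter",
    "the ministry released employment statistics for march",
    "city council approved funding for the new subway line",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]  # 1 = fake, 0 = real

# Step 1: quantify text with TF-IDF; step 2: train a classifier (here an SVM).
model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(texts, labels)
print(model.predict(["secret miracle trick the ministry hates"]))
```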

An Automatically Extracting Formal Information from Unstructured Security Intelligence Report (비정형 Security Intelligence Report의 정형 정보 자동 추출)

  • Hur, Yuna;Lee, Chanhee;Kim, Gyeongmin;Jo, Jaechoon;Lim, Heuiseok
    • Journal of Digital Convergence
    • /
    • v.17 no.11
    • /
    • pp.233-240
    • /
    • 2019
  • In order to predict and respond to cyber attacks, many security companies quickly identify the methods, types, and characteristics of attack techniques and publish Security Intelligence Reports (SIRs) on them. However, the SIRs distributed by each company are huge and unstructured. In this paper, we propose a framework that structures the reports and extracts key information, reducing the time required to extract information from large unstructured SIRs. Since the SIR data have no ground-truth labels, we apply four unsupervised analysis techniques: keyword extraction, topic modeling, summarization, and document similarity. Finally, we build a dataset for extracting threat information from SIRs and apply Named Entity Recognition (NER) to recognize words belonging to the IP, Domain/URL, Hash, and Malware categories and to determine the type of each word. In total, the proposed framework applies five analysis techniques, including NER.
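
A minimal sketch of two of the five techniques, assuming toy report snippets: document similarity via TF-IDF and cosine similarity, plus a simple regex pass as a stand-in for the NER step (the paper trains a proper NER model; the patterns here are simplified assumptions).

```python
import re

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy report snippets standing in for real SIRs (illustrative).
reports = [
    "The malware beacons to 192.0.2.44 and drops a payload named loader.exe",
    "Phishing campaign uses a lookalike domain to deliver the same loader",
    "Ransomware sample with hash e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
]

# Document similarity (one of the four unsupervised techniques): TF-IDF + cosine.
X = TfidfVectorizer().fit_transform(reports)
print(cosine_similarity(X).round(2))

# A regex stand-in for the NER step: tag IP and hash indicators by surface form.
patterns = {
    "IP": r"\b(?:\d{1,3}\.){3}\d{1,3}\b",
    "Hash": r"\b[a-f0-9]{64}\b",
}
for report in reports:
    for label, pat in patterns.items():
        for match in re.findall(pat, report):
            print(label, match)
```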

Analysis on Research Trends in Sport Facilities: Focusing on SCOPUS DB (스포츠시설에 관한 연구 동향 분석: SCOPUS DB를 중심으로)

  • Kim, Il-Gwang;Park, Seong-Taek;Park, Su-Sun;Kim, Mi-Suk;Park, Jong-Chul;Jiang, Jialei
    • Journal of Industrial Convergence
    • /
    • v.19 no.6
    • /
    • pp.11-19
    • /
    • 2021
  • The purpose of this study is to explore trends in domestic and international research related to "sport facilities" and to suggest directions for further research. 1,801 abstracts of papers containing "sport facilities" were collected from the SCOPUS DB for 2016 to 2020. Topic modeling based on the Latent Dirichlet Allocation (LDA) algorithm implemented in the R language, TF-IDF techniques, and word clouds generated with Tagxedo were used to analyze the data. As a result, 8 topics were optimally determined, and "sports", "facilities", "health", "physical", "data", and "using" were derived as the main keywords of the topics. These results indicate that studies on physical activity, health, and facility usage regarding sport facilities at home and abroad have been actively carried out in recent years, and that papers in the SCOPUS DB pay attention to the instrumental value of sport facilities, such as health promotion and improving the quality of life. Therefore, various studies that help participants who use sport facilities for a healthy life should be continuously conducted in the future.
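
The paper's LDA was implemented in R; the sketch below approximates one step of that workflow, choosing a topic count by perplexity, in Python with illustrative toy abstracts (the paper settled on 8 topics over 1,801 real abstracts).

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Toy abstracts standing in for the 1,801 SCOPUS records (illustrative only).
docs = [
    "sport facilities promote physical activity and public health",
    "stadium construction data and facility management using sensors",
    "health benefits of using community sport facilities regularly",
    "physical education facilities and student activity levels",
    "facility usage data analysis for sports participation",
    "quality of life and health promotion through sport facilities",
]

X = CountVectorizer(stop_words="english").fit_transform(docs)

# Compare candidate topic counts by model perplexity (lower is better),
# echoing the paper's determination of an optimal number of topics.
for k in (2, 3, 4):
    lda = LatentDirichletAllocation(n_components=k, random_state=0).fit(X)
    print(k, round(lda.perplexity(X), 1))
```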

Prediction of Correct Answer Rate and Identification of Significant Factors for CSAT English Test Based on Data Mining Techniques (데이터마이닝 기법을 활용한 대학수학능력시험 영어영역 정답률 예측 및 주요 요인 분석)

  • Park, Hee Jin;Jang, Kyoung Ye;Lee, Youn Ho;Kim, Woo Je;Kang, Pil Sung
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.4 no.11
    • /
    • pp.509-520
    • /
    • 2015
  • The College Scholastic Ability Test (CSAT) is the primary test for evaluating the academic achievement of high-school students and is used by most universities in South Korea for admission decisions. Because its level of difficulty is a significant issue for both students and universities, the government makes a huge effort to keep the difficulty level consistent every year. However, the actual levels of difficulty have fluctuated significantly, causing many problems with university admission. In this paper, unlike traditional methods that depend on experts' judgments, we build two types of data-driven prediction models to predict the correct answer rate and to identify significant factors for the CSAT English test from accumulated test data. First, we derive candidate question-specific factors that can influence the correct answer rate, such as position, EBS-relation, and readability, from 10 years of annual CSAT practice tests and CSATs. In addition, we derive context-specific factors by employing topic modeling, which identifies the underlying topics in the text. Then, the correct answer rate is predicted by multiple linear regression, and the level of difficulty is predicted by a classification tree. The experimental results show that the difficulty (difficult/easy) classification model achieves 90% accuracy, whereas the error rate for the correct answer rate is below 16%. Points and problem category are found to be critical for predicting the correct answer rate, which is also influenced by some of the topics discovered by topic modeling. Based on our study, it will be possible to predict the range of the expected correct answer rate at both the question level and the whole-test level, helping CSAT examiners control the level of difficulty.
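
A minimal sketch of the two prediction models under stated assumptions: toy question features (points, position, EBS-relation, readability) stand in for the paper's factors, with multiple linear regression for the correct answer rate and a classification tree for the difficult/easy label.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeClassifier

# Toy question features (illustrative assumptions, not the paper's data):
# columns = [points, position in test, EBS-relation flag, readability score].
X = np.array([
    [2, 1, 1, 60.0],
    [2, 5, 0, 55.0],
    [3, 10, 1, 48.0],
    [3, 15, 0, 42.0],
    [2, 20, 1, 50.0],
    [3, 25, 0, 38.0],
])
correct_rate = np.array([0.92, 0.85, 0.70, 0.55, 0.75, 0.45])
difficulty = (correct_rate < 0.6).astype(int)  # 1 = difficult, 0 = easy

# Correct answer rate via multiple linear regression (as in the paper).
reg = LinearRegression().fit(X, correct_rate)
print("predicted rate:", reg.predict([[3, 12, 1, 45.0]]))

# Difficult/easy label via a classification tree (as in the paper).
clf = DecisionTreeClassifier(random_state=0).fit(X, difficulty)
print("predicted difficulty:", clf.predict([[3, 12, 1, 45.0]]))
```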

A Study on the Analysis of Related Information through the Establishment of the National Core Technology Network: Focused on Display Technology (국가핵심기술 관계망 구축을 통한 연관정보 분석연구: 디스플레이 기술을 중심으로)

  • Pak, Se Hee;Yoon, Won Seok;Chang, Hang Bae
    • The Journal of Society for e-Business Studies
    • /
    • v.26 no.2
    • /
    • pp.123-141
    • /
    • 2021
  • As the dependence of the economic structure on technology increases, the importance of National Core Technology is growing. However, it is difficult to determine the scope of technology to be protected because the scope of related technologies is abstract and information disclosure is limited by the nature of National Core Technology. To solve this problem, we identify the literature type and analysis method best suited to distinguishing important technologies related to National Core Technology. We conducted a pilot test applying TF-IDF and LDA topic modeling, two text mining techniques for big data analysis, to four types of literature (news, papers, reports, patents) collected with National Core Technology keywords in the display industry. As a result, applying LDA topic modeling to patent data proved the most relevant to National Core Technology. Important technologies related to the upstream and downstream display industries, including OLED and micro-LED, were identified, and the results were visualized as networks to clarify the scope of important technologies associated with National Core Technology. Through this study, we clarify the ambiguity in the scope of associated technologies and overcome the limited information disclosure that characterizes National Core Technology.
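
A minimal sketch of the patent-based variant the paper found most relevant: LDA over toy patent abstracts, with the top topic terms linked into a small network of the kind that could be plotted to delimit related technologies. Data and parameters are illustrative assumptions.

```python
import networkx as nx
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Toy patent abstracts standing in for the display-technology corpus (illustrative).
patents = [
    "oled panel with thin film encapsulation layer for flexible display",
    "micro led transfer process for high resolution display modules",
    "driving circuit for oled pixel compensation and luminance control",
    "micro led bonding apparatus and mass transfer yield improvement",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(patents)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vec.get_feature_names_out()

# Link each topic to its top terms, giving a small topic-term network that
# can be visualized to show the scope of technologies tied to a core technology.
G = nx.Graph()
for k, weights in enumerate(lda.components_):
    for i in weights.argsort()[-4:]:
        G.add_edge(f"topic-{k}", terms[i])
print(sorted(G.edges()))
```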

An Overview of Information Processing Techniques for Structural Health Monitoring of Bridges (교량 건전성 모니터링을 위한 정보처리기법)

  • Lee, Jong-Jae;Park, Young-Soo;Yun, Chung-Bang;Koo, Ki-Young;Yi, Jin-Hak
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.21 no.6
    • /
    • pp.615-632
    • /
    • 2008
  • Bridge health monitoring has become an important research topic in conjunction with damage assessment and safety evaluation of structures, owing to improvements in structural modeling techniques incorporating response measurements and advancements in signal analysis and information processing capabilities. Bridge monitoring systems are generally composed of hardware such as sensors, data acquisition equipment, and data transmission systems, and software such as signal processing, damage assessment, and display and management modules. In this paper, research and development (R&D) activities on information processing for structural health monitoring of bridges are reviewed. After a brief introduction to the bridge health monitoring process, various information processing techniques, including signal processing and damage detection algorithms, are introduced in detail. Several challenges addressing critical issues in current bridge health monitoring systems and future R&D activities are discussed.
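
As one concrete example of the signal-processing techniques such a review covers, the sketch below estimates a natural frequency from a synthetic acceleration record by peak-picking the FFT spectrum; a shift in identified modal frequencies is a classical damage indicator. All values are assumed for illustration.

```python
import numpy as np

# Synthetic acceleration record: one modal component plus measurement noise.
fs = 100.0                      # sampling rate [Hz] (assumed)
t = np.arange(0, 20, 1 / fs)
f_natural = 2.5                 # true modal frequency of the toy signal [Hz]
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * f_natural * t) + 0.3 * rng.standard_normal(t.size)

# Peak-picking on the amplitude spectrum identifies the dominant frequency;
# tracking this value over time is one simple damage-detection strategy.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, 1 / fs)
print("identified frequency [Hz]:", freqs[spectrum.argmax()])
```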

Topic Automatic Extraction Model based on Unstructured Security Intelligence Report (비정형 보안 인텔리전스 보고서 기반 토픽 자동 추출 모델)

  • Hur, YunA;Lee, Chanhee;Kim, Gyeongmin;Lim, HeuiSeok
    • Journal of the Korea Convergence Society
    • /
    • v.10 no.6
    • /
    • pp.33-39
    • /
    • 2019
  • As cyber attack methods become more intelligent, incidents such as security breaches and international crimes are increasing. In order to predict and respond to these cyber attacks, the characteristics, methods, and types of attack techniques should be identified. To this end, many security companies publish security intelligence reports to quickly identify various attack patterns and prevent further damage. However, the reports that each company distributes are unstructured, and the number of published intelligence reports is ever-increasing. In this paper, we propose a method to extract structured data from unstructured security intelligence reports. We also propose an automatic intelligence report analysis system that divides a large volume of reports into sub-groups based on their topics, making the report analysis process more effective and efficient.
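
A minimal sketch of the sub-grouping step, assuming toy report snippets: LDA assigns each report a topic distribution, and the dominant topic defines its sub-group. The paper's actual model and corpus are not reproduced here.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Toy report snippets standing in for real intelligence reports (illustrative).
reports = [
    "phishing email campaign targets banking credentials",
    "ransomware encrypts files and demands bitcoin payment",
    "credential harvesting site mimics a bank login page",
    "new ransomware strain spreads through remote desktop",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(reports)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Assign each report to its dominant topic, yielding topic-based sub-groups.
for report, dist in zip(reports, lda.transform(X)):
    print(f"group {dist.argmax()}: {report}")
```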

Airline Service Quality Evaluation Based on Customer Review Using Machine Learning Approach and Sentiment Analysis (머신러닝과 감성분석을 활용한 고객 리뷰 기반 항공 서비스 품질 평가)

  • Jeon, Woojin;Lee, Yebin;Geum, Youngjung
    • The Journal of Society for e-Business Studies
    • /
    • v.26 no.4
    • /
    • pp.15-36
    • /
    • 2021
  • The airline industry faces significant competition due to technological innovation and diversified customer needs. Therefore, continuous quality management is essential to gain competitive advantage. For this reason, there have been various studies measuring and managing service quality using customer reviews. However, previous studies have focused on measuring customer satisfaction only, neglecting systematic management of the gap between customer expectations and perceptions based on customer reviews. In response, this study suggests a framework to identify relevant criteria for service quality management, measure their importance, and assess customer perception based on customer reviews. Machine learning techniques, topic models, and sentiment analysis are used for this study. This study can serve as an important strategic tool for evaluating service quality: it identifies the factors important to airline customer satisfaction while presenting a framework for assessing each airline's current service level.
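
A minimal sketch of the framework's core idea under stated assumptions: topics act as service-quality criteria, and a tiny sentiment lexicon (a deliberately simplified stand-in for the paper's sentiment analysis) scores customer perception per criterion.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Toy reviews standing in for scraped airline customer reviews (illustrative).
reviews = [
    "the seat was comfortable and legroom was great",
    "seat too cramped and legroom terrible on this flight",
    "crew service was friendly and the meal was great",
    "rude crew and terrible meal service",
]

# A tiny sentiment lexicon (assumed; the paper's sentiment analysis is richer).
POSITIVE, NEGATIVE = {"comfortable", "great", "friendly"}, {"cramped", "terrible", "rude"}

def polarity(text):
    words = set(text.split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

# Topics act as service-quality criteria; mean polarity per topic approximates
# customer perception of each criterion.
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(reviews)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
scores = {}
for review, dist in zip(reviews, lda.transform(X)):
    scores.setdefault(int(dist.argmax()), []).append(polarity(review))
for topic, vals in sorted(scores.items()):
    print(f"criterion {topic}: mean sentiment {sum(vals) / len(vals):+.2f}")
```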