
A Comparative Study of Text analysis and Network embedding Methods for Effective Fake News Detection (효과적인 가짜 뉴스 탐지를 위한 텍스트 분석과 네트워크 임베딩 방법의 비교 연구)

  • Park, Sung Soo;Lee, Kun Chang
    • Journal of Digital Convergence / v.17 no.5 / pp.137-143 / 2019
  • Fake news is a form of misinformation that spreads rapidly on interactive media platforms such as social media. The recent increase in fake news has caused many social problems. In this paper, we propose a method to detect such fake news. Previous research on fake news detection has focused mainly on text analysis. This research focuses on the network through which social media news spreads: it generates features with DeepWalk, a network embedding method, and classifies fake news using logistic regression. We conducted a fake news detection experiment using 211 news items from the Internet and 1.2 million records of news diffusion network data. The results show that the accuracy of fake news detection using network embedding is 10.6% higher than that of text analysis. In addition, fake news detection that combines text analysis and network embedding does not show an increase in accuracy over network embedding alone. The results of this study can be effectively applied to the detection of fake news that is spread online in an organized way.
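The DeepWalk step described in this abstract starts from truncated random walks over the diffusion network, which are then fed to a skip-gram model to learn node embeddings. A minimal sketch of the walk-generation step is shown below; the toy graph and parameters are illustrative, not the paper's actual data:

```python
import random

def deepwalk_walks(graph, num_walks=10, walk_length=5, seed=42):
    """Generate truncated random walks over a diffusion graph.

    graph: dict mapping each node to a list of neighbours.
    Returns a list of walks (lists of nodes); in DeepWalk these walks
    are treated as "sentences" for a skip-gram embedding model.
    """
    rng = random.Random(seed)
    walks = []
    nodes = list(graph)
    for _ in range(num_walks):
        rng.shuffle(nodes)          # start walks from nodes in random order
        for start in nodes:
            walk = [start]
            while len(walk) < walk_length:
                neighbours = graph[walk[-1]]
                if not neighbours:  # dead end: stop the walk early
                    break
                walk.append(rng.choice(neighbours))
            walks.append(walk)
    return walks

# Toy diffusion network: news item "n1" shared among users u1..u3.
toy_graph = {
    "n1": ["u1", "u2"],
    "u1": ["n1", "u3"],
    "u2": ["n1"],
    "u3": ["u1"],
}
walks = deepwalk_walks(toy_graph, num_walks=2, walk_length=4)
```

The resulting per-node vectors would then serve as the features for the logistic regression classifier the abstract mentions.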

Major concerns regarding food services based on news media reports during the COVID-19 outbreak using the topic modeling approach

  • Yoon, Hyejin;Kim, Taejin;Kim, Chang-Sik;Kim, Namgyu
    • Nutrition Research and Practice / v.15 no.sup1 / pp.110-121 / 2021
  • BACKGROUND/OBJECTIVES: Coronavirus disease 2019 (COVID-19) cases were first reported in December 2019, in China, and an increasing number of cases have since been detected all over the world. The purpose of this study was to collect significant news media reports on food services during the COVID-19 crisis and identify public communication and significant concerns regarding COVID-19 for suggesting future directions for the food industry and services. SUBJECTS/METHODS: News articles pertaining to food services were extracted from the home pages of major news media websites such as BBC, CNN, and Fox News between March 2020 and February 2021. The retrieved data was sorted and analyzed using Python software. RESULTS: The results of text analytics were presented in the format of the topic label and category for individual topics. The food and health category presented the effects of the COVID-19 pandemic on food and health, such as an increase in delivery services. The policy category was indicative of a change in government policy. The lifestyle change category addressed topics such as an increase in social media usage. CONCLUSIONS: This study is the first to analyze major news media (i.e., BBC, CNN, and Fox News) data related to food services in the context of the COVID-19 pandemic. Text analytics research on the food services domain revealed different categories such as food and health, policy, and lifestyle change. Therefore, this study contributes to the body of knowledge on food services research, through the use of text analytics to elicit findings from media sources.
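The grouping of topics into categories such as "food and health", "policy", and "lifestyle change" can be sketched as keyword matching over article text. Note this is only an illustration: the study derived its categories from topic modeling, and the keyword sets below are invented for the example:

```python
from collections import Counter

# Illustrative keyword sets -- the study's actual categories came from
# topic modeling, not hand-written rules like these.
CATEGORY_KEYWORDS = {
    "food and health": {"delivery", "restaurant", "nutrition", "takeout"},
    "policy": {"lockdown", "government", "regulation", "reopening"},
    "lifestyle change": {"home", "cooking", "streaming", "remote"},
}

def categorize(article_text):
    """Count keyword hits per category and return the best-scoring one."""
    words = set(article_text.lower().split())
    scores = Counter({cat: len(words & kws)
                      for cat, kws in CATEGORY_KEYWORDS.items()})
    best, hits = scores.most_common(1)[0]
    return best if hits > 0 else "uncategorized"

label = categorize("Government lockdown rules forced restaurant reopening plans")
```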

Policy agenda proposals from text mining analysis of patents and news articles (특허 및 뉴스 기사 텍스트 마이닝을 활용한 정책의제 제안)

  • Lee, Sae-Mi;Hong, Soon-Goo
    • Journal of Digital Convergence / v.18 no.3 / pp.1-12 / 2020
  • The purpose of this study is to explore trends in blockchain technology through text mining analysis of patents and news articles, and to suggest a blockchain policy agenda by identifying social interests. For this purpose, 327 blockchain-related patent abstracts in Korea and 5,941 full-text online news articles were collected and preprocessed. Twelve patent topics and 19 news topics were extracted with latent Dirichlet allocation (LDA) topic modeling. Analysis of the patents showed that topics related to authentication and transactions were largely predominant. Analysis of the news articles showed that social interest is mainly concerned with cryptocurrency. Policy agendas were then derived for blockchain development. This study demonstrates the efficient and objective use of an automated technique for the analysis of large text collections. In addition, the specific policy agendas proposed in this study can inform future policy-making processes.

A Study on Effective Sentiment Analysis through News Classification in Bankruptcy Prediction Model (부도예측 모형에서 뉴스 분류를 통한 효과적인 감성분석에 관한 연구)

  • Kim, Chansong;Shin, Minsoo
    • Journal of Information Technology Services / v.18 no.1 / pp.187-200 / 2019
  • Bankruptcy prediction models have consistently attracted interest in various fields. Recently, as technology for handling unstructured data has developed, research applying text mining to business prediction has become active, and such studies are also increasing in bankruptcy prediction. In particular, researchers are actively trying to improve bankruptcy prediction by analyzing news data covering a corporation's external environment. However, little is known about which of the news produced continuously and in large volume is actually effective for bankruptcy prediction. The purpose of this study was to identify high-impact news for bankruptcy prediction. We therefore classified news by type and collection period, and analyzed its impact on bankruptcy prediction based on sentiment analysis. As a result, the artificial neural network was the most effective of the algorithms used, and commentary-type news was the most effective for bankruptcy prediction. Column- and straight-type news were also significant, but photo-type news was not. Among the collection periods, news from the four months before bankruptcy was most effective. Based on these results, we propose a news classification method for sentiment analysis that is effective for bankruptcy prediction models.
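The sentiment analysis step this abstract builds on can be sketched as lexicon-based scoring of news text; the polarity lexicon below is a hypothetical stand-in, since real bankruptcy studies build domain dictionaries from financial news and then feed the scores into a classifier such as the neural network mentioned above:

```python
# Hypothetical polarity lexicon -- illustrative only.
LEXICON = {"growth": 1, "profit": 1, "recovery": 1,
           "loss": -1, "default": -1, "lawsuit": -1}

def sentiment_score(text):
    """Average polarity of lexicon words found in the text
    (0.0 when no lexicon word appears)."""
    hits = [LEXICON[w] for w in text.lower().split() if w in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

score = sentiment_score("Quarterly loss and a default warning after the lawsuit")
```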

Keyword Extraction from News Corpus using Modified TF-IDF (TF-IDF의 변형을 이용한 전자뉴스에서의 키워드 추출 기법)

  • Lee, Sung-Jick;Kim, Han-Joon
    • The Journal of Society for e-Business Studies / v.14 no.4 / pp.59-73 / 2009
  • Keyword extraction is an important and essential technique for text mining applications such as information retrieval, text categorization, summarization, and topic detection. A set of keywords extracted from a large-scale electronic document collection serves as significant features for text mining algorithms and contributes to improving the performance of document browsing, topic detection, and automated text classification. This paper presents a keyword extraction technique that can be used to detect topics for each news domain in a large document collection from internet news portal sites. Basically, we used six variants of the traditional TF-IDF weighting model. On top of the TF-IDF model, we propose a word filtering technique called 'cross-domain comparison filtering'. To prove the effectiveness of our method, we analyzed the usefulness of keywords extracted from Korean news articles and present how the keywords of each news domain change over time.
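The baseline that the six variants build on is standard TF-IDF weighting, tf(t, d) × log(N / df(t)). A minimal sketch of that baseline (the paper's specific variants and the cross-domain filter are not reproduced here):

```python
import math
from collections import Counter

def tf_idf(docs):
    """Standard TF-IDF: tf(t, d) * log(N / df(t)).

    docs: list of token lists.
    Returns one {term: weight} dict per document.
    """
    n = len(docs)
    # Document frequency: number of documents each term appears in.
    df = Counter(t for doc in docs for t in set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return weights

docs = [["market", "stocks", "market"],
        ["stocks", "election"],
        ["election", "vote"]]
w = tf_idf(docs)
```

Terms concentrated in one document ("market", "vote") get higher weights than terms spread across documents ("stocks", "election"), which is the property keyword extraction relies on.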


Grammatical Structure Oriented Automated Approach for Surface Knowledge Extraction from Open Domain Unstructured Text

  • Tissera, Muditha;Weerasinghe, Ruvan
    • Journal of information and communication convergence engineering / v.20 no.2 / pp.113-124 / 2022
  • News in the form of web data generates increasingly large amounts of information as unstructured text. The capability of understanding the meaning of news is limited to humans; thus, it causes information overload. This hinders the effective use of the knowledge embedded in such texts. Therefore, Automatic Knowledge Extraction (AKE) has become an integral part of the Semantic Web and Natural Language Processing (NLP). Although recent literature shows that AKE has progressed, the results are still behind expectations. This study proposes a method to automatically extract surface knowledge from English news into a machine-interpretable semantic format (triples). The proposed technique was designed using the grammatical structure of the sentence, and 11 original rules were discovered. The initial experiment extracted triples from a Sri Lankan news corpus, of which 83.5% were meaningful. The experiment was extended to the British Broadcasting Corporation (BBC) news dataset to prove its generic nature, and demonstrated a higher meaningful-triple extraction rate of 92.6%. These results were validated using the inter-rater agreement method, which confirmed their high reliability.
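The kind of subject-verb-object triple this abstract targets can be illustrated with a toy pattern over simple active-voice sentences. This is only a stand-in for the paper's 11 grammar-based rules, which operate on a real grammatical analysis rather than a regular expression:

```python
import re

def naive_svo_triple(sentence):
    """Extract a (subject, verb, object) triple from a simple
    active-voice sentence of the form '[The] X <verb-s/-ed> [the] Y.'
    A toy stand-in for grammar-rule-based triple extraction.
    """
    m = re.match(r"^(The |A |An )?(?P<subj>\w+(?: \w+)?) "
                 r"(?P<verb>\w+(?:s|ed)) "
                 r"(?:the |a |an )?(?P<obj>.+?)\.?$", sentence)
    if not m:
        return None
    return (m.group("subj"), m.group("verb"), m.group("obj"))

triple = naive_svo_triple("The president visited the northern province.")
```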

A Study of Main Contents Extraction from Web News Pages based on XPath Analysis

  • Sun, Bok-Keun
    • Journal of the Korea Society of Computer and Information / v.20 no.7 / pp.1-7 / 2015
  • Data on the internet can be used in various fields, such as a source of data for information retrieval (IR), data mining, and knowledge-based information services, but it also contains much unnecessary information. Removing this unnecessary data is a problem that must be solved before studying knowledge-based information services built on web page data; in this paper, we solve it through the implementation of XTractor (XPath Extractor). Since XPath is used to navigate the elements and attribute data in an XML document, the XPath analysis is carried out through XTractor. XTractor extracts the main text by HTML parsing, XPath grouping, and detecting the XPath that contains the main data. As a result, the recognition and precision rates were 97.9% and 93.9%, respectively, except for a few cases in a large amount of experimental data, confirming that the main text of news pages can be properly extracted.
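The XPath-grouping idea, which collects text nodes under each element path and picks the path holding the most content, can be sketched with the standard library's XML parser. This simplified version assumes well-formed markup; the actual XTractor parses arbitrary HTML:

```python
import xml.etree.ElementTree as ET
from collections import defaultdict

def main_text_by_path(markup):
    """Group text by element path and return the text under the path
    holding the most characters -- a simplified XPath grouping."""
    root = ET.fromstring(markup)
    texts = defaultdict(list)

    def walk(node, path):
        p = f"{path}/{node.tag}"
        if node.text and node.text.strip():
            texts[p].append(node.text.strip())
        for child in node:
            walk(child, p)

    walk(root, "")
    # Navigation links are short and scattered; article paragraphs
    # accumulate the most text under a single path.
    best = max(texts, key=lambda p: sum(len(t) for t in texts[p]))
    return " ".join(texts[best])

page = ("<html><body>"
        "<div><a>Menu</a><a>Login</a></div>"
        "<article><p>The council approved the new budget today.</p>"
        "<p>Spending will rise by three percent next year.</p></article>"
        "</body></html>")
body = main_text_by_path(page)
```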

Text Mining-based Fake News Detection Using News And Social Media Data (뉴스와 소셜 데이터를 활용한 텍스트 기반 가짜 뉴스 탐지 방법론)

  • Hyun, Yoonjin;Kim, Namgyu
    • The Journal of Society for e-Business Studies / v.23 no.4 / pp.19-39 / 2018
  • Recently, fake news has attracted worldwide attention regardless of field. The Hyundai Research Institute estimated that the damage caused by fake news reaches about 30.9 trillion won per year. The government is making efforts to develop artificial intelligence source technology to detect fake news, such as holding an "artificial intelligence R&D challenge" competition on the theme of "searching for fake news." Fact-checking services are also being provided in various private-sector fields. In academia as well, many attempts have been made to detect fake news, including expert-based, collective-intelligence-based, artificial-intelligence-based, and semantic-based approaches. However, the more elaborately fake news is manipulated, the more difficult it is to identify its authenticity by analyzing the news itself. Furthermore, the accuracy of most fake news detection models tends to be overestimated. Therefore, in this study, we first propose a method to secure fairness in measuring the accuracy of fake news detection models. Second, we propose a method to identify the authenticity of news using not only the contents of the news but also the social data broadly generated in reaction to it.

Machine Learning Method in Medical Education: Focusing on Research Case of Press Frame on Asbestos (의학교육에서 기계학습방법 교육: 석면 언론 프레임 연구사례를 중심으로)

  • Kim, Junhewk;Heo, So-Yun;Kang, Shin-Ik;Kim, Geon-Il;Kang, Dongmug
    • Korean Medical Education Review / v.19 no.3 / pp.158-168 / 2017
  • There is an increasingly urgent call for teaching machine learning in medical education, and therefore new approaches to teaching and researching machine learning in medicine are needed. This paper presents a case of using machine learning through text analysis: topic modeling of news articles containing the keyword 'asbestos'. Two hypotheses were tested using this method, and the process of machine learning on texts is illustrated through this example. Using an automated text analysis method, all news articles published in South Korea from January 1, 1990 to November 15, 2016 that included 'asbestos' in the title or body were collected by web scraping. Differences in topics were analyzed by structural topic modeling (STM) and compared across press companies and periods. More articles were found in liberal media outlets. The number and types of topics in the articles differed according to partisanship and period. STM showed that the conservative press views asbestos as a personal problem, while the progressive press views it as a social problem. A divergence between the conservative and progressive press in which issues of asbestos are emphasized was also found. Social perspective influences the main topics of news stories; thus, patients' anxiety and pain are presented by neither side of the media. In addition, topics differ between news media sources based on partisanship, causing divergence in readers' framing. The method of text analysis and its strengths and weaknesses are explained, and an application of this methodology to the teaching and researching of machine learning in medical education is considered. Educational methods for machine learning in medical education are urgently needed for future generations.

Applying Text Mining to Identify Factors Which Affect Likes and Dislikes of Online News Comments (텍스트마이닝을 통한 댓글의 공감도 및 비공감도에 영향을 미치는 댓글의 특성 연구)

  • Kim, Jeonghun;Song, Yeongeun;Jin, Yunseon;kwon, Ohbyung
    • Journal of Information Technology Services / v.14 no.2 / pp.159-176 / 2015
  • As a public medium and one of the big data sources accumulated informally and in real time, online news comments (replies) are considered a significant resource for understanding the mentality of article readers. The comments are also regarded as an important medium of word of mouth (WOM) about products, services, and enterprises. If the diffusion effect of comments is viewed, from a WOM perspective, as the degree of agreement or disagreement they receive, then identifying at a very early stage which characteristics of a comment influence agreement or disagreement would be very worthwhile for establishing a comment-based eWOM (electronic WOM) strategy. However, the effects of comment characteristics on eWOM have rarely been studied. Accordingly, this study conducts an empirical analysis of the comment characteristics that affect the numbers of agreements and disagreements, as eWOM performance, for news articles that address a specific product, service, or enterprise. While the extant literature has focused on quantitative attributes of comments collected manually, this paper uses text mining techniques to acquire qualitative attributes of the comments in an automatic and cost-effective manner.
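The automatic extraction of per-comment attributes that this abstract describes can be sketched as a small feature function; the features below are illustrative of the kind a text mining pipeline might compute, not the paper's actual attribute set:

```python
def comment_features(comment):
    """Compute simple per-comment attributes of the kind that could be
    fed into a model of agreement/disagreement counts.
    (Illustrative features, not the paper's actual set.)"""
    words = comment.split()
    return {
        "length": len(comment),
        "word_count": len(words),
        "exclamations": comment.count("!"),
        "questions": comment.count("?"),
        "avg_word_len": (sum(len(w) for w in words) / len(words)
                         if words else 0.0),
    }

feats = comment_features("Totally agree!! Why would anyone buy this?")
```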