• Title/Summary/Keyword: News Article Analysis


A study on trends and predictions through analysis of linkage analysis based on big data between autonomous driving and spatial information (자율주행과 공간정보의 빅데이터 기반 연계성 분석을 통한 동향 및 예측에 관한 연구)

  • Cho, Kuk;Lee, Jong-Min;Kim, Jong Seo;Min, Guy Sik
    • Journal of Cadastre & Land InformatiX
    • /
    • v.50 no.2
    • /
    • pp.101-115
    • /
    • 2020
  • In this paper, a big data analysis method was used to identify global trends in autonomous driving and to derive ways to activate spatial information services. The big data combined news articles and patent documents in order to analyze trends in both sources within the spatial information domain. Keywords were extracted from major news on autonomous driving using LDA (Latent Dirichlet Allocation), a topic-model-based method. In addition, analysis of the connectivity between spatial information and autonomous driving, global technology trend analysis, and trend analysis and prediction in the spatial information field were conducted using WordNet applied to the keywords of patent information. This paper proposes a big data analysis method for predicting trends and the future through analysis of the connection between the autonomous driving field and spatial information. As future global trends of spatial information in autonomous driving, platform alliances, business partnerships, mergers and acquisitions, joint venture establishment, standardization, and technology development were derived through big data analysis.
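The abstract above extracts news keywords with LDA. As a rough illustration of how LDA assigns words to topics, here is a minimal collapsed Gibbs sampler in pure Python; the toy corpus, topic count, and hyperparameters are illustrative and not taken from the paper.

```python
import random

def lda_gibbs(docs, n_topics=2, n_iter=200, alpha=0.1, beta=0.01, seed=0):
    """Minimal collapsed Gibbs sampler for LDA over tokenized documents."""
    random.seed(seed)
    vocab = sorted({w for d in docs for w in d})
    w2i = {w: i for i, w in enumerate(vocab)}
    V = len(vocab)
    ndk = [[0] * n_topics for _ in docs]      # doc-topic counts
    nkw = [[0] * V for _ in range(n_topics)]  # topic-word counts
    nk = [0] * n_topics                       # tokens per topic
    z = []                                    # topic assignment per token
    for d, doc in enumerate(docs):
        zd = []
        for w in doc:
            t = random.randrange(n_topics)
            zd.append(t)
            ndk[d][t] += 1; nkw[t][w2i[w]] += 1; nk[t] += 1
        z.append(zd)
    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                t = z[d][i]
                ndk[d][t] -= 1; nkw[t][w2i[w]] -= 1; nk[t] -= 1
                # full conditional p(topic | everything else)
                weights = [(ndk[d][k] + alpha) * (nkw[k][w2i[w]] + beta)
                           / (nk[k] + V * beta) for k in range(n_topics)]
                r = random.random() * sum(weights)
                acc = 0.0
                for k, wt in enumerate(weights):
                    acc += wt
                    if r <= acc:
                        t = k
                        break
                z[d][i] = t
                ndk[d][t] += 1; nkw[t][w2i[w]] += 1; nk[t] += 1
    def top_words(k, n=3):
        return [vocab[i] for i in sorted(range(V), key=lambda i: -nkw[k][i])[:n]]
    return [top_words(k) for k in range(n_topics)]

# Toy "news" corpus with two themes: autonomous driving vs. spatial information.
docs = [["lidar", "sensor", "driving"], ["driving", "sensor", "lidar"],
        ["map", "gis", "spatial"], ["spatial", "map", "gis"]]
topics = lda_gibbs(docs)
```

After sampling, the highest-count words per topic serve as the extracted keywords, analogous to the paper's keyword extraction step.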

Occupational Therapy in Long-Term Care Insurance For the Elderly Using Text Mining (텍스트 마이닝을 활용한 노인장기요양보험에서의 작업치료: 2007-2018년)

  • Cho, Min Seok;Baek, Soon Hyung;Park, Eom-Ji;Park, Soo Hee
    • Journal of Society of Occupational Therapy for the Aged and Dementia
    • /
    • v.12 no.2
    • /
    • pp.67-74
    • /
    • 2018
  • Objective : The purpose of this study is to quantitatively analyze the role of occupational therapy in long-term care insurance for the elderly using text mining, one of the big data analysis techniques. Method : Newspaper articles matching "long-term care insurance for the elderly + occupational therapy" were collected for the period from 2007 to 2018. The database of Naver News, operated by Naver, which holds a high share among domestic search engines, was queried using Textom, a web crawling tool. After collecting the titles and full texts of the 510 news items returned by the search, article frequency and key words were analyzed by year. Result : By year, the number of published articles was highest in 2015 and 2017, at 70 articles (13.7%), and among the top 10 terms of the keyword analysis, 'dementia' showed the highest frequency (344). The related key words include dementia, treatment, hospital, health, service, rehabilitation, facilities, institution, grade, elderly, professional, salary, industrial complex, and people. Conclusion : This study is meaningful in that text mining was used to more objectively confirm, from 11 years of media reporting on long-term care insurance for the elderly, the social needs and the role of the occupational therapist with respect to dementia and rehabilitation as reflected in the related key words. Based on these results, future research should expand the field and period of study and supplement the methodology with various analysis methods by year.
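The counting steps this abstract describes, articles per year and top key words, can be sketched with the standard library alone; the article records below are invented examples, not data from the study.

```python
from collections import Counter

def yearly_frequency(articles):
    """Count collected articles per publication year."""
    return Counter(a["year"] for a in articles)

def top_keywords(articles, n=3):
    """Rank terms by raw frequency across collected article texts."""
    words = Counter()
    for a in articles:
        words.update(a["text"].lower().split())
    return words.most_common(n)

# Hypothetical collected records (year + crawled text), for illustration only.
articles = [
    {"year": 2015, "text": "dementia care occupational therapy"},
    {"year": 2015, "text": "dementia rehabilitation service"},
    {"year": 2017, "text": "dementia grade hospital"},
]
```

A real pipeline would tokenize Korean text with a morphological analyzer rather than whitespace splitting, but the frequency logic is the same.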

Linking Findings from Text Analyses to Online Sales Strategies (온라인상의 기업 및 소비자 텍스트 분석과 이를 활용한 온라인 매출 증진 전략)

  • Kim, Jeeyeon;Jo, Wooyong;Choi, Jeonghye;Chung, Yerim
    • Journal of the Korean Operations Research and Management Science Society
    • /
    • v.41 no.2
    • /
    • pp.81-100
    • /
    • 2016
  • Much effort has been exerted to analyze online texts and understand how the empirical results can help improve sales performance. In this research, we extend this stream of work by decomposing online texts by source, namely, companies and consumers. Specifically, we investigate how online texts produced by companies differ from those generated by consumers, and the extent to which the two types of texts affect online sales differently. We obtained sales data from one of the biggest game publishers and merged them with company-driven online texts from news articles and consumer-generated texts from user communities. The empirical analyses yield the following findings. Word visualization and topic analyses show that firms and consumers generate different content: companies spread the word to promote their own events, whereas consumers write to share winning strategies. Moreover, online sales are influenced by consumer-generated community topics, whereas firm-driven topics in news articles have little to no effect. These findings suggest that companies should focus more on online texts generated by consumers than on spreading their own words, and that online sales strategies should exploit the specific topics shown to increase online sales. In particular, these findings give startup companies and small business owners in a variety of industries an advantage when they use the online channel for distribution and as a marketing platform.
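A simple way to surface the firm-versus-consumer vocabulary contrast the abstract reports is smoothed log-odds between the two corpora; the sketch below uses tiny invented corpora and is not the paper's method, which relied on topic models.

```python
from collections import Counter
import math

def distinctive_terms(corpus_a, corpus_b, n=2):
    """Rank terms by smoothed log-odds of appearing in corpus A vs corpus B."""
    ca, cb = Counter(), Counter()
    for doc in corpus_a:
        ca.update(doc.lower().split())
    for doc in corpus_b:
        cb.update(doc.lower().split())
    ta, tb = sum(ca.values()), sum(cb.values())
    vocab = set(ca) | set(cb)
    def log_odds(w):
        pa = (ca[w] + 1) / (ta + len(vocab))  # add-one smoothing
        pb = (cb[w] + 1) / (tb + len(vocab))
        return math.log(pa / pb)
    return sorted(vocab, key=log_odds, reverse=True)[:n]

# Invented examples of firm-driven news vs. consumer community posts.
firm_news = ["new event promotion launch", "event promotion update"]
community = ["winning strategy guide", "strategy tips guide"]
```

Terms with high log-odds for the community corpus (e.g. strategy talk) would be the ones the paper found predictive of sales.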

A Study on Automatic Classification of Newspaper Articles Based on Unsupervised Learning by Departments (비지도학습 기반의 행정부서별 신문기사 자동분류 연구)

  • Kim, Hyun-Jong;Ryu, Seung-Eui;Lee, Chul-Ho;Nam, Kwang Woo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.21 no.9
    • /
    • pp.345-351
    • /
    • 2020
  • Administrative agencies today are paying keen attention to big data analysis to improve their policy responsiveness. Among big data sources, news articles can be used to understand public opinion on policies and policy issues. The volume of news has increased rapidly with the emergence of new online media outlets, which calls for the use of automated bots or automatic document classification tools. There are, however, limits to automatically collecting news articles related to specific agencies or departments using existing news categories and keyword search queries. This paper therefore proposes a method of classifying articles using glossaries that reflect each department's distinct work features. To this end, classification glossaries were built by extracting the work features of different departments from agency-related news articles using Word2Vec and topic modeling techniques. The resulting automatic classification of newspaper articles by department achieved approximately 71% accuracy. This study makes academic and practical contributions by presenting a method of extracting the work features of each department and an unsupervised learning-based method for automatically classifying news articles relevant to each agency.
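Once department glossaries exist, classification can amount to comparing article words with glossary terms in embedding space. The sketch below uses hand-made 2-d vectors as stand-ins for trained Word2Vec embeddings; the departments, glossaries, and vectors are all hypothetical.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def classify(article_words, glossaries, vectors):
    """Assign an article to the department whose glossary terms are, on
    average, most similar to the article's words in embedding space."""
    def score(dept):
        sims = [cosine(vectors[w], vectors[g])
                for w in article_words for g in glossaries[dept]
                if w in vectors and g in vectors]
        return sum(sims) / len(sims) if sims else 0.0
    return max(glossaries, key=score)

# Hypothetical 2-d embeddings standing in for trained Word2Vec vectors.
vectors = {"road": (1.0, 0.1), "traffic": (0.9, 0.2), "bridge": (0.95, 0.15),
           "park": (0.1, 1.0), "tree": (0.2, 0.9), "garden": (0.15, 0.95)}
glossaries = {"transport": ["road", "traffic"], "greenspace": ["park", "tree"]}
```

An article mentioning "bridge" lands in the hypothetical transport department because its vector sits near the transport glossary terms.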

Research Analysis in Automatic Fake News Detection (자동화기반의 가짜 뉴스 탐지를 위한 연구 분석)

  • Jwa, Hee-Jung;Oh, Dong-Suk;Lim, Heui-Seok
    • Journal of the Korea Convergence Society
    • /
    • v.10 no.7
    • /
    • pp.15-21
    • /
    • 2019
  • Research in detecting fake information gained considerable interest after the 2016 US presidential election. Information from unknown sources is produced in the form of news, and its rapid spread is fueled by public interest in stimulating and sensational issues. In addition, the wide use of mass communication platforms such as social network services makes this phenomenon worse. The Poynter Institute created the International Fact Checking Network (IFCN) to provide guidelines for fact checking by skilled professionals and released a "Code of Ethics" for fact-checking agencies. However, this type of approach is costly because of the large number of experts required to verify the authenticity of each article. Research in automated fake news detection technology that can identify false articles efficiently is therefore gaining attention. In this paper, we survey fake news detection systems and research that are developing rapidly, mainly thanks to recent advances in deep learning. We also organize the shared tasks and training corpora that have been released in various forms, so that researchers can easily enter this field, which deserves substantial research effort.
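The simplest automated baseline in this space is a bag-of-words classifier over labeled articles. This toy Naive Bayes sketch is a generic baseline, not any specific system from the surveyed research, and the training texts are invented.

```python
from collections import Counter, defaultdict
import math

class NaiveBayes:
    """Toy bag-of-words Naive Bayes with add-one smoothing."""
    def fit(self, texts, labels):
        self.word_counts = defaultdict(Counter)
        self.label_counts = Counter(labels)
        for text, label in zip(texts, labels):
            self.word_counts[label].update(text.lower().split())
        self.vocab = {w for c in self.word_counts.values() for w in c}
        return self

    def predict(self, text):
        def log_prob(label):
            total = sum(self.word_counts[label].values())
            lp = math.log(self.label_counts[label] / sum(self.label_counts.values()))
            for w in text.lower().split():
                lp += math.log((self.word_counts[label][w] + 1)
                               / (total + len(self.vocab)))
            return lp
        return max(self.label_counts, key=log_prob)

# Invented headlines; real systems train on corpora such as the shared tasks
# the paper organizes.
texts = ["shocking miracle cure revealed", "you will not believe this miracle",
         "government publishes budget report", "official report on election results"]
labels = ["fake", "fake", "real", "real"]
model = NaiveBayes().fit(texts, labels)
```

Deep learning systems replace the word counts with learned representations but keep the same classify-by-evidence structure.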

Forecasting the Future Korean Society: A Big Data Analysis on 'Future Society'-related Keywords in News Articles and Academic Papers (빅데이터를 통해 본 한국사회의 미래: 언론사 뉴스기사와 사회과학 학술논문의 '미래사회' 관련 키워드 분석)

  • Kim, Mun-Cho;Lee, Wang-Won;Lee, Hye-Soo;Suh, Byung-Jo
    • Informatization Policy
    • /
    • v.25 no.4
    • /
    • pp.37-64
    • /
    • 2018
  • This study aims to forecast the future of Korean society via a big data analysis. From two databases - a collection of 46,000,000 news articles from 127 media outlets on the Naver portal operated by Naver Corporation, and a collection of 70,000 social science papers registered in the KCI (Korea Citation Index of the National Research Foundation) between 2005 and 2017 - the 40 most frequently occurring keywords were selected. Their temporal variations were then traced and compared in terms of frequency counts and patterns. In addition, core issues of the future were identified through keyword network analysis. In the media news database, issues such as economy, polity, and technology ranked highest; in the academic paper database, the top-ranking issues concerned feeling, working, and living. In terms of the system and life-world framework suggested by Jürgen Habermas, public interest in the future inclines toward the 'system' while professional interest leans toward the 'life-world.' Given this disparity of future interest, a 'mismatch paradigm' is proposed as an alternative approach to social forecasting that can substitute for the existing paradigms based on ideas of deficiency or deprivation.
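The keyword network analysis step can be sketched as a co-occurrence graph: keywords appearing in the same document are linked, and central keywords emerge as core issues. The documents and keywords below are illustrative, not from the study's databases.

```python
from collections import Counter
from itertools import combinations

def cooccurrence_network(documents):
    """Build a keyword co-occurrence network: one weighted edge for each
    pair of keywords appearing in the same document."""
    edges = Counter()
    for doc in documents:
        for a, b in combinations(sorted(set(doc)), 2):
            edges[(a, b)] += 1
    return edges

def degree_centrality(edges):
    """Weighted degree per node: sum of the weights of edges touching it."""
    deg = Counter()
    for (a, b), w in edges.items():
        deg[a] += w
        deg[b] += w
    return deg

# Toy keyword lists standing in for keyword-tagged news articles.
docs = [["economy", "technology", "future"],
        ["economy", "polity"],
        ["economy", "technology"]]
edges = cooccurrence_network(docs)
central = degree_centrality(edges)
```

The highest-degree nodes play the role of the "core issues" the study reports.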

Critical Discourse Analysis of '5.18' in 'Honam' and 'Yeongnam' Local Newspapers by Using Corpus (코퍼스를 이용한 '호남'과 '영남' 지역신문에서의 '5.18'에 대한 비판적 담화분석)

  • Lee, Sukeui;Jin, Duhyeon
    • Korean Linguistics
    • /
    • v.76
    • /
    • pp.83-112
    • /
    • 2017
  • In this paper, newspaper articles were collected through '5.18' keyword searches and a news corpus was constructed from the collected data. The articles of local newspapers in the 'Honam' and 'Yeongnam' regions were examined for ideological differences regarding '5.18', and those differences in newspaper discourse were analyzed through objective figures. The subjects of the newspaper articles and the frequencies of nouns and predicates were analyzed, and the use and meaning of intentionally chosen vocabulary were examined. Analysis of article titles showed that the discourse in 'Honam' papers emphasized the need for a re-recognition of 5.18. In both regions the word 'Gwangju' is used often; however, in the 'Honam' newspapers 'Gwangju' denotes a spiritual space rather than a physical one. Honam regional newspapers also contain many words describing the events themselves, such as 'shoot' and 'fire', which call for recollection and memory of '5.18'. Comparative analysis between local newspapers has been scarce in newspaper discourse research, and this study was conducted to analyze the discourse across local newspapers.
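Corpus-based discourse analysis of this kind typically inspects keywords in context (KWIC concordances) alongside raw frequencies. A minimal concordance function is sketched below; the example sentence is invented, not a quotation from either newspaper corpus.

```python
def kwic(tokens, keyword, window=2):
    """Keyword-in-context lines: the keyword plus `window` tokens on each side."""
    lines = []
    for i, tok in enumerate(tokens):
        if tok == keyword:
            left = tokens[max(0, i - window):i]
            right = tokens[i + 1:i + 1 + window]
            lines.append((" ".join(left), tok, " ".join(right)))
    return lines

# Invented English example; the study's corpus is Korean newspaper text.
corpus = ("soldiers began to shoot at the crowd and fire spread "
          "through gwangju while witnesses recall the fire").split()
```

Reading the left and right contexts of words like 'shoot' or 'fire' is how an analyst distinguishes, say, physical from commemorative uses of 'Gwangju'.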

Method of Extracting the Topic Sentence Considering Sentence Importance based on ELMo Embedding (ELMo 임베딩 기반 문장 중요도를 고려한 중심 문장 추출 방법)

  • Kim, Eun Hee;Lim, Myung Jin;Shin, Ju Hyun
    • Smart Media Journal
    • /
    • v.10 no.1
    • /
    • pp.39-46
    • /
    • 2021
  • This study concerns a method of extracting a summary from a news article in consideration of the importance of each sentence constituting the article. We propose calculating sentence importance from features that affect it: the probability of being a topic sentence, similarity with the article title and with other sentences, and sentence position. We hypothesize that a topic sentence has characteristics distinct from a general sentence, and we train a deep learning-based classification model to obtain a topic sentence probability for each input sentence. In addition, using a pre-trained ELMo language model, the similarity between sentences is computed from sentence vectors that reflect context information and extracted as a sentence feature. The topic sentence classification performance of the LSTM and BERT models reached 93% accuracy, 96.22% recall, and 89.5% precision. Combining the extracted sentence features to compute the importance of each sentence improved topic sentence extraction by about 10% over the existing TextRank algorithm.
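The feature combination the abstract describes can be sketched as a weighted sum of title similarity, position, and similarity to the other sentences. Bag-of-words cosine stands in for the ELMo sentence vectors, and the weights and example article are illustrative, not the paper's.

```python
import math
from collections import Counter

def bow_cosine(a, b):
    """Cosine similarity between bag-of-words vectors of two sentences."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    num = sum(ca[w] * cb[w] for w in ca)
    den = (math.sqrt(sum(v * v for v in ca.values()))
           * math.sqrt(sum(v * v for v in cb.values())))
    return num / den if den else 0.0

def sentence_importance(sentences, title, w_title=0.5, w_pos=0.3, w_sim=0.2):
    """Score each sentence by title similarity, position, and average
    similarity to the other sentences; the weights are illustrative."""
    scores = []
    n = len(sentences)
    for i, s in enumerate(sentences):
        title_sim = bow_cosine(s, title)
        position = 1.0 - i / n  # earlier sentences score higher
        others = [bow_cosine(s, t) for j, t in enumerate(sentences) if j != i]
        avg_sim = sum(others) / len(others) if others else 0.0
        scores.append(w_title * title_sim + w_pos * position + w_sim * avg_sim)
    return scores

title = "city opens new library"
sents = ["the city opened a new library downtown",
         "parking nearby remains difficult",
         "the library offers reading rooms"]
scores = sentence_importance(sents, title)
```

The paper additionally feeds a learned topic-sentence probability into the combination; here that term is omitted for brevity.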

News Article Big Data Analysis based on Machine Learning in Distributed Processing Environments (분산 처리 환경에서의 기계학습 기반의 뉴스 기사 빅 데이터 분석)

  • Oh, Hee-bin;Lee, Jeong-cheol;Kim, Kyungsup
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2017.11a
    • /
    • pp.59-62
    • /
    • 2017
  • In this paper, we deal with a system that analyzes text-form big data using machine learning in a distributed processing environment and produces meaningful data. We designed and implemented a distributed processing system that analyzes the degree of association between keywords in news article big data using machine learning (Word2Vec) within a distributed system environment (Spark), and we designed a visualization system that lets users grasp at a glance the keywords related to their search query.
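The keyword association step described above can be approximated locally without Spark: represent each keyword by its co-occurrence counts and rank neighbors by cosine similarity. This is a single-machine stand-in for the paper's Word2Vec-on-Spark pipeline, with an invented toy corpus.

```python
from collections import Counter, defaultdict
import math

def cooccurrence_vectors(docs):
    """Represent each keyword by its co-occurrence counts with other keywords
    (a local stand-in for Word2Vec embeddings trained on Spark)."""
    vecs = defaultdict(Counter)
    for doc in docs:
        uniq = set(doc)
        for w in uniq:
            for c in uniq:
                if c != w:
                    vecs[w][c] += 1
    return vecs

def related(query, vecs, n=2):
    """Keywords ranked by cosine similarity of their co-occurrence vectors."""
    def cos(u, v):
        num = sum(u[k] * v[k] for k in u)
        den = (math.sqrt(sum(x * x for x in u.values()))
               * math.sqrt(sum(x * x for x in v.values())))
        return num / den if den else 0.0
    q = vecs[query]
    return sorted((w for w in vecs if w != query),
                  key=lambda w: -cos(q, vecs[w]))[:n]

# Toy keyword lists standing in for tokenized news articles.
docs = [["election", "vote", "candidate"], ["election", "vote", "poll"],
        ["weather", "rain", "storm"]]
vecs = cooccurrence_vectors(docs)
```

In the paper's system the same ranking would drive the visualization of keywords associated with the user's search term.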

Political Opinion Mining from Article Comments using Deep Learning

  • Sung, Dae-Kyung;Jeong, Young-Seob
    • Journal of the Korea Society of Computer and Information
    • /
    • v.23 no.1
    • /
    • pp.9-15
    • /
    • 2018
  • Policy polls, which measure the degree of public support for a policy before it is implemented, play an important role in decision making. As the number of Internet users grows, the public actively comments on policy news stories, yet current policy polls tend to rely heavily on phone and offline surveys; collecting and analyzing comments on policy articles is therefore useful for policy surveys. In this study, we propose a method of analyzing comments using deep learning technology, which has shown outstanding performance in various fields. In particular, we designed several models based on the recurrent neural network (RNN), which is suited to sequential data, and compared their performance with the support vector machine (SVM), a traditional machine learning model. Across all test sets, the SVM model showed an accuracy of 0.73 while the RNN model achieved an accuracy of 0.83.
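The task above is binary opinion classification of comments. As a minimal stand-in for the SVM and RNN models compared in the paper, here is a tiny bag-of-words perceptron on invented support/oppose comments; labels, texts, and features are all illustrative.

```python
from collections import defaultdict

def train_perceptron(samples, epochs=10):
    """Tiny bag-of-words perceptron for support (+1) / oppose (-1) labels."""
    w = defaultdict(float)
    for _ in range(epochs):
        for text, label in samples:
            score = sum(w[t] for t in text.split())
            pred = 1 if score >= 0 else -1
            if pred != label:  # mistake-driven update
                for t in text.split():
                    w[t] += label
    return w

def predict(w, text):
    return 1 if sum(w[t] for t in text.split()) >= 0 else -1

# Invented labeled comments; a real study would use annotated article comments.
samples = [("great policy strongly support", 1),
           ("support this reform", 1),
           ("terrible policy oppose", -1),
           ("strongly oppose this plan", -1)]
weights = train_perceptron(samples)
```

An SVM replaces the mistake-driven update with margin maximization, and an RNN replaces the bag-of-words with a learned encoding of the word sequence, which is what gives it the edge the paper reports.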