• Title/Summary/Keyword: Web News

Search results: 247

A Study on the Relationships among SNS Characteristics, Satisfaction and User Acceptance

  • Ko, Changbae; Yoon, Jongsoo
    • Journal of the Korea Society of Computer and Information / v.20 no.11 / pp.143-150 / 2015
  • Social network services (SNS) can be defined as personal web pages that enable online relationship building by collecting useful information and sharing it with specific or unspecified people. Recently, SNS such as Twitter and Facebook have attracted attention in many fields of society. SNS are also one of the fastest channels for getting news that people may not see on TV or in newspapers, the number of people who feel they benefit from SNS is increasing dramatically, and a growing body of research on SNS is underway. Based on the Technology Acceptance Model, this study empirically investigates the relationship between SNS characteristics (system, service, information, and emotional) and user satisfaction with SNS. The study also analyzes how the relationships among SNS characteristics, satisfaction, and user acceptance are moderated by the users' country and their inclination toward SNS acceptance. To achieve these research purposes, the study conducted various statistical analyses using a questionnaire survey of Korean and Chinese SNS users. The results are as follows. First, SNS characteristics have a positive effect on user satisfaction. Second, SNS satisfaction has a positive effect on user acceptance. Third, the relationship between SNS characteristics and user satisfaction is moderated by the users' country and their inclination toward SNS acceptance. These results offer implications for researchers interested in SNS and can help business managers operate and develop their SNS sites more effectively.
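
Moderating effects of the kind reported in this abstract are commonly tested with interaction terms in a regression model. The sketch below is a hedged illustration of that general technique only, not the authors' procedure; the column names (characteristics, satisfaction, country) and the toy data are assumptions.

```python
# A hedged illustration of a moderation test with an interaction term; column names
# and the toy data are assumptions, not the authors' survey items or procedure.
import pandas as pd
import statsmodels.formula.api as smf

# Toy survey data: mean SNS-characteristics score, satisfaction score, and user country.
df = pd.DataFrame({
    "characteristics": [3.2, 4.1, 2.8, 4.5, 3.9, 2.5, 4.2, 3.6],
    "satisfaction":    [3.0, 4.3, 2.9, 4.6, 4.0, 2.4, 4.1, 3.5],
    "country":         ["KR", "KR", "KR", "KR", "CN", "CN", "CN", "CN"],
})

# The characteristics:country interaction tests whether the effect of SNS
# characteristics on satisfaction differs between the two user groups.
model = smf.ols("satisfaction ~ characteristics * C(country)", data=df).fit()
print(model.summary())
```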

A study on the User Experience at Unmanned Checkout Counter Using Big Data Analysis (빅데이터를 활용한 편의점 간편식에 대한 의미 분석)

  • Kim, Ae-sook; Ryu, Gi-hwan; Jung, Ju-hee; Kim, Hee-young
    • The Journal of the Convergence on Culture Technology / v.8 no.4 / pp.375-380 / 2022
  • The purpose of this study is to identify consumers' perceptions of convenience store convenience food, and what it means to them, using big data. For this study, news, Q&A posts, blogs, cafes, and web documents from NAVER and Daum were analyzed, with 'convenience store convenience food' as the search keyword. The data analysis period covered the three years from January 1, 2019 to December 31, 2021. For data collection and analysis, frequency and matrix data were extracted using TEXTOM, and network analysis and visualization were conducted using the NetDraw function of the UCINET 6 program. As a result, convenience store convenience foods were clustered into health, diversity, convenience, and economy according to consumers' selection attributes. The findings are expected to serve as a basis for developing new convenience-food menus that reflect what convenience store convenience foods mean to consumers, such as appropriate prices, discount coupons, and events.
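
TEXTOM and UCINET/NetDraw are GUI tools, so the frequency, matrix, and network steps described above cannot be reproduced verbatim here. The sketch below shows a rough open-source analogue with plain Python and networkx on invented English keywords, purely as an illustration of the pipeline.

```python
# A rough open-source analogue of the frequency / co-occurrence matrix / network step;
# TEXTOM and UCINET (NetDraw) are GUI tools, so plain Python and networkx are used
# instead, on invented English keywords.
from collections import Counter
from itertools import combinations
import networkx as nx

docs = [
    ["convenience", "store", "price", "discount"],
    ["convenience", "food", "health", "price"],
    ["food", "health", "diversity", "event"],
]

# Term frequencies and pairwise co-occurrence counts (the "frequency and matrix data").
freq = Counter(t for doc in docs for t in doc)
cooc = Counter()
for doc in docs:
    for a, b in combinations(sorted(set(doc)), 2):
        cooc[(a, b)] += 1

# Build the keyword network and rank nodes by degree centrality, the kind of measure
# usually read off a NetDraw visualization.
G = nx.Graph()
for (a, b), w in cooc.items():
    G.add_edge(a, b, weight=w)

print(freq.most_common(3))
print(sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1])[:5])
```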

Training Techniques for Data Bias Problem on Deep Learning Text Summarization (딥러닝 텍스트 요약 모델의 데이터 편향 문제 해결을 위한 학습 기법)

  • Cho, Jun Hee; Oh, Hayoung
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.7 / pp.949-955 / 2022
  • Deep learning-based text summarization models are not free from their training datasets. For example, a summarization model trained on a news summarization dataset is not good at summarizing other types of text such as internet posts and papers. In this study, we define this phenomenon as the Data Bias Problem (DBP) and propose two training methods for solving it. The first is 'proper noun masking', which masks proper nouns. The second is 'length variation', which randomly inflates or deflates the length of the input text. Experiments show that our methods are effective for mitigating DBP. In addition, we analyze the experimental results and present future development directions. Our contributions are as follows: (1) we discovered DBP and defined it for the first time; (2) we proposed two efficient training methods and conducted actual experiments; and (3) our methods can be applied to any summarization model and are easy to implement, so they are highly practical.
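
A minimal sketch of the two augmentations named above follows, assuming English toy input and NLTK's part-of-speech tagger as a stand-in for a proper-noun recognizer; the mask token, probabilities, and tokenization are illustrative assumptions, not the paper's actual pipeline.

```python
# Minimal sketch of 'proper noun masking' and 'length variation'; uses NLTK POS tags
# as a stand-in for a proper-noun recognizer (illustrative only, not the paper's code).
import random
import nltk  # requires nltk.download("punkt") and nltk.download("averaged_perceptron_tagger")

def mask_proper_nouns(text, mask_token="[MASK]"):
    """Replace tokens tagged as proper nouns (NNP/NNPS) with a mask token."""
    tagged = nltk.pos_tag(nltk.word_tokenize(text))
    return " ".join(mask_token if tag in ("NNP", "NNPS") else tok for tok, tag in tagged)

def length_variation(sentences, keep_prob=0.8, dup_prob=0.2, seed=None):
    """Randomly drop or duplicate sentences so the model sees inputs of varying length."""
    rng = random.Random(seed)
    out = []
    for s in sentences:
        if rng.random() < keep_prob:
            out.append(s)
            if rng.random() < dup_prob:
                out.append(s)  # inflate
        # else: sentence dropped (deflate)
    return out

print(mask_proper_nouns("Samsung unveiled a new phone in Seoul on Monday."))
```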

Change in Market Issues on HMR (Home Meal Replacements) Using Local Foods after the COVID-19 Outbreak: Text Mining of Online Big Data (코로나19 발생 후 지역농산물 이용 간편식에 대한 시장 이슈 변화: 온라인 빅데이터의 텍스트마이닝)

  • Yoojeong, Joo; Woojin, Byeon; Jihyun, Yoon
    • Journal of the Korean Society of Food Culture / v.38 no.1 / pp.1-14 / 2023
  • This study was conducted to explore changes in market issues concerning HMR (Home Meal Replacements) using local foods after the COVID-19 outbreak. Online text data were collected from internet news, social media posts, and web documents before (January 2016 to December 2019) and after (January 2020 to November 2022) the COVID-19 outbreak. TF-IDF analysis showed that 'Trend', 'Market', 'Consumption', and 'Food service industry' were the major keywords before the COVID-19 outbreak, whereas 'Wanju-gun', 'Distribution', 'Development', and 'Meal-kit' were the main keywords afterwards. The results of topic modeling analysis and categorization showed that after the COVID-19 outbreak, the 'Market' category included 'Non-face-to-face market' instead of 'Event', and 'Delivery' instead of 'Distribution'. In the 'Product' category, 'Marketing' was included instead of 'Trend'. Additionally, in the 'Support' category, 'Start-up' and 'School food service' appeared as new topics after the COVID-19 outbreak. In conclusion, this study showed that meaningful changes had occurred in market issues on HMR using local foods after the COVID-19 outbreak. Therefore, governments should take advantage of this market opportunity by implementing policies and programs to promote the development and marketing of HMR using local foods.
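
The before/after keyword comparison above rests on TF-IDF scores. The sketch below illustrates that step with scikit-learn on two tiny invented corpora; the study's Korean-language preprocessing, data sources, and actual keyword lists are not reproduced.

```python
# Illustrative TF-IDF comparison on two tiny invented corpora (not the study's data).
from sklearn.feature_extraction.text import TfidfVectorizer

before = ["HMR trend market consumption", "food service industry market trend"]
after = ["meal kit delivery development", "non face to face market meal kit distribution"]

def top_terms(corpus, k=4):
    """Rank terms by their aggregate TF-IDF weight across the corpus."""
    vec = TfidfVectorizer()
    X = vec.fit_transform(corpus)
    scores = X.sum(axis=0).A1  # total TF-IDF weight per term
    return sorted(zip(vec.get_feature_names_out(), scores), key=lambda t: -t[1])[:k]

print("before COVID-19:", top_terms(before))
print("after COVID-19:", top_terms(after))
```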

A Study of Integrated Press System Implementation for Traffic Information (교통정보 언론제공 연계시스템 구축에 관한 연구)

  • Chung, Sung-Hak; Park, Hoy-Ryong
    • Journal of the Korea Society of Computer and Information / v.14 no.9 / pp.147-156 / 2009
  • The aim of this study is to propose an integrated press system design for traffic information services. The design interconnects traffic information services that are currently provided under different requirements for each press outlet, and supports an advanced traveler information service with a user-oriented design covering traffic broadcasts, news, journals, the semantic web, and related traffic ontologies as well as road traffic information. To this end, the status of domestic and foreign traffic information supply systems was analyzed, the requirements of each medium were reviewed, and an implementation method and implementation system were derived. The proposed design enables information users to utilize a variety of traffic information by mediating between the needs of information users and information suppliers. As a result, users of the integrated system benefit from more efficient travel and greater economic value from the information. Providing traffic information through the press media will be useful to road drivers, and it is expected that traffic volume will be dispersed and congestion relieved as a result.

Unraveling the Web of Health Misinformation: Exploring the Characteristics, Emotions, and Motivations of Misinformation During the COVID-19 Pandemic

  • Vinit Yadav; Yukti Dhadwal; Rubal Kanozia; Shri Ram Pandey; Ashok Kumar
    • Asian Journal for Public Opinion Research / v.12 no.1 / pp.53-74 / 2024
  • The proliferation of health misinformation gained momentum amid the outbreak of the novel coronavirus disease 2019 (COVID-19). People confined to their homes, often without work pressure, consistently demanded information about health concerns affecting themselves, their families, and their peer groups, and they encountered misinformation while searching for health content. This study used the content analysis method and analyzed 1,154 misinformation stories from four prominent signatories of the International Fact-Checking Network during the pandemic. The study identifies five main categories of misinformation related to the COVID-19 pandemic: 1) the severity of the virus; 2) cure, prevention, and treatment; 3) myths and rumors about vaccines; 4) health authorities' guidelines; and 5) personal and social impacts. Various sub-categories supported the content characteristics of these categories. The study also analyzed the emotional valence of health misinformation. Misinformation containing negative sentiment received higher engagement during the pandemic, whereas misinformation with positive or neutral sentiment had less reach. Surprise, fear, and anger/aggression strongly affected people during the pandemic, and warning people to safeguard themselves from COVID-19 and creating confusion were found to be the primary motivations behind the propagation of misinformation. The present study offers valuable perspectives on the mechanisms underlying the spread of health-related misinformation during the COVID-19 outbreak and highlights the importance of discerning the accuracy of information, and the feelings it conveys, in minimizing adverse effects on public health.

A Study on the Purchasing Factors of Color Cosmetics Using Big Data: Focusing on Topic Modeling and Concor Analysis (빅데이터를 활용한 색조화장품의 구매 요인에 관한 연구: 토픽모델링과 Concor 분석을 중심으로)

  • Eun-Hee Lee; Seung-Hee Bae
    • Journal of the Korean Applied Science and Technology / v.40 no.4 / pp.724-732 / 2023
  • In this study, we collected data on consumers' online interest in the color cosmetics market after COVID-19 and analyzed, through text mining, the characteristics of color cosmetics information searches and the main topics of interest. In the empirical analysis, text mining was performed on all documents, including news, blogs, cafes, and web pages, containing the word "color cosmetics". The analysis showed that online information searches for color cosmetics after COVID-19 focused mainly on purchase information, information on skin- and mask-related makeup methods, and topics such as brands of interest and event information. Accordingly, post-COVID-19 color cosmetics buyers are expected to become more sensitive to purchase information such as product value, safety, price benefits, and store information through active online information search, so a corresponding response strategy is required.

Exploring the phenomenon of veganphobia in vegan food and vegan fashion (비건 음식과 비건 패션에서 나타난 비건포비아 현상에 대한 탐구)

  • Yeong-Hyeon Choi; Sangyung Lee
    • The Research Journal of the Costume Culture / v.32 no.3 / pp.381-397 / 2024
  • This study investigates the negative perceptions (veganphobia) held by consumers toward vegan diets and vegan fashion, and aims to foster a genuine acceptance of ethical veganism in consumption. The textual data were web-crawled from Korean online posts, including news articles, blogs, forums, and tweets, published from 2013 to 2021 and containing keywords such as "contradiction," "dilemma," "conflict," "issues," "vegan food," and "vegan fashion." Data analysis was conducted through text mining, network analysis, and clustering analysis using Python and NodeXL. The analysis revealed distinct negative perceptions regarding vegan food. Key issues included the perception of hypocrisy among vegetarians, associations with specific political leanings, conflicts between environmental and animal rights, and contradictions between views on companion animals and livestock. Regarding the vegan fashion industry, the eco-friendliness of material selection and design processes was seen as the pivotal factor shaping negative attitudes. Furthermore, the study identified a shared negative perception of vegan food and vegan fashion, characterized by confusion and conflict between animal and environmental rights, biased perceptions linked to specific political affiliations, perceived self-righteousness among vegetarians, and general discomfort toward them. These factors collectively contributed to a broader negative perception of vegan consumption. In conclusion, this study is significant for understanding the complex perceptions and attitudes that consumers hold toward vegan food and fashion. The insights gained from this research can aid in designing more effective campaign strategies aimed at promoting vegan consumerism, ultimately contributing to wider acceptance of ethical veganism in society.
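
The keyword-network clustering reported above was done with Python and NodeXL. The sketch below is a hedged analogue using networkx community detection on an invented edge list; the study's Korean keywords, edge weights, and NodeXL workflow are not reproduced.

```python
# Hedged analogue of keyword-network clustering; the edge list is invented for
# illustration and does not come from the study's data.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Invented keyword co-occurrence edges (keyword, keyword, weight).
edges = [
    ("vegan", "hypocrisy", 3), ("vegan", "politics", 2), ("hypocrisy", "politics", 2),
    ("vegan", "fashion", 4), ("fashion", "material", 3), ("material", "eco-friendly", 3),
    ("vegan", "animal rights", 2), ("animal rights", "environment", 2),
]

G = nx.Graph()
G.add_weighted_edges_from(edges)

# Each community is a set of co-occurring keywords, analogous to a perception cluster.
for i, community in enumerate(greedy_modularity_communities(G, weight="weight"), start=1):
    print(f"cluster {i}:", sorted(community))
```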

A New Approach to Automatic Keyword Generation Using Inverse Vector Space Model (키워드 자동 생성에 대한 새로운 접근법: 역 벡터공간모델을 이용한 키워드 할당 방법)

  • Cho, Won-Chin; Rho, Sang-Kyu; Yun, Ji-Young Agnes; Park, Jin-Soo
    • Asia Pacific Journal of Information Systems / v.21 no.1 / pp.103-122 / 2011
  • Recently, numerous documents have been made available electronically. Internet search engines and digital libraries commonly return query results containing hundreds or even thousands of documents. In this situation, it is virtually impossible for users to examine complete documents to determine whether they might be useful. For this reason, some online documents are accompanied by a list of keywords specified by the authors in an effort to guide users by facilitating the filtering process. In this way, a set of keywords is often considered a condensed version of the whole document and therefore plays an important role in document retrieval, Web page retrieval, document clustering, summarization, text mining, and so on. Since many academic journals ask authors to provide a list of five or six keywords on the first page of an article, keywords are most familiar in the context of journal articles. However, many other types of documents that could benefit from keywords, including Web pages, email messages, news reports, magazine articles, and business papers, typically lack them. Although the potential benefit is large, implementation is the obstacle: manually assigning keywords to all documents is a daunting, even impractical task, in that it is extremely tedious and time-consuming and requires a certain level of domain knowledge. Therefore, it is highly desirable to automate the keyword generation process. There are mainly two approaches to achieving this aim: the keyword assignment approach and the keyword extraction approach. Both approaches use machine learning methods and require, for training purposes, a set of documents with keywords already attached. In the former approach, a vocabulary is given in advance, and the aim is to match its terms to the texts; in other words, the keyword assignment approach seeks to select the words from a controlled vocabulary that best describe a document. Although this approach is domain dependent and is not easy to transfer and expand, it can generate implicit keywords that do not appear in a document. In the latter approach, the aim is to extract keywords with respect to their relevance in the text, without a prior vocabulary. Here, automatic keyword generation is treated as a classification task, and keywords are commonly extracted based on supervised learning techniques; keyword extraction algorithms classify candidate keywords in a document into positive or negative examples. Several systems such as Extractor and Kea were developed using the keyword extraction approach. The most indicative words in a document are selected as its keywords, so keyword extraction is limited to terms that appear in the document and cannot generate implicit keywords. According to the experimental results of Turney, about 64% to 90% of the keywords assigned by authors can be found in the full text of an article. Conversely, this means that 10% to 36% of author-assigned keywords do not appear in the article and therefore cannot be generated by keyword extraction algorithms. Our preliminary experiment also shows that 37% of author-assigned keywords are not included in the full text. This is why we have decided to adopt the keyword assignment approach. In this paper, we propose a new approach for automatic keyword assignment, namely IVSM (Inverse Vector Space Model). The model is based on the vector space model, a conventional information retrieval model that represents documents and queries as vectors in a multidimensional space. IVSM generates an appropriate keyword set for a specific document by measuring the distance between the document and the keyword sets. The keyword assignment process of IVSM is as follows: (1) calculating the vector length of each keyword set based on each keyword weight; (2) preprocessing and parsing a target document that does not have keywords; (3) calculating the vector length of the target document based on term frequency; (4) measuring the cosine similarity between each keyword set and the target document; and (5) generating keywords that have high similarity scores. Two keyword generation systems were implemented using IVSM: one for a Web-based community service and a stand-alone system. The former is implemented in a community service for sharing knowledge and opinions on current trends such as fashion, movies, social problems, and health information. The stand-alone IVSM system is dedicated to generating keywords for academic papers and has been tested on a number of academic papers, including those published by the Korean Association of Shipping and Logistics, the Korea Research Academy of Distribution Information, the Korea Logistics Society, the Korea Logistics Research Association, and the Korea Port Economic Association. We measured the performance of IVSM by the number of matches between the IVSM-generated keywords and the author-assigned keywords. According to our experiments, the precision of IVSM applied to the Web-based community service and to academic journals was 0.75 and 0.71, respectively. The performance of both systems is much better than that of baseline systems that generate keywords based on simple probability. IVSM also shows performance comparable to Extractor, a representative keyword extraction system developed by Turney. As the number of electronic documents grows, we expect that the IVSM proposed in this paper can be applied to many electronic documents in Web-based communities and digital libraries.
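
Steps (1) to (5) above outline the IVSM assignment procedure. The code below is a compact, hedged rendering of those steps with invented keyword sets and weights; the paper's preprocessing, weighting scheme, and training data are not specified here, so everything named in the snippet is an assumption rather than the authors' implementation.

```python
# Compact, hedged rendering of IVSM steps (1)-(5); keyword sets, weights, and the
# toy document are invented, and the preprocessing is deliberately crude.
import math
from collections import Counter

# (1) Hypothetical keyword-set vectors (keyword -> weight); vector lengths are
#     computed inside cosine() below.
keyword_sets = {
    "logistics":   {"port": 0.6, "shipping": 0.5, "distribution": 0.4},
    "text mining": {"keyword": 0.7, "document": 0.5, "extraction": 0.4},
}

def cosine(a, b):
    """Cosine similarity between two sparse term-weight vectors stored as dicts."""
    dot = sum(a.get(t, 0.0) * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def assign_keywords(document, k=1):
    # (2)-(3) Crude preprocessing and a term-frequency vector for the target document.
    doc_vec = Counter(document.lower().split())
    # (4)-(5) Rank keyword sets by cosine similarity and return the top-scoring ones.
    scored = [(label, cosine(doc_vec, vec)) for label, vec in keyword_sets.items()]
    return sorted(scored, key=lambda s: -s[1])[:k]

print(assign_keywords("This document studies keyword extraction from a document corpus."))
```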

A Proposal of a Keyword Extraction System for Detecting Social Issues (사회문제 해결형 기술수요 발굴을 위한 키워드 추출 시스템 제안)

  • Jeong, Dami; Kim, Jaeseok; Kim, Gi-Nam; Heo, Jong-Uk; On, Byung-Won; Kang, Mijung
    • Journal of Intelligence and Information Systems / v.19 no.3 / pp.1-23 / 2013
  • To discover significant social issues such as unemployment, economic crisis, and social welfare, which are urgent problems to be solved in modern society, the existing approach has researchers collect opinions from professional experts and scholars through online or offline surveys. However, such a method is not always effective. Due to the expense involved, a large number of survey replies is seldom gathered, and in some cases it is hard to find experts dealing with specific social issues, so the sample set is often small and may be biased. Furthermore, regarding a given social issue, several experts may reach totally different conclusions because each has a subjective point of view and a different background. In this situation, it is considerably hard to figure out what the current social issues are and which of them are really important. To overcome the shortcomings of the current approach, in this paper we develop a prototype system that semi-automatically detects social issue keywords representing social issues and problems from about 1.3 million news articles issued by about 10 major domestic presses in Korea from June 2009 until July 2012. Our proposed system consists of (1) collecting and extracting texts from the collected news articles, (2) identifying only news articles related to social issues, (3) analyzing the lexical items of Korean sentences, (4) finding a set of topics regarding social keywords over time based on probabilistic topic modeling, (5) matching relevant paragraphs to a given topic, and (6) visualizing social keywords for easy understanding. In particular, we propose a novel matching algorithm relying on generative models. The goal of our matching algorithm is to best match paragraphs to each topic. Technically, using a topic model such as Latent Dirichlet Allocation (LDA), we can obtain a set of topics, each of which has relevant terms and their probability values. In our problem, given a set of text documents (e.g., news articles), LDA produces a set of topic clusters, and each topic cluster is then labeled by human annotators, where each topic label stands for a social keyword. For example, suppose there is a topic (e.g., Topic1 = {(unemployment, 0.4), (layoff, 0.3), (business, 0.3)}) and a human annotator labels Topic1 as "Unemployment Problem". In this example, it is non-trivial to understand what happened to the unemployment problem in our society; in other words, looking only at social keywords, we have no idea of the detailed events occurring in our society. To tackle this matter, we develop a matching algorithm that computes the probability of a paragraph given a topic, relying on (i) the topic terms and (ii) their probability values. Given a set of text documents, we segment each document into paragraphs, and using LDA we extract a set of topics from the documents. Based on our matching process, each paragraph is assigned to the topic it best matches, so that each topic ends up with several best-matched paragraphs. For instance, given a topic (e.g., Unemployment Problem) and its best-matched paragraph (e.g., "Up to 300 workers lost their jobs at XXX company in Seoul"), we can grasp detailed information related to the social keyword, such as "300 workers", "unemployment", "XXX company", and "Seoul". In addition, our system visualizes social keywords over time. Therefore, through our matching process and keyword visualization, researchers will be able to detect social issues easily and quickly. Using this prototype system, we have detected various social issues appearing in our society and have shown the effectiveness of our proposed methods through experimental results. Note that our proof-of-concept system is also available at http://dslab.snu.ac.kr/demo.html.
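
The topic-then-match idea described above (extract topics with LDA, then assign each paragraph to a topic) can be illustrated with a short sketch. The snippet below uses scikit-learn's LDA on four invented English paragraphs and approximates the authors' generative matching model by taking the argmax of the document-topic distribution; it is not their actual algorithm or data.

```python
# Illustrative sketch, not the authors' system: scikit-learn LDA on invented English
# paragraphs, with each paragraph matched to its most probable topic.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

paragraphs = [
    "Up to 300 workers lost their jobs at a company in Seoul amid layoffs",
    "The unemployment rate rose as businesses cut jobs during the downturn",
    "Welfare spending on elderly care and social services increased this year",
    "The government expanded social welfare programs for low income families",
]

# Bag-of-words representation of the paragraphs.
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(paragraphs)

# Extract two topics; in the paper, each topic cluster would be labeled by an annotator.
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Matching step: assign each paragraph to the topic with the highest probability.
doc_topic = lda.transform(X)
for text, probs in zip(paragraphs, doc_topic):
    print(f"topic {probs.argmax()} ({probs.max():.2f}):", text[:45])
```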