• Title/Summary/Keyword: Web News


A Quantitative Analysis of Classification Classes and Classified Information Resources of Directory (디렉터리 서비스 분류항목 및 정보자원의 계량적 분석)

  • Kim, Sung-Won
    • Journal of Information Management / v.37 no.1 / pp.83-103 / 2006
  • This study analyzes the classification schemes and classified information resources of the directory services that major web portals provide to complement keyword-based retrieval. Specifically, it quantitatively analyzes the topic categories, the information resources by subject, and the resources classified under specific topic categories of three directories: Yahoo, Naver, and Empas. The analysis reveals several differences among the directory services. Overall, the directories show different ratios of referred categories to original categories depending on the subject area, and the categories regarded as format-based show the highest proportion of referred categories. In terms of the total amount of classified information resources, Yahoo has the largest number, and the directories differ in the amount of resources by subject area. A quantitative analysis of resources classified under a specific category is performed on the 'News & Media' class; it reveals that, judged by the number of information resources categorized, Naver and Empas contain overly specific categories compared with Yahoo. Comparing the depth of the categories that the three directories assign to the same information resources shows that, on average, Yahoo assigns categories one level deeper than the other two directories.
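
The two measures at the heart of this abstract, the referred-to-original category ratio and the average depth of assigned category paths, reduce to simple counting. A minimal sketch of that arithmetic; the paths and counts below are entirely hypothetical, not the paper's data:

```python
# Hypothetical directory data: category paths assigned to sample resources,
# plus counts of original and referred (cross-linked) categories.
sample = {
    "Yahoo": {"paths": ["News/Media/Radio", "Arts/Music/Jazz"], "original": 120, "referred": 30},
    "Naver": {"paths": ["News/Radio", "Arts/Jazz"], "original": 100, "referred": 45},
    "Empas": {"paths": ["News/Radio", "Music/Jazz"], "original": 90, "referred": 40},
}

for name, d in sample.items():
    depths = [p.count("/") + 1 for p in d["paths"]]   # depth = number of path segments
    avg_depth = sum(depths) / len(depths)
    ref_ratio = d["referred"] / d["original"]         # referred-to-original category ratio
    print(f"{name}: average depth {avg_depth:.1f}, referred ratio {ref_ratio:.2f}")
```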

A Study on Strategic Management of Native Advertisement (네이티브 광고의 전략적 관리방안에 관한 연구)

  • Son, Jeyoung; Kang, Inwon
    • Management & Information Systems Review / v.38 no.1 / pp.63-81 / 2019
  • To overcome the drawbacks of existing web advertisement formats such as banner, pop-up, and interstitial ads, native advertising is now actively used. Native advertising is considered a useful technique in that it reduces users' resistance and attracts attention. Recently, however, a great deal of fake news and fake content has disguised advertisements as articles or video content. The purpose of this study is to understand how firms can coordinate and control native advertisements in a rational way. For this analysis, we surveyed 308 social media users using the quota sampling method. The verification found that the more negatively an advertisement is evaluated, the weaker its persuasiveness and the more negative its impact on the website where it is displayed. This study also examined the influence of negative stimulus factors on the qualitative performance of the firm and found that lack of source expertise had the strongest effect on skepticism toward the ad. Platform overflow has a direct effect on the evaluation of the website as well as on the negative evaluation of the advertisement. Moreover, the study provides concrete implications for market segmentation by verifying the differences between paths according to the level of website involvement.

A study on the User Experience at Unmanned Checkout Counter Using Big Data Analysis (빅데이터 분석을 통한 무인계산대 사용자 경험에 관한 연구)

  • Kim, Ae-sook; Jung, Sun-mi; Ryu, Gi-hwan; Kim, Hee-young
    • The Journal of the Convergence on Culture Technology / v.8 no.2 / pp.343-348 / 2022
  • This study analyzes the user experience of unmanned checkout counters as perceived by consumers, using SNS big data. Blogs, news, Q&A postings, cafes, tips, and web documents on Naver and Daum were analyzed, with 'unmanned checkout counter' as the search keyword. The data collection period was the two years from January 1, 2020 to December 31, 2021. For data collection and analysis, frequency and matrix data were extracted with Textom, and network analysis and visualization were conducted using the NetDraw function of the UCINET 6 program. As a result, perceptions of the checkout counter clustered into accessibility, usability, continuous use intention, and others, according to the definition of consumers' experience factors. From a supplier's point of view, if unmanned checkout counters spread indiscriminately in response to minimum wage increases and shortened working hours, a larger employment problem will arise from a social point of view. In addition, institutionalization is needed to supply easy and convenient unmanned checkout counters for the elderly, children, and foreigners who are unfamiliar with unmanned checkout.
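
The pipeline described above relies on two proprietary tools, Textom (keyword frequency and matrix extraction) and UCINET 6 with NetDraw (network analysis and visualization). A rough open-source analogue of that pipeline, sketched with networkx under the assumption of already-tokenized posts; the posts and keywords below are hypothetical:

```python
from collections import Counter
from itertools import combinations
import networkx as nx

# Hypothetical tokenized SNS posts about unmanned checkout counters.
posts = [
    ["unmanned", "checkout", "convenient", "fast"],
    ["unmanned", "checkout", "elderly", "difficult"],
    ["checkout", "elderly", "accessibility"],
]

# Keyword co-occurrence matrix: pairs of keywords appearing in the same post.
cooc = Counter()
for tokens in posts:
    for a, b in combinations(sorted(set(tokens)), 2):
        cooc[(a, b)] += 1

# Network analysis over the co-occurrence matrix (Textom/UCINET analogue).
G = nx.Graph()
for (a, b), w in cooc.items():
    G.add_edge(a, b, weight=w)

# Degree centrality hints at which experience keywords anchor each cluster.
print(sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1])[:3])
```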

Introducing SEABOT: Methodological Quests in Southeast Asian Studies

  • Keck, Stephen
    • SUVANNABHUMI / v.10 no.2 / pp.181-213 / 2018
  • How should Southeast Asia (SEA) be studied? The need to explore and identify methodologies for studying SEA is inherent in its multifaceted subject matter. At a minimum, the region's rich cultural diversity inhibits both the articulation of decisive defining characteristics and the training of scholars who can write with confidence beyond their specialisms. Consequently, the challenges of understanding the region remain, and a consensus on the most effective approaches to studying its history, identity, and future seems quite unlikely. Furthermore, "Area Studies" more generally has proved to be a less attractive frame of reference for burgeoning scholarly trends. This paper proposes a new tool to help address these challenges. Even though the science of artificial intelligence (AI) is in its infancy, it has already yielded new approaches to many commercial, scientific, and humanistic questions. AI has been used to produce news, improve smartphones, deliver more entertainment choices, analyze earthquakes, and write fiction. The time has come to explore the possibility that AI can be put at the service of the study of SEA. The paper lays out what would be required to develop SEABOT, an instrument that might exist as a robot on the web and be called upon to make the study of SEA both broader and more comprehensive. The discussion explores the financial resources, ownership, and timeline needed to take SEABOT from an idea to a reality. SEABOT would draw upon artificial neural networks (ANNs) to mine the region's "Big Data" while synthesizing the information to form new and useful perspectives on SEA. Overcoming significant language issues, applying multidisciplinary methods, and drawing upon new yields of information should produce new questions and new ways to conceptualize SEA. SEABOT could lead to findings that might not otherwise be achieved. Its work might well produce outcomes that open up solutions to immediate regional problems, provide ASEAN planners with new resources, and make it possible eventually to define and capitalize on SEA's "soft power". That is, new findings should provide the basis for ASEAN diplomats and policy-makers to develop new modalities of cultural diplomacy and improved governance. Last, SEABOT might also open up avenues for telling the SEA story in new and distinctive ways. SEABOT is treated here as a heuristic device for exploring the results such an instrument might yield. More importantly, the discussion also raises the possibility that an AI-driven perspective on SEA may prove even more problematic than beneficial.


Improving the Accuracy of Document Classification by Learning Heterogeneity (이질성 학습을 통한 문서 분류의 정확성 향상 기법)

  • Wong, William Xiu Shun; Hyun, Yoonjin; Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.24 no.3 / pp.21-44 / 2018
  • In recent years, the rapid development of Internet technology and the popularization of smart devices have produced massive amounts of text data, distributed through media platforms such as the World Wide Web, Internet news feeds, microblogs, and social media. However, this enormous amount of easily obtained information lacks organization, a problem that has drawn the interest of many researchers. Managing it requires professionals capable of classifying relevant information; hence, text classification was introduced. Text classification is a challenging task in modern data analysis in which a text document must be assigned to one or more predefined categories or classes. Various techniques are available in this field, such as K-Nearest Neighbor, the Naïve Bayes algorithm, Support Vector Machines, Decision Trees, and Artificial Neural Networks. However, when dealing with huge amounts of text data, model performance and accuracy become a challenge: the performance of a text classification model varies with the type of words used in the corpus and the type of features created for classification. Most previous attempts propose a new algorithm or modify an existing one, a line of research that has arguably reached its limits. In this study, rather than proposing or modifying an algorithm, we focus on modifying how the data are used. It is widely known that classifier performance depends on the quality of the training data upon which the classifier is built. Real-world datasets usually contain noise, which can affect the decisions made by classifiers built from such data. We consider that data from different domains, i.e., heterogeneous data, may have noise-like characteristics that can be utilized in the classification process. Machine learning algorithms are normally applied under the assumption that training data and target data share the same or very similar characteristics. For unstructured data such as text, however, features are determined by the vocabulary of the documents, so if the viewpoints of the training data and target data differ, their features may differ as well. We therefore attempt to improve classification accuracy by strengthening the robustness of the document classifier through artificially injecting noise into the process of constructing it. Because data from various sources are likely to be formatted differently, traditional machine learning algorithms struggle with them: they are not designed to recognize different types of data representation at once or to combine them into a single generalization. To utilize heterogeneous data in training the document classifier, we therefore apply semi-supervised learning. However, unlabeled data may degrade the performance of the classifier. We therefore further propose a method called the Rule Selection-Based Ensemble Semi-Supervised Learning Algorithm (RSESLA), which selects only the documents that contribute to improving the classifier's accuracy. RSESLA creates multiple views by manipulating the features using different types of classification models and different types of heterogeneous data; the most confident classification rules are selected and applied for the final decision making. Three types of real-world data sources were used in this paper: news, Twitter, and blogs.
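
The paper does not reproduce RSESLA's code, so the following is only a simplified self-training sketch of the core idea it describes, namely pseudo-labeling unlabeled heterogeneous documents and keeping only those classified with high confidence, using scikit-learn and hypothetical toy documents:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical labeled documents (one domain) and unlabeled documents
# from other, heterogeneous sources.
labeled = ["stocks rally on strong earnings", "the team wins the championship game"]
labels = np.array([0, 1])  # 0 = economy, 1 = sports
unlabeled = ["market gains continue after earnings",
             "the coach praises the players",
             "nice weather in the city today"]

vec = TfidfVectorizer()
X = vec.fit_transform(labeled + unlabeled)
X_lab, X_unl = X[:2].toarray(), X[2:].toarray()

# Train an initial classifier, then pseudo-label the unlabeled pool.
clf = MultinomialNB().fit(X_lab, labels)
proba = clf.predict_proba(X_unl)
confident = proba.max(axis=1) >= 0.6  # rule-selection analogue: keep high confidence only

# Retrain on the labeled set plus only the confidently pseudo-labeled documents.
X_aug = np.vstack([X_lab, X_unl[confident]])
y_aug = np.concatenate([labels, proba.argmax(axis=1)[confident]])
clf = MultinomialNB().fit(X_aug, y_aug)
```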

An Analysis of IT Trends Using Tweet Data (트윗 데이터를 활용한 IT 트렌드 분석)

  • Yi, Jin Baek; Lee, Choong Kwon; Cha, Kyung Jin
    • Journal of Intelligence and Information Systems / v.21 no.1 / pp.143-159 / 2015
  • Predicting IT trends has long been an important subject for information systems research. IT trend prediction makes it possible to recognize emerging eras of innovation and to allocate budgets in preparation for rapidly changing technological trends. Toward the end of each year, various domestic and global organizations predict and announce IT trends for the following year. Gartner, for example, predicts the top 10 IT trends for the coming year, and these predictions shape the basic assumptions of IT and industry leaders about technology and the future of IT; yet the accuracy of such reports is difficult to verify. Social media data can be a useful tool for verifying that accuracy. As social media services have gained popularity, they are used in a variety of ways, from posting about daily life to keeping up with news and trends. In recent years, social media activity in Korea has reached unprecedented levels: hundreds of millions of users participate in online social networks and share their opinions and thoughts with colleagues and friends. Twitter, currently the major microblog service, has an important feature, the 'tweet', through which users report their current thoughts and actions, comment on news, and engage in discussions. We chose tweet data for this analysis because it not only produces massive unstructured textual data in real time but also serves as an influential channel for opinion leadership on technology. Previous studies found that tweet data provide useful information and detect societal trends effectively; they also show that Twitter can track issues faster than other media such as newspapers. This study therefore investigates how frequently the IT trends predicted for the following year by public organizations are mentioned on social network services such as Twitter. IT trend predictions for 2013, announced near the end of 2012 by two domestic organizations, the National IT Industry Promotion Agency (NIPA) and the National Information Society Agency (NIA), were used as the basis for this research. The study analyzes Twitter data generated in Seoul, Korea, against the two organizations' predictions. Twitter data analysis requires various natural language processing techniques, including stop-word removal and noun extraction, to process unrefined, unstructured data. To meet these challenges, we used SAS IRS (Information Retrieval Studio), developed by SAS, to capture trends in real time from big streaming Twitter datasets; the system offers a framework for crawling, normalizing, analyzing, indexing, and searching tweet data. We crawled the Twitter sphere in the Seoul area and obtained 21,589 tweets from 2013 to review how frequently people in Seoul mentioned the IT trend topics announced by the two organizations. The results show that most IT trends predicted by NIPA and NIA were frequently mentioned on Twitter, except for topics such as 'new types of security threat', 'green IT', and 'next-generation semiconductor': these are specialized compound terms, so they tend to appear on Twitter in other word forms. To test whether IT trend tweets from Korea are related to the following year's real-world IT activity, we compared Twitter's trending topics with those in Nara Market, Korea's nationwide web-based e-Procurement system, which handles the whole procurement process of all public organizations in Korea. The correlation analysis shows that tweet frequencies on the IT trend topics predicted by NIPA and NIA are significantly correlated with the frequencies of those topics in project announcements on Nara Market in 2012 and 2013. The main contributions of this research are twofold: i) the IT topic predictions announced by NIPA and NIA can provide an effective guideline for IT professionals and researchers in Korea who are looking for verified IT topic trends for the following year, and ii) researchers can use Twitter to obtain useful ideas for detecting and predicting dynamic technological and social trends.
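
The counting and correlation steps described above are straightforward to reproduce in outline. A minimal sketch follows; the paper itself used SAS Information Retrieval Studio, and all texts and counts below are hypothetical. Note how the compound term "green it" fails to match paraphrased mentions, illustrating the limitation the abstract reports:

```python
from scipy.stats import pearsonr

# Hypothetical tweets and predicted IT trend topics.
tweets = ["big data is everywhere now", "cloud computing saves cost",
          "more big data news today", "green growth policy announced"]
topics = ["big data", "cloud", "green it"]

# Count how many tweets mention each topic string.
tweet_mentions = [sum(topic in tw for tw in tweets) for topic in topics]

# Hypothetical counts of the same topics in Nara Market project announcements.
nara_mentions = [42, 31, 2]

# Correlate tweet mentions with procurement-announcement mentions.
r, p = pearsonr(tweet_mentions, nara_mentions)
print(f"tweet mentions {tweet_mentions}, r={r:.2f}, p={p:.3f}")
```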

The development of resources for the application of 2020 Dietary Reference Intakes for Koreans (2020 한국인 영양소 섭취기준 활용 자료 개발)

  • Hwang, Ji-Yun; Kim, Yangha; Lee, Haeng Shin; Park, EunJu; Kim, Jeongseon; Shin, Sangah; Kim, Ki Nam; Bae, Yun Jung; Kim, Kirang; Woo, Taejung; Yoon, Mi Ock; Lee, Myoungsook
    • Journal of Nutrition and Health / v.55 no.1 / pp.21-35 / 2022
  • The recommended meal composition allows the general public to organize meals using the number of servings from each of six food groups (grains; meat·fish·eggs·beans; vegetables; fruits; milk·dairy products; and oils·sugars) that meet the Dietary Reference Intakes for Koreans (KDRIs) without calculating complex nutritional values. Through an integrated analysis of data from the 6th and 7th Korea National Health and Nutrition Examination Surveys (2013-2018), representative foods for each food group were selected, and per-person amounts of the representative foods were derived based on energy. Based on the estimated energy requirement (EER) by age and gender from the KDRIs, a total of 12 diets were suggested, with meal compositions differentiated by age (1-2, 3-5, 6-11, 12-18, 19-64, 65-74, and ≥ 75 years) and gender. The 2020 Food Balance Wheel included oils and sugars as the sixth food group to raise public awareness and to avoid confusion when industries or individuals use the model in practice to reduce the steadily increasing intake of oils and sugars. To promote everyday use of the Food Balance Wheel and the recommended meal compositions among the general public, a poster of the Food Balance Wheel was created in five languages (Korean, English, Japanese, Vietnamese, and Chinese), along with card news. A survey was conducted to provide a basis for categorizing nutritional problems by life cycle and for developing customized web-based messages for the public; based on its results, two types of card news were produced, one for the general public and one for youth. Additionally, an educational program was developed through a series of processes, including prioritizing educational topics, setting educational goals for each stage, and creating a detailed educational system chart and teaching-learning plans for the development of educational materials and media.

An Analysis for Deriving New Convergent Service of Mobile Learning: The Case of Social Network Analysis and Association Rule (모바일 러닝에서의 신규 융합서비스 도출을 위한 분석: 사회연결망 분석과 연관성 분석 사례)

  • Baek, Heon; Kim, Jin Hwa; Kim, Yong Jin
    • Information Systems Review / v.15 no.3 / pp.1-37 / 2013
  • This study explores the possibility of service convergence to promote mobile learning. It attempts to identify which mobile learning services are provided, which of them are most popular, and which are most in demand by users. The study also investigates potential opportunities for converging mobile services and e-learning, and extends the analysis to the possibility of actively converging services common to both. Important variables were identified from related web pages of portal sites using social network analysis (SNA) and association rules: because web pages differ in the number and type of variables they contain, SNA was used to handle the difficulty of identifying degrees of complex connection, and association analysis was used to identify association rules among variables. The study found that the most frequent services among those common to mobile services and e-learning were Games and SNS, followed by Payment, Advertising, Mail, Event, Animation, Cloud, e-Book, Augmented Reality, and Jobs. It also found that Search, News, and GPS were in very high demand in mobile services, while Simulation, Culture, and Public Education were in high demand in e-learning. In addition, the variable pairs showing high service convergence based on the common variables were Games-SNS, Games-Sports, SNS-Advertising, Games-Event, SNS-e-Book, and Games-Community in mobile services, while in e-learning services the preceding services Games, Animation, Counseling, and e-Book converged strongly with Simulation, Speaking, Public Education, and Attendance Management. Finally, the study attempts to predict the possibility of active service convergence, focusing on Games, SNS, and e-Book, the common services in highest demand across mobile and e-learning services. This study is expected to suggest a strategic direction for promoting mobile learning by converging mobile services and e-learning.
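
As a sketch of the paper's two analytical steps, the snippet below builds a co-occurrence network of services for SNA-style centrality and derives simple association-rule statistics (support and confidence) by direct counting. The page-to-service data is hypothetical, and this is an illustration of the techniques rather than the authors' implementation:

```python
from collections import Counter
from itertools import combinations
import networkx as nx

# Hypothetical portal pages, each listing the services it mentions.
pages = [["Games", "SNS"], ["Games", "SNS", "Payment"],
         ["SNS", "e-Book"], ["Games", "Event"]]

# Social network analysis: services co-appearing on a page become edges.
G = nx.Graph()
for svc in pages:
    G.add_edges_from(combinations(sorted(set(svc)), 2))
print(nx.degree_centrality(G))

# Association rules by direct counting: support and confidence of A -> B.
item, pair = Counter(), Counter()
for svc in pages:
    item.update(set(svc))
    pair.update(combinations(sorted(set(svc)), 2))
for (a, b), c in pair.items():
    print(f"{a} -> {b}: support={c / len(pages):.2f}, confidence={c / item[a]:.2f}")
```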


Twitter Issue Tracking System by Topic Modeling Techniques (토픽 모델링을 이용한 트위터 이슈 트래킹 시스템)

  • Bae, Jung-Hwan; Han, Nam-Gi; Song, Min
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.109-122 / 2014
  • People nowadays create a tremendous amount of data on social network services (SNS). In particular, the incorporation of SNS into mobile devices has resulted in massive data generation, greatly influencing society; this is an unmatched phenomenon in history, and we now live in the age of big data. SNS data qualifies as big data in that it satisfies the conditions of volume (the amount of data), velocity (data input and output speeds), and variety (the diversity of data types). Discovering the trend of an issue in SNS big data yields an important new source for creating value, because this information covers the whole of society. In this study, a Twitter Issue Tracking System (TITS) is designed and built to meet the needs of analyzing SNS big data. TITS extracts issues from Twitter texts and visualizes them on the web. The proposed system provides four functions: (1) a topic keyword set corresponding to a daily ranking; (2) a daily time-series graph of a topic over a month; (3) the importance of a topic through a treemap based on a score system and frequency; and (4) a daily time-series graph of keywords retrieved by keyword search. The present study analyzes big data generated by SNS in real time. SNS big data analysis requires various natural language processing techniques, including stop-word removal and noun extraction, to process unrefined, unstructured data. It also requires up-to-date big data technology for rapidly processing large amounts of real-time data, such as the Hadoop distributed system or NoSQL, an alternative to relational databases. We built TITS on Hadoop to optimize big data processing, because Hadoop is designed to scale from single-node computing to thousands of machines, and we use MongoDB, an open-source, document-oriented NoSQL database that provides high performance, high availability, and automatic scaling. Unlike existing relational databases, MongoDB has no schemas or tables; its most important goals are data accessibility and data processing performance. In the age of big data, visualization is attractive to the big data community because it helps analysts examine data easily and clearly, so TITS uses the d3.js library as its visualization tool. This library is designed for creating Data-Driven Documents that bind the document object model (DOM) to data; interaction between data is easy, and it is useful for managing real-time data streams with smooth animation. TITS also uses Bootstrap, a set of pre-configured style sheets and JavaScript plug-ins, to build the web system. The TITS graphical user interface (GUI), designed with these libraries, can detect issues on Twitter in an easy and intuitive manner. The proposed work demonstrates the superiority of our issue detection techniques by matching detected issues with corresponding online news articles. The contributions of this study are threefold. First, we suggest an alternative approach to real-time big data analysis, which has become an extremely important issue. Second, we apply a topic modeling technique used in various research areas, including Library and Information Science (LIS), and on this basis confirm the utility of storytelling and time-series analysis. Third, we develop a web-based system and make it available for the real-time discovery of topics. The study conducted experiments with nearly 150 million tweets in Korea during March 2013.
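
TITS's full stack (Hadoop, MongoDB, d3.js, Bootstrap) is beyond a short excerpt, but the topic extraction step it depends on can be sketched with scikit-learn's latent Dirichlet allocation. The tweets below are hypothetical, and this is not the system's actual code:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical tweets from one day.
tweets = [
    "subway line delayed again this morning",
    "new phone release long lines at the store",
    "morning subway crowd and delays downtown",
    "phone camera review and store opening",
]

# Bag-of-words term counts (LDA works on raw counts).
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(tweets)

# Fit a small LDA model and print the top keywords per topic,
# analogous to the daily topic keyword sets TITS ranks.
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vec.get_feature_names_out()
for i, comp in enumerate(lda.components_):
    top = [terms[j] for j in comp.argsort()[-4:][::-1]]
    print(f"topic {i}: {top}")
```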

Visualizing the Results of Opinion Mining from Social Media Contents: Case Study of a Noodle Company (소셜미디어 콘텐츠의 오피니언 마이닝결과 시각화: N라면 사례 분석 연구)

  • Kim, Yoosin; Kwon, Do Young; Jeong, Seung Ryul
    • Journal of Intelligence and Information Systems / v.20 no.4 / pp.89-105 / 2014
  • Since the emergence of the Internet, social media with highly interactive Web 2.0 applications has provided very user-friendly means for consumers and companies to communicate with each other. Users routinely publish content expressing their opinions and interests on social media such as blogs, forums, chat rooms, and discussion boards, and this content is released in real time on the Internet. For that reason, many researchers and marketers regard social media content as a source of information for business analytics, and many studies have reported results on mining business intelligence from it. In particular, opinion mining and sentiment analysis, techniques for extracting, classifying, understanding, and assessing the opinions implicit in text, are frequently applied to social media content analysis because they focus on determining sentiment polarity and extracting authors' opinions. A number of frameworks, methods, techniques, and tools have been presented by researchers, but we found that these methods are often technically complicated and not sufficiently user-friendly for supporting business decisions and planning. In this study, we formulate a more comprehensive and practical approach to opinion mining with visual deliverables. First, we describe the entire cycle of practical opinion mining on social media content, from initial data gathering to the final presentation. Our approach consists of four phases: collecting, qualifying, analyzing, and visualizing. In the first phase, analysts choose the target social media; each target requires a different means of access, such as open APIs, search tools, DB-to-DB interfaces, or purchased content. The second phase is pre-processing to generate useful material for meaningful analysis: unless garbage data are removed, social media analysis will not yield meaningful business insights, so natural language processing techniques should be applied to clean the data. The next step is the opinion mining phase, in which the cleansed social media content is analyzed. The qualified data set includes not only user-generated content but also identifying information such as creation date, author name, user ID, content ID, hit counts, reviews or replies, favorites, and so on. Depending on the purpose of the analysis, researchers or data analysts select a suitable mining tool: topic extraction and buzz analysis usually serve market trend analysis, while sentiment analysis is used for reputation analysis, with further applications such as stock prediction, product recommendation, and sales forecasting. The last phase is visualization and presentation of the results. Its major purpose is to explain the results and help users comprehend their meaning, so deliverables should, to the extent possible, be simple, clear, and easy to understand rather than complex and flashy. To illustrate our approach, we conducted a case study of a leading Korean instant noodle company, NS Food, which holds a 66.5% market share and has kept the No. 1 position in the Korean "ramen" business for several decades. We collected a total of 11,869 pieces of content, including blogs, forum posts, and news articles. After collecting the social media content, we generated instant-noodle-business-specific language resources for data manipulation and analysis using natural language processing, and classified the content into more detailed categories such as marketing features, environment, and reputation. In these phases, we used free software, including the TM, KoNLP, ggplot2, and plyr packages of the R project. As a result, we present several useful visualization outputs, such as domain-specific lexicons, volume and sentiment graphs, topic word clouds, heat maps, valence tree maps, and other visualized images, providing vivid, full-colored examples built with open-library software packages from the R project. Business actors can detect at a glance areas that are weak, strong, positive, negative, quiet, or loud. The heat map shows the movement of sentiment or volume across a category-by-time matrix, where color density indicates intensity over time periods. The valence tree map, one of the most comprehensive and holistic visualization models, should be very helpful for analysts and decision makers in quickly understanding the "big picture" business situation, since its hierarchical structure can present buzz volume and sentiment for a given period in a single visualized result. This case study offers real-world business insights from market sensing and demonstrates to practically minded business users how such results support timely decision making in response to ongoing market changes. We believe our approach provides a practical and reliable guide to opinion mining with visualized results that are immediately useful, not just in the food industry but in other industries as well.
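
The case study's pipeline was implemented in R with the TM, KoNLP, ggplot2, and plyr packages. As an illustration only, the following Python sketch reproduces the idea behind the category-by-time sentiment heat map, with a hypothetical lexicon and posts rather than the study's data:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical sentiment lexicon and (category, month, text) posts.
lexicon = {"tasty": 1, "love": 1, "salty": -1, "expensive": -1}
posts = [
    ("taste", "2014-01", "so tasty love it"),
    ("price", "2014-01", "too expensive"),
    ("taste", "2014-02", "a bit salty"),
]

# Aggregate lexicon-based polarity scores into a category-by-month matrix.
cats, months = ["taste", "price"], ["2014-01", "2014-02"]
matrix = np.zeros((len(cats), len(months)))
for cat, month, text in posts:
    score = sum(lexicon.get(w, 0) for w in text.split())
    matrix[cats.index(cat), months.index(month)] += score

# Heat map: color density shows net sentiment per category and period.
plt.imshow(matrix, cmap="RdYlGn")
plt.xticks(range(len(months)), months)
plt.yticks(range(len(cats)), cats)
plt.colorbar(label="net sentiment")
plt.show()
```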