• Title/Summary/Keyword: mobile media

Search Results: 1,423

Clickstream Big Data Mining for Demographics based Digital Marketing (인구통계특성 기반 디지털 마케팅을 위한 클릭스트림 빅데이터 마이닝)

  • Park, Jiae; Cho, Yoonho
    • Journal of Intelligence and Information Systems / v.22 no.3 / pp.143-163 / 2016
  • The demographics of Internet users are the most basic and important source for target marketing and personalized advertising on digital marketing channels such as email, mobile, and social media. However, it has gradually become difficult to collect the demographics of Internet users because their activities are anonymous in many cases. Although a marketing department can obtain demographics through online or offline surveys, these approaches are expensive, time-consuming, and prone to false responses. Clickstream data is the record an Internet user leaves behind while visiting websites. As the user clicks anywhere on a webpage, the activity is logged in semi-structured website log files. Such data allows us to see which pages users visited, how long they stayed, how often and when they visited, which sites they prefer, what keywords they used to find a site, whether they purchased anything, and so forth. For this reason, some researchers have tried to infer the demographics of Internet users from their clickstream data. They derived various independent variables likely to be correlated with demographics, including search keywords; frequency and intensity by time, day, and month; the variety of websites visited; and text information from the web pages visited. The demographic attributes to be predicted also vary across studies and cover gender, age, job, location, income, education, marital status, and the presence of children. A variety of data mining methods, such as LSA, SVM, decision trees, neural networks, logistic regression, and k-nearest neighbors, have been used to build prediction models. However, previous research has not yet identified which data mining method is appropriate for predicting each demographic variable. Moreover, the independent variables studied so far need to be reviewed, combined as needed, and evaluated in order to build the best prediction model. The objective of this study is to choose the clickstream attributes most likely to be correlated with demographics from the results of previous research, and then to identify which data mining method is best suited to predicting each demographic attribute. Among the demographic attributes, this paper focuses on predicting gender, age, marital status, residence, and job, and, based on the results of previous research, 64 clickstream attributes are used to predict them. The overall process of predictive model building consists of four steps. In the first step, we create user profiles that include the 64 clickstream attributes and 5 demographic attributes. The second step performs dimension reduction of the clickstream variables to address the curse of dimensionality and the overfitting problem; we utilize three approaches based on decision trees, PCA, and cluster analysis. In the third step, we build alternative predictive models for each demographic variable, using SVM, neural networks, and logistic regression. The last step evaluates the alternative models in terms of accuracy and selects the best model. For the experiments, we used clickstream data covering 5 demographic attributes and 16,962,705 online activities for 5,000 Internet users. IBM SPSS Modeler 17.0 was used for the prediction process, and 5-fold cross validation was conducted to enhance the reliability of the experiments. The experimental results confirm that there is a specific data mining method well suited to each demographic variable. For example, age is best predicted using decision tree based dimension reduction with a neural network, whereas gender and marital status are predicted most accurately by applying SVM without dimension reduction. We conclude that the online behaviors of Internet users, captured through clickstream data analysis, can be used effectively to predict their demographics and thus be applied to digital marketing.
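
For reference, the sketch below shows one way the four-step workflow described in this abstract (profile loading, dimension reduction, alternative model building, and 5-fold cross-validated evaluation) could be reproduced. The paper itself used IBM SPSS Modeler 17.0, so the choice of scikit-learn, the file name profiles.csv, the column names, and the PCA component count are illustrative assumptions, not the authors' implementation.

```python
# A minimal scikit-learn sketch of the abstract's 4-step workflow.
# The authors used IBM SPSS Modeler 17.0; the data file, column names,
# and hyperparameters below are hypothetical placeholders.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Step 1: user profiles -- 64 clickstream attributes plus demographic labels.
profiles = pd.read_csv("profiles.csv")  # hypothetical file
X = profiles.drop(columns=["gender", "age", "marital_status", "residence", "job"])
y = profiles["gender"]  # repeat for each demographic target

# Steps 2-3: alternative models, with and without dimension reduction.
candidates = {
    "SVM, no reduction": Pipeline([
        ("scale", StandardScaler()),
        ("clf", SVC(kernel="rbf")),
    ]),
    "Neural network + PCA": Pipeline([
        ("scale", StandardScaler()),
        ("pca", PCA(n_components=20)),  # assumed number of components
        ("clf", MLPClassifier(max_iter=1000)),
    ]),
    "Logistic regression + PCA": Pipeline([
        ("scale", StandardScaler()),
        ("pca", PCA(n_components=20)),
        ("clf", LogisticRegression(max_iter=1000)),
    ]),
}

# Step 4: evaluate each alternative by 5-fold cross-validated accuracy.
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} (+/- {scores.std():.3f})")
```

Running such a comparison separately for each demographic target mirrors the paper's finding that different combinations win for different attributes (e.g., SVM without dimension reduction for gender and marital status).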

An Analysis of the Comparative Importance of Systematic Attributes for Developing an Intelligent Online News Recommendation System: Focusing on the PWYW Payment Model (지능형 온라인 뉴스 추천시스템 개발을 위한 체계적 속성간 상대적 중요성 분석: PWYW 지불모델을 중심으로)

  • Lee, Hyoung-Joo; Chung, Nuree; Yang, Sung-Byung
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.75-100 / 2018
  • Mobile devices have become an important channel for news content consumption in daily life. However, online news readers' resistance to monetization is more serious than in other digital content businesses, such as webtoons, music, video, and games. Since major portal sites distribute online news content free of charge to increase their traffic, customers have become accustomed to free news content, which makes it more difficult for online news providers to switch their business models (i.e., their monetization policies). As a result, most online news providers are highly dependent on the advertising business model, which can lead to an increasing number of false, exaggerated, or sensational advertisements on news websites intended to maximize advertising revenue. To reduce this advertising dependency, many online news providers have attempted to convert their 'free' readers into 'paid' users, but most of them have failed. Recently, however, some online news media have successfully applied the Pay-What-You-Want (PWYW) payment model, which allows readers to voluntarily pay for their favorite news content. These successful cases suggest to managers of online news content providers that the PWYW model can serve as an alternative business model. In this study, therefore, we collected 379 online news articles from Ohmynews.com, which has successfully employed the PWYW model, and analyzed the comparative importance of the systematic attributes of online news content for readers' voluntary payment. More specifically, we derived six systematic attributes (Type of Article Title, Image Stimulation, Article Readability, Article Type, Dominant Emotion, and Article-Image Similarity) and three or four levels within each attribute based on previous studies. We then conducted content analysis to measure five of the attributes; Article Readability was measured by the Flesch readability score. Before the main content analysis, the face reliability of the chosen attributes was assessed by three doctoral-level researchers with 37 sample articles, and the inter-coder reliability of the three coders was verified. The main content analysis was then conducted for two months from March 2017 on the 379 online news articles. All 379 articles were reviewed by the same three coders, and 65 articles that showed inconsistency among coders were excluded before conjoint analysis. Finally, we examined the comparative importance of the six systematic attributes (Study 1) and of the levels within each attribute (Study 2) through conjoint analysis of 314 online news articles. The results of the conjoint analysis show that Article Readability, Article-Image Similarity, and Type of Article Title are the most significant factors affecting online news readers' voluntary payment. First, if the readability of an online news article matches the readers' reading level, readers will voluntarily pay more. Second, similarity between the content of an article and the image within it increases information acceptance and transmits the message of the article more effectively. Third, readers expect the title to reveal the content of the article, and this expectation influences their understanding of and satisfaction with the article. Therefore, it is necessary to write articles at an appropriate readability level and to use images and titles well matched to the content in order to make readers voluntarily pay more. We also examined the comparative importance of the levels within each attribute in more detail. Based on the findings of the two studies, two major and nine minor propositions are suggested for future empirical research. This study has academic implications in that it is one of the first studies applying both content analysis and conjoint analysis to examine readers' voluntary payment behavior rather than their intention to pay. In addition, online news content creators, providers, and managers can find practical insights in this research regarding how they should produce news content to make readers voluntarily pay more for it.
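
For readers unfamiliar with conjoint analysis, the sketch below illustrates one conventional way to estimate attribute-level part-worths and derive relative attribute importance: an ordinary least squares regression of the voluntary payment on dummy-coded attribute levels. The file coded_articles.csv, the column names, and the binning of readability into levels are hypothetical; the paper's exact conjoint procedure is not reproduced here, so this is a generic illustration under stated assumptions.

```python
# A rough sketch of rating-based conjoint analysis via dummy-coded OLS.
# Input file, column names, and level coding are assumptions for illustration.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical input: one row per article with its coded attribute levels
# and the voluntary payment it received.
articles = pd.read_csv("coded_articles.csv")

attributes = ["title_type", "image_stimulation", "readability_level",
              "article_type", "dominant_emotion", "article_image_similarity"]

# Part-worth utilities: regress payment on categorical (dummy-coded) attributes.
formula = "payment ~ " + " + ".join(f"C({a})" for a in attributes)
fit = smf.ols(formula, data=articles).fit()

# Relative importance of an attribute = range of its part-worths
# divided by the sum of ranges over all attributes.
ranges = {}
for a in attributes:
    # 0.0 stands for the reference (baseline) level of each attribute.
    worths = [0.0] + [v for k, v in fit.params.items() if k.startswith(f"C({a})")]
    ranges[a] = max(worths) - min(worths)

total = sum(ranges.values())
for a, r in sorted(ranges.items(), key=lambda kv: -kv[1]):
    print(f"{a}: {100 * r / total:.1f}% relative importance")
```

Attributes whose part-worths span a wider range account for a larger share of the variation in payment, which is how comparative importance is typically ranked in a conjoint study.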

Twitter Issue Tracking System by Topic Modeling Techniques (토픽 모델링을 이용한 트위터 이슈 트래킹 시스템)

  • Bae, Jung-Hwan; Han, Nam-Gi; Song, Min
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.109-122 / 2014
  • People nowadays create a tremendous amount of data on Social Network Services (SNS). In particular, the incorporation of SNS into mobile devices has resulted in massive amounts of data generation, greatly influencing society. This is an unmatched phenomenon in history, and we now live in the Age of Big Data. SNS data satisfies the defining conditions of Big Data: the amount of data (volume), the data input and output speed (velocity), and the variety of data types (variety). Issue trends discovered in SNS Big Data can be used as an important new source of value creation because this information reflects society as a whole. In this study, a Twitter Issue Tracking System (TITS) is designed and built to meet the need for analyzing SNS Big Data. TITS extracts issues from Twitter texts and visualizes them on the web. The proposed system provides the following four functions: (1) provide the topic keyword set corresponding to the daily ranking; (2) visualize the daily time-series graph of a topic over a month; (3) present the importance of a topic through a treemap based on its score and frequency; and (4) visualize the daily time-series graph of a keyword via keyword search. The present study analyzes the Big Data generated by SNS in real time. SNS Big Data analysis requires various natural language processing techniques, including stop word removal and noun extraction, to process unrefined, unstructured data of various forms. In addition, such analysis requires the latest big data technology to rapidly process a large amount of real-time data, such as the Hadoop distributed system or NoSQL, an alternative to relational databases. We built TITS on Hadoop to optimize big data processing because Hadoop is designed to scale from single-node computing to thousands of machines. Furthermore, we use MongoDB, which is classified as a NoSQL, open source, document-oriented database that provides high performance, high availability, and automatic scaling. Unlike existing relational databases, MongoDB has no schemas or tables, and its primary goals are data accessibility and data processing performance. In the Age of Big Data, visualization is attractive to the Big Data community because it helps analysts examine data easily and clearly. Therefore, TITS uses the d3.js library as a visualization tool. This library is designed for creating Data-Driven Documents that bind the document object model (DOM) to data; such data-driven interaction is useful for managing real-time data streams with smooth animation. In addition, TITS uses Bootstrap, which consists of pre-configured style sheets and JavaScript plug-ins, to build the web system. The TITS Graphical User Interface (GUI) is designed with these libraries and is capable of detecting issues on Twitter in an easy and intuitive manner. The proposed work demonstrates the superiority of our issue detection techniques by matching detected issues with corresponding online news articles. The contributions of the present study are threefold. First, we suggest an alternative approach to real-time big data analysis, which has become an extremely important issue. Second, we apply a topic modeling technique used in various research areas, including Library and Information Science (LIS), and on this basis confirm the utility of storytelling and time-series analysis. Third, we develop a web-based system and make it available for the real-time discovery of topics. The present study conducted experiments with nearly 150 million tweets generated in Korea during March 2013.
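
A rough sketch of the daily topic-keyword extraction step that a system like TITS might run is shown below, using scikit-learn's LDA implementation and pymongo for storage. The actual system is built on Hadoop, MongoDB, Bootstrap, and d3.js; the specific topic model, topic count, file name, and collection and field names here are assumptions for illustration, not the authors' code.

```python
# A minimal sketch of daily topic-keyword extraction for a TITS-like pipeline.
# Library choices, topic count, file name, and MongoDB names are illustrative
# assumptions; the paper's own implementation runs on Hadoop/MongoDB.
from pymongo import MongoClient
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical input: one day's preprocessed tweets (stop words removed,
# nouns extracted), one tweet per line.
with open("tweets_2013-03-01.txt", encoding="utf-8") as f:
    tweets = [line.strip() for line in f if line.strip()]

# Build a document-term matrix and fit an LDA topic model.
vectorizer = CountVectorizer(max_features=10000)
dtm = vectorizer.fit_transform(tweets)
lda = LatentDirichletAllocation(n_components=10, random_state=0)  # assumed topic count
lda.fit(dtm)

# Extract the top keywords per topic, ranked by topic-word weight.
terms = vectorizer.get_feature_names_out()
topics = []
for idx, weights in enumerate(lda.components_):
    keywords = [terms[i] for i in weights.argsort()[::-1][:10]]
    topics.append({"date": "2013-03-01", "topic": idx, "keywords": keywords})

# Store the daily topic keyword sets in MongoDB for the web front end.
collection = MongoClient()["tits"]["daily_topics"]  # assumes a local mongod
collection.insert_many(topics)
for t in topics:
    print(t["topic"], t["keywords"])
```

In a system like TITS, such a job could run once per day, and the stored keyword sets would feed the daily ranking, treemap, and time-series views described in the abstract.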