• Title/Summary/Keyword: Internet data


Teacher's Practice of Activity Materials in the Housing Area of Middle School Technology & Home Economics Textbook (중학교 교사의 기술.가정 주생활영역 활동자료 활용실태)

  • Lee, Young-Doo;Cho, Jea-Soon
    • Journal of Korean Home Economics Education Association
    • /
    • v.20 no.4
    • /
    • pp.157-171
    • /
    • 2008
  • The 2007 Reformed Curriculum encourages the use of various activity materials in textbooks to facilitate student-oriented, self-directed learning. The purpose of this paper is to find out how much the activity materials in the housing area of middle school Technology and Home Economics textbooks are used in class and why they are or are not used. The data were collected from 253 middle school teachers who had taught the housing unit from any of the 6 textbooks. The analyses indicated that the most frequent teaching method was lecture based on the textbook and Internet data focused on the figures and contents of the individual textbook. The average rate of using the activity materials differed by textbook and by the characteristics of the materials, such as the type of material, the features of non-sentence materials, and the type of activity. The two main reasons for using the activity materials were their adequacy to class goals and their applicability to everyday life. Low student interest and shortage of time were the two main reasons for not using them. Textbook writers should consider these reasons, as well as the characteristics of the activity materials teachers actually use in class, in order to meet the goals of both the reformed and the current curricula.


A Methodology for Automatic Multi-Categorization of Single-Categorized Documents (단일 카테고리 문서의 다중 카테고리 자동확장 방법론)

  • Hong, Jin-Sung;Kim, Namgyu;Lee, Sangwon
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.77-92
    • /
    • 2014
  • Recently, numerous documents including unstructured data and text have been created due to the rapid increase in the usage of social media and the Internet. Each document is usually provided with a specific category for the convenience of users. In the past, categorization was performed manually; however, manual categorization not only fails to guarantee accuracy but also requires a large amount of time and incurs huge costs. Many studies have been conducted toward the automatic creation of categories to overcome the limitations of manual categorization. Unfortunately, most of these methods cannot be applied to categorizing complex documents with multiple topics, because they assume that one document can be categorized into one category only. In order to overcome this limitation, some studies have attempted to categorize each document into multiple categories. However, they are also limited in that their learning process requires training on a multi-categorized document set. These methods therefore cannot be applied to multi-categorization of most documents unless multi-categorized training sets are provided. To overcome the requirement of a multi-categorized training set imposed by traditional multi-categorization algorithms, we propose a new methodology that can extend the category of a single-categorized document to multiple categories by analyzing the relationships among categories, topics, and documents. First, we attempt to find the relationship between documents and topics by using the result of topic analysis for single-categorized documents. Second, we construct a correspondence table between topics and categories by investigating the relationship between them. Finally, we calculate the matching score of each document to multiple categories. A document is then classified into a certain category if and only if its matching score is higher than a predefined threshold; for example, a document is classified into the three categories whose matching scores exceed the threshold. The main contribution of our study is that our methodology can improve the applicability of traditional multi-category classifiers by generating multi-categorized documents from single-categorized documents. Additionally, we propose a module for verifying the accuracy of the proposed methodology. For performance evaluation, we performed intensive experiments with news articles. News articles are clearly categorized by theme and contain less vulgar language and slang than other typical text documents. We collected news articles from July 2012 to June 2013. The number of articles varies widely across categories, both because readers have different levels of interest in each category and because events occur with different frequencies in each category. In order to minimize the distortion caused by the different numbers of articles per category, we extracted 3,000 articles equally from each of the eight categories, so the total number of articles used in our experiments was 24,000. The eight categories were "IT Science," "Economy," "Society," "Life and Culture," "World," "Sports," "Entertainment," and "Politics." Using the collected news articles, we calculated the document/category correspondence scores by utilizing the topic/category and document/topic correspondence scores. The document/category correspondence score indicates the degree to which each document corresponds to a certain category. As a result, we could present two additional categories for each of the 23,089 documents. Precision, recall, and F-score were 0.605, 0.629, and 0.617, respectively, when only the top 1 predicted category was evaluated, and 0.838, 0.290, and 0.431 when the top 1-3 predicted categories were considered. Interestingly, precision, recall, and F-score varied considerably across the eight categories.
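The three steps above (topic analysis of single-labeled documents, a topic/category correspondence table, thresholded document/category matching scores) can be sketched compactly. The following is a minimal illustration under stated assumptions, not the authors' code: scikit-learn's LDA stands in for the unspecified topic model, and the corpus, labels, and threshold are invented for demonstration.

```python
# A minimal sketch of the single-to-multi category expansion idea.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["stock market rallies on strong tech earnings",
        "team wins the championship final in overtime",
        "new smartphone chip doubles battery life"]
labels = ["Economy", "Sports", "IT Science"]       # one category per document

# Step 1: document/topic relationships from topic analysis
X = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=3, random_state=0)
doc_topic = lda.fit_transform(X)                   # each row sums to 1

# Step 2: topic/category correspondence table, built from the single labels
categories = sorted(set(labels))
topic_cat = np.zeros((doc_topic.shape[1], len(categories)))
for d, lab in enumerate(labels):
    topic_cat[:, categories.index(lab)] += doc_topic[d]
topic_cat /= topic_cat.sum(axis=0, keepdims=True)  # normalize per category

# Step 3: document/category matching scores; keep categories above threshold
scores = doc_topic @ topic_cat
THRESHOLD = 0.3                                    # assumed; tuned in practice
for d, doc in enumerate(docs):
    keep = [categories[c] for c in np.argsort(-scores[d])
            if scores[d, c] > THRESHOLD]
    print(doc, "->", keep)
```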

Predictive Clustering-based Collaborative Filtering Technique for Performance-Stability of Recommendation System (추천 시스템의 성능 안정성을 위한 예측적 군집화 기반 협업 필터링 기법)

  • Lee, O-Joun;You, Eun-Soon
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.1
    • /
    • pp.119-142
    • /
    • 2015
  • With the explosive growth in the volume of information, Internet users are experiencing considerable difficulty in obtaining the information they need online. Against this backdrop, ever-greater importance is being placed on recommender systems that provide information catered to user preferences and tastes in an attempt to address information overload. To this end, a number of techniques have been proposed, including content-based filtering (CBF), demographic filtering (DF), and collaborative filtering (CF). Among them, CBF and DF require external information and thus cannot be applied to a variety of domains. CF, on the other hand, is widely used since it is relatively free from the domain constraint. The CF technique is broadly classified into memory-based CF, model-based CF, and hybrid CF. Model-based CF addresses the drawbacks of CF by employing a Bayesian model, clustering model, or dependency network model. This filtering technique not only alleviates the sparsity and scalability issues but also boosts predictive performance. However, it involves expensive model-building and results in a tradeoff between performance and scalability. This tradeoff is attributed to reduced coverage, which is a type of sparsity issue. In addition, expensive model-building may lead to performance instability, since changes in the domain environment cannot be immediately incorporated into the model due to the high costs involved. Cumulative changes in the domain environment that fail to be reflected eventually undermine system performance. This study incorporates a Markov model of transition probabilities and the concept of fuzzy clustering into CBCF to propose predictive clustering-based CF (PCCF), which addresses the issues of reduced coverage and unstable performance. The method alleviates performance instability by tracking changes in user preferences and bridging the gap between the static model and dynamic users. Furthermore, the issue of reduced coverage is mitigated by expanding the coverage based on transition probabilities and clustering probabilities. The proposed method consists of four processes. First, user preferences are normalized in preference clustering. Second, changes in user preferences are detected from review score entries during preference transition detection. Third, user propensities are normalized using patterns of changes (propensities) in user preferences in propensity clustering. Lastly, the preference prediction model is developed to predict user preferences for items during preference prediction. The proposed method has been validated by testing its robustness against performance instability and the scalability-performance tradeoff. The initial test compared and analyzed the performance of recommender systems based on IBCF, CBCF, ICFEC, and PCCF in an environment where data sparsity had been minimized. The following test adjusted the optimal number of clusters in CBCF, ICFEC, and PCCF for a comparative analysis of subsequent changes in system performance. The test results revealed that the suggested method produced insignificant improvement in performance compared with the existing techniques, and that it also failed to achieve significant improvement in the standard deviation, which indicates the degree of data fluctuation. Notwithstanding, it resulted in marked improvement over the existing techniques in terms of range, which indicates the level of performance fluctuation. The level of performance fluctuation before and after model generation improved by 51.31% in the initial test, and in the following test the level of performance fluctuation driven by changes in the number of clusters improved by 36.05%. This signifies that the proposed method, despite the slight performance improvement, clearly offers better performance stability than the existing techniques. Future research will be directed toward enhancing the recommendation performance, which failed to show significant improvement over the existing techniques, and will consider introducing a high-dimensional parameter-free clustering algorithm or a deep learning-based model.
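As a rough illustration of the two ingredients named above, the sketch below estimates a Markov matrix of transition probabilities between preference clusters from toy rating histories and propagates a user's fuzzy cluster membership through it to anticipate a preference transition. Cluster count, histories, and mean ratings are assumptions for demonstration, not the paper's data or implementation.

```python
# A toy sketch of Markov transitions over fuzzy preference clusters.
import numpy as np

rng = np.random.default_rng(0)
K = 3                                         # preference clusters (assumed)
# Each row: one user's cluster assignment over 12 successive rating periods
seqs = rng.integers(0, K, size=(40, 12))

# Markov transition probabilities between preference clusters
P = np.zeros((K, K))
for seq in seqs:
    for a, b in zip(seq[:-1], seq[1:]):
        P[a, b] += 1
P /= P.sum(axis=1, keepdims=True)

# Fuzzy membership of one user right now (soft weights, summing to 1)
membership = np.array([0.6, 0.3, 0.1])

# Bridge the static model and the drifting user: propagate the membership
# one step through the Markov chain to anticipate the preference transition
next_membership = membership @ P

# Predict a rating as the weighted mix of the clusters' mean ratings
cluster_mean_rating = np.array([4.2, 3.1, 2.5])   # illustrative values
print("predicted rating: %.2f" % (next_membership @ cluster_mean_rating))
```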

An Investigation on Expanding Co-occurrence Criteria in Association Rule Mining (연관규칙 마이닝에서의 동시성 기준 확장에 대한 연구)

  • Kim, Mi-Sung;Kim, Nam-Gyu;Ahn, Jae-Hyeon
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.1
    • /
    • pp.23-38
    • /
    • 2012
  • There is a large difference between purchasing patterns in an online shopping mall and in an offline market, caused mainly by the difference in accessibility between the two. The interval between the initial purchasing decision and its realization tends to be relatively short in an online shopping mall, because a customer can place an order immediately. Because of this short interval, an online shopping mall transaction usually contains fewer items than an offline market transaction. In an offline market, customers usually keep some items in mind and buy them all at once a few days after deciding to buy them, instead of buying each item individually and immediately. By contrast, more than 70% of online shopping mall transactions contain only one item. This statistic implies that traditional data mining techniques cannot be directly applied to online market analysis, because hardly any association rules can survive with an acceptable level of support when there are so many null transactions. Most market basket analyses of online shopping mall transactions, therefore, have been performed by expanding the co-occurrence criterion of traditional association rule mining. While the traditional co-occurrence criterion defines items purchased in one transaction as concurrently purchased items, the expanded criterion regards items purchased by a customer during some predefined period (e.g., a day) as concurrently purchased items. In studies using expanded co-occurrence criteria, however, the criterion has been defined arbitrarily by researchers, without theoretical grounds or agreement. The lack of clear grounds for adopting a certain co-occurrence criterion degrades the reliability of the analytical results. Moreover, it is hard to derive new meaningful findings by combining the outcomes of previous individual studies. In this paper, we compare expanded co-occurrence criteria and propose a guideline for selecting an appropriate one. First of all, we compare the accuracy of association rules discovered according to various co-occurrence criteria. Through this experiment, we expect to provide a guideline for selecting the co-occurrence criterion that corresponds to the purpose of the analysis. Additionally, we perform similar experiments with several groups of customers segmented by each customer's average duration between orders, attempting to discover the relationship between the optimal co-occurrence criterion and the customer's average duration between orders. Finally, through this series of experiments, we expect to provide basic guidelines for developing customized recommendation systems. Our experiments use a real dataset acquired from one of the largest Internet shopping malls in Korea: 66,278 transactions of 3,847 customers conducted during the last two years. Overall results show that the accuracy of association rules for frequent shoppers (whose average duration between orders is relatively short) is higher than that for casual shoppers. In addition, we discover that for frequent shoppers, the accuracy of association rules is very high when the co-occurrence criterion of the training set corresponds to that of the validation set (i.e., target set). This implies that the co-occurrence criterion for frequent shoppers should be set according to the application purpose period. For example, an analyzer should use a day as the co-occurrence criterion if he or she wants to offer a coupon valid for only a day, and a month if he or she wants to publish a coupon book that can be used for a month. In the case of casual shoppers, the accuracy of association rules appears not to be affected by the application purpose period; it simply increases as a longer co-occurrence criterion is adopted. This implies that for casual shoppers an analyzer should set the co-occurrence criterion as long as possible, regardless of the application purpose period.
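The expanded co-occurrence criterion itself is easy to state in code: instead of one order forming one basket, all items a customer buys within a chosen window form a single basket, which is then fed to an ordinary association rule miner. The sketch below is a minimal illustration with invented data and column names; the window lengths mirror the day/month examples above.

```python
# Build baskets under an expanded co-occurrence criterion.
import pandas as pd

orders = pd.DataFrame({
    "customer": [1, 1, 1, 2, 2],
    "item": ["milk", "bread", "butter", "soap", "towel"],
    "ts": pd.to_datetime(["2012-03-01 09:00", "2012-03-01 21:00",
                          "2012-03-20 10:00", "2012-03-02 12:00",
                          "2012-03-02 13:00"]),
})

def baskets(df: pd.DataFrame, freq: str) -> list:
    """Merge each customer's purchases per `freq` period into one basket."""
    grouped = df.groupby(["customer", df["ts"].dt.to_period(freq)])["item"]
    return [set(items) for _, items in grouped]

# Day criterion: {milk, bread}, {butter}, {soap, towel}
print(baskets(orders, "D"))
# Month criterion: {milk, bread, butter}, {soap, towel}
print(baskets(orders, "M"))
```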

Development of Customer Sentiment Pattern Map for Webtoon Content Recommendation (웹툰 콘텐츠 추천을 위한 소비자 감성 패턴 맵 개발)

  • Lee, Junsik;Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.67-88
    • /
    • 2019
  • Webtoon is a Korean-style digital comics platform that distributes comics content produced using the characteristic elements of the Internet in a form that can be consumed online. With the recent rapid growth of the webtoon industry and the exponential increase in the supply of webtoon content, the need for effective webtoon content recommendation is growing. Webtoons are digital content products that combine pictorial, literary, and digital elements; they therefore stimulate consumer sentiment by entertaining readers and making them engage and empathize with the situations they portray. In this context, the sentiment that webtoons evoke in consumers can be expected to serve as an important criterion in consumers' choice of webtoons. However, there has been little research on improving webtoon recommendation performance by utilizing consumer sentiment. This study aims to develop consumer sentiment pattern maps that can support effective recommendation of webtoon content, focusing on consumer sentiments that have not been fully discussed previously. Metadata and consumer sentiment data were collected for 200 works serviced on the Korean webtoon platform 'Naver Webtoon'. 488 sentiment terms were collected for 127 works, excluding those that did not meet the purpose of the analysis. Next, similar or duplicate terms were combined or abstracted following a bottom-up approach. As a result, we built a webtoon-specialized sentiment index reduced to a total of 63 emotive adjectives. By performing exploratory factor analysis on the constructed sentiment index, we derived three important dimensions for classifying webtoon types. The exploratory factor analysis was performed through Principal Component Analysis (PCA) with varimax factor rotation. The three dimensions were named 'Immersion', 'Touch', and 'Irritant'. Based on this, K-Means clustering was performed and the webtoons were classified into four types, named 'Snack', 'Drama', 'Irritant', and 'Romance'. For each type, we drew webtoon-sentiment 2-mode network graphs and examined the characteristics of the sentiment pattern of that type. In addition, through profiling analysis, we derived meaningful strategic implications for each type. First, the 'Snack' cluster is a collection of webtoons that are fast-paced and highly entertaining. Many consumers are interested in these webtoons, but they do not rate them well, and they mostly use simple expressions of sentiment when talking about them. Webtoons belonging to 'Snack' are expected to appeal to modern people who want to consume content easily and quickly during short spells such as commuting time. Second, webtoons belonging to 'Drama' are expected to evoke realistic and everyday sentiments rather than exaggerated, light comic ones. When consumers talk about 'Drama' webtoons online, they express a variety of sentiments. It is appropriate to establish an OSMU (one-source multi-use) strategy to extend these webtoons to other content such as movies and TV series. Third, the sentiment pattern map of 'Irritant' shows sentiments that discourage customer interest by stimulating discomfort; webtoons that evoke these sentiments have a hard time attracting public attention. When creating webtoons, artists should pay attention to these sentiments that cause discomfort to consumers. Finally, webtoons belonging to 'Romance' do not evoke a wide variety of consumer sentiments, but they are interpreted as touching consumers. They are expected to be consumed as 'healing content' targeted at consumers with high levels of stress or mental fatigue. The results of this study are meaningful in that they identify the applicability of consumer sentiment to the recommendation and classification of webtoons, and provide guidelines to help members of the webtoon ecosystem better understand consumers and formulate strategies.
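A condensed sketch of the analysis pipeline (a works-by-sentiment-terms matrix reduced to three varimax-rotated dimensions, then clustered into four types with K-Means) is shown below. The data are random stand-ins, and scikit-learn's FactorAnalysis with varimax rotation is substituted for the PCA-plus-varimax procedure the abstract describes.

```python
# Sketch: rotated factor analysis of sentiment terms, then K-Means typing.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.random((127, 63))      # 127 webtoons x 63 emotive adjectives (stand-in)

# Three rotated dimensions (cf. 'Immersion', 'Touch', 'Irritant')
fa = FactorAnalysis(n_components=3, rotation="varimax", random_state=0)
factors = fa.fit_transform(X)

# Four webtoon types (cf. 'Snack', 'Drama', 'Irritant', 'Romance')
types = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(factors)
print(np.bincount(types))      # number of works assigned to each type
```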

The Analysis for Minimum Infective Dose of Foodborne Disease Pathogens by Meta-analysis (메타분석에 의한 식중독 원인 미생물들의 최소감염량 분석)

  • Park, Myoung Su;Cho, June Ill;Lee, Soon Ho;Bahk, Gyung Jin
    • Journal of Food Hygiene and Safety
    • /
    • v.29 no.4
    • /
    • pp.305-311
    • /
    • 2014
  • Minimum infective dose (MID) data are recognized as important and absolutely necessary in quantitative microbiological risk assessment (QMRA). In this study, we performed a comprehensive literature review and meta-analysis to better quantify this association. The meta-analysis applied a final selection of 82 published papers covering a total of 12 foodborne disease pathogens (9 bacteria, 2 viruses, and 1 parasite), identified and classified based on the dose-response models of QMRA studies retrieved from the PubMed and ScienceDirect databases and Internet websites for 1980-2012. The main search keywords were combinations of "food", "foodborne disease pathogen", "minimum infective dose", and "quantitative microbiological risk assessment". The appropriate minimum infective doses for B. cereus, C. jejuni, Cl. perfringens, pathogenic E. coli (EHEC, ETEC, EPEC, EIEC), L. monocytogenes, Salmonella spp., Shigella spp., S. aureus, V. parahaemolyticus, Hepatitis A virus, Norovirus, and C. parvum were 10^5 cells/g (fi = 0.32), 500 cells/g (fi = 0.57), 10^7 cells/g (fi = 0.56), 10 cells/g (fi = 0.47) / 10^8 cells/g (fi = 0.71) / 10^6 cells/g (fi = 0.70) / 10^6 cells/g (fi = 0.60), 10^2~10^3 cells/g (fi = 0.23), 10 cells/g (fi = 0.30), 100 cells/g (fi = 0.32), 10^5 cells/g (fi = 0.45), 10^6 cells/g (fi = 0.64), 10~10^2 particles/g (fi = 0.33), 10 particles/g (fi = 0.71), and 10~10^2 oocysts/g (fi = 0.33), respectively. Therefore, these results provide the preliminary data necessary for the development of QMRA for foodborne pathogens.

An Introduction of Korean Soil Information System (한국 토양정보시스템 소개)

  • Hong, S. Young;Zhang, Yong-Seon;Hyun, Byung-Keun;Sonn, Yeon-Kyu;Kim, Yi-Hyun;Jung, Sug-Jae;Park, Chan-Won;Song, Kwan-Cheol;Jang, Byoung-Choon;Choe, Eun-Young;Lee, Ye-Jin;Ha, Sang-Keun;Kim, Myung-Suk;Lee, Jong-Sik;Jung, Goo-Bok;Ko, Byong-Gu;Kim, Gun-Yeob
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.42 no.1
    • /
    • pp.21-28
    • /
    • 2009
  • Detailed information on soil characteristics is of great importance for the use and conservation of soil resources, which are essential for human welfare and ecosystem sustainability. This paper introduces the soil inventory of Korea, focusing on the establishment of the national soil database, information systems, their use, and future directions for natural resources management. Soil maps of different scales and soil test data collected by the RDA (Rural Development Administration) were computerized to construct digital soil maps and databases. Soil chemical properties and heavy metal concentrations in agricultural soils, including vulnerable agricultural soils, were investigated regularly at fixed sampling points. Internet-based information systems for soil and agro-environmental resources were developed under the 'National Soil Survey Projects' to manage soil resources and provide soil information to the public; a system based on the 'Agro-environmental Change Monitoring Project', which monitors spatial and temporal changes in the agricultural environment, will be opened soon. Soil data have great potential for further application, such as estimating soil carbon storage, water capacity, and soil loss. Digital mapping of soil and environment using state-of-the-art and emerging technologies, grounded in pedometrics, will set the future direction.

Permanent Preservation and Use of Historical Archives: Preservation Strategy and Digitization (역사기록물(Archives)의 항구적인 보존화 이용 : 보존전략과 디지털정보화)

  • Lee, Sang-min
    • The Korean Journal of Archival Studies
    • /
    • no.1
    • /
    • pp.23-76
    • /
    • 2000
  • In this paper, I examine what has been researched and determined about preservation strategy and the selection of preservation media in the Western archival community. Archivists have primarily been concerned with the 'preservation' and 'use' of archival materials worthy of being preserved permanently. In the new information era, the preservation and use of archival materials face new challenges. The life expectancy of paper records has been shortened by the acidification and brittleness of modern papers, and the emergence of information technology affects the traditional ways of preserving and using archival materials. User expectations are becoming so technology-oriented and so complicated that archivists must act like information managers using computer technology rather than practitioners of traditional archival handicraft. Preservation strategy plays an important role in archival management as well as information management; for cost-effective management of archives and archival institutions, a preservation strategy is a must. The preservation strategy encompasses all aspects of the archival preservation process and its practices, from selection of archives, appraisal, inventorying, arrangement, description, conservation, and microfilming or digitization to archival buildings and access services. These archival functions should be considered in relation to each other to ensure the proper preservation of archival materials. In an integrated preservation strategy, 'preservation' and 'use' should be combined and fulfilled without sacrificing either. Preservation strategy planning is essential for archives to determine policies that keep their holdings safe and provide people with maximum access in the most effective ways. Preservation microfilming ensures the permanent preservation of the information held in important archival materials. To this end, detailed standards have been developed to guarantee the permanence of microfilm as well as its product quality. Silver gelatin film can last up to 500 years in an optimum storage environment and is the most viable option for a permanent preservation medium. ISO and ANSI developed such standards for the quality of microfilms and microfilming technology, and preservation microfilming guidelines were also developed to ensure effective archival management and the picture quality of microfilms. It is essential to assess the need for preservation microfilming: limited resources always put a restraint on preservation management, so appraisal (and selection) of what is to be preserved is the most important part of preservation microfilming. In addition, microfilms of standard quality can be scanned to produce quality digital images for instant use through the Internet. As information technology develops, archivists have begun to utilize it to make preservation easier and more economical, and to promote the use of archival materials through computer communication networks. Digitization was introduced to provide easy and universal access to unique archives, and its large capacity for preserving archival data seems very promising. However, digitization, i.e., transferring images of records to electronic codes, still needs to be standardized. Digitized data are electronic records, and at present electronic records are very unstable and cannot be preserved permanently. Digital media, including optical disk materials, have not been proven reliable for permanent preservation. Because of their chemical coating and their physical character of using light, they are not stable and can be preserved for at best 100 years in an optimum storage environment; most CD-Rs can last only 20 years. Furthermore, the obsolescence of hardware and software makes it hard to reproduce digital images made with earlier versions. Even when reformatting is possible, the cost of refreshing or upgrading digital images is very high, and the process has to be repeated at least every five to ten years. No standard addressing this obsolescence of hardware and software has yet come into being. In short, digital permanence is not a fact; it remains an uncertain possibility. Archivists must weigh in their preservation planning both the risks and the promising possibilities of introducing new technology. In planning the digitization of historical materials, archivists should incorporate plans for maintaining the digitized images and reformatting them for coming generations of new applications. Without such comprehensive planning, future use of the expensive digital images will become impossible; that is a loss of information, and a final failure of both the 'preservation' and the 'use' of archival materials. As Peter Adelstein said, it is wise to be conservative when considerations of conservation are involved.

Design and Implementation of a Web Application Firewall with Multi-layered Web Filter (다중 계층 웹 필터를 사용하는 웹 애플리케이션 방화벽의 설계 및 구현)

  • Jang, Sung-Min;Won, Yoo-Hun
    • Journal of the Korea Society of Computer and Information
    • /
    • v.14 no.12
    • /
    • pp.157-167
    • /
    • 2009
  • Recently, leakage of confidential and personal information has been taking place on the Internet more frequently than ever before. Most such online security incidents are caused by attacks on vulnerabilities in carelessly developed web applications. It is impossible to detect an attack on a web application with existing firewalls and intrusion detection systems, and signature-based detection has limited capability in detecting new threats. Therefore, much research on detecting attacks on web applications employs anomaly-based detection methods that rely on web traffic analysis. Research on anomaly-based detection through normal web traffic analysis focuses on three problems: how to accurately analyze the given web traffic, the system performance needed to inspect the application payload of packets in order to detect application-layer attacks, and the maintenance and cost of the many newly installed network security devices. The UTM (Unified Threat Management) system, a suggested solution to these problems, aimed to resolve all security problems at once, but it is not widely used due to its low efficiency and high cost. Moreover, the web filter, which performs one of the functions of the UTM system, cannot adequately detect the variety of recent sophisticated attacks on web applications. In order to resolve these problems, studies on the web application firewall are being carried out to introduce a new kind of network security system. Since such studies focus on speeding up packet processing by depending on high-priced hardware, the cost of deploying a web application firewall is rising. In addition, current anomaly-based detection technologies that do not take the characteristics of the web application into account cause many false positives and false negatives. In order to reduce false positives and false negatives, this study suggests a real-time anomaly detection method based on analyzing the length of the parameter values contained in web clients' requests. In addition, it designs and suggests a WAF (Web Application Firewall) that can be applied to a low-priced or legacy system to process application data without the help of dedicated hardware, and it suggests a method to resolve the sluggish performance caused by copying packets into the application area for application data processing. Consequently, this study makes it possible to deploy an effective web application firewall at low cost, at a time when deploying yet another security system is considered burdensome given the many network security systems already in use.
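The core detection idea (profiling the typical length of each request parameter's value and flagging outliers) can be sketched briefly. The following is a minimal illustration under assumptions: the per-(path, parameter) profile, the 3-sigma rule, and the example traffic are invented, not the paper's exact model.

```python
# Sketch: length-based anomaly detection for web request parameters.
from statistics import mean, stdev
from urllib.parse import urlparse, parse_qsl

profile = {}   # (path, parameter name) -> observed value lengths

def train(url):
    """Record value lengths from a request assumed to be normal traffic."""
    p = urlparse(url)
    for name, value in parse_qsl(p.query):
        profile.setdefault((p.path, name), []).append(len(value))

def is_anomalous(url, k=3.0):
    """Flag a request whose parameter value length is far off the profile."""
    p = urlparse(url)
    for name, value in parse_qsl(p.query):
        lengths = profile.get((p.path, name), [])
        if len(lengths) < 2:
            continue                    # not enough normal traffic observed
        m, s = mean(lengths), stdev(lengths)
        if abs(len(value) - m) > k * max(s, 1.0):
            return True                 # length far outside the learned profile
    return False

for u in ["http://shop/login?user=alice", "http://shop/login?user=bob"] * 20:
    train(u)
print(is_anomalous("http://shop/login?user=" + "A" * 500))   # True
```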

Simultaneous Effect between eWOM and Revenues: Korea Movie Industry (온라인 구전과 영화 매출 간 상호영향에 관한 연구: 한국 영화 산업을 중심으로)

  • Bae, Jungho;Shim, Bum Jun;Kim, Byung-Do
    • Asia Marketing Journal
    • /
    • v.12 no.2
    • /
    • pp.1-25
    • /
    • 2010
  • Motion pictures are such typical experience goods that consumers tend to look for more credible information about them. Hence, movie audiences consider movie viewers' reviews more important than the information provided by the film distributor. Recently, many portal sites allow consumers to post their reviews and opinions, so that other people can check the number of consumer reviews and their scores before going to the theater. A few previous studies have examined the electronic word-of-mouth (eWOM) effect in the movie industry; they found that the volume of eWOM significantly influenced movie revenue but that the valence of eWOM did not affect it much (Liu 2006). The goal of our research is likewise to investigate eWOM effects in general, but our research differs from the previous studies in several respects. First, we study the eWOM effect in the Korean movie industry; in other words, we check whether the results of the previous research generalize across countries. Similar econometric models are applied to Korean movie data comprising 746,282 consumer reviews on 439 movies. Our results show that both the valence (RATING) and the volume (LNMSG) of eWOM influence weekly movie revenues. This differs from the previous finding that only the volume influences revenue. We conjecture that the difference in self-construal between Asian and American culture may explain this difference (Kitayama 1991): Asians, including Koreans, have a more interdependent self-construal than Americans, so they are more easily affected by other people's thoughts and suggestions. Hence, the valence of eWOM affects Koreans' choice of movie. Second, we identify a critical defect of previous eWOM models and attempt to correct it. The previous eWOM model assumes that the volume of eWOM (LNMSG) is an independent variable affecting movie revenue (LNREV). However, revenue can also influence the volume of eWOM, so treating the volume of eWOM as an independent variable a priori is too restrictive. To remedy this problem, we employ a simultaneous equation model in which movie revenue and the volume of eWOM can affect each other. That is, our eWOM model assumes that revenue (LNREV) and the volume of eWOM (LNMSG) have an endogenous relationship in which they influence each other. The results from this simultaneous equation model show that movie revenue and eWOM volume interact with each other. Movie revenue influences eWOM volume for the entire 8 weeks. The reverse effect is more complex: both the volume and the valence of eWOM affect revenue in the first week, but only the valence affects revenue for the rest of the weeks. In the first week, consumers may be curious about the movie and look for various kinds of information they can trust, so they use both the quantity and the quality of consumer reviews. From the second week on, only the quality of eWOM affects movie revenue, implying that review ratings are more important than the number of reviews. Third, our results show that ratings by professional critics (CRATING) had a negative effect on weekly movie revenue (LNREV). Professional critics often give low ratings to blockbuster movies that lack cinematic quality, and experienced audiences who watch movies for fun do not trust the professionals' ratings; hence, they tend to choose movies rated low by the critics. In summary, applying a simultaneous model to Korean movie rating data, our results differ from previous eWOM studies: 1) Koreans (or Asians) care about the quality of others' evaluations more than their quantity; 2) the volume of eWOM is not the cause but the result of revenue; 3) professional reviews can have a negative effect on movie revenue.
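To see why the simultaneous-equation treatment matters, the sketch below simulates a system in which revenue and eWOM volume cause each other: naive OLS of revenue on volume is then biased, while two-stage least squares with an instrument recovers the true effect. The instrument choice and coefficients are purely illustrative assumptions, not the paper's specification.

```python
# Sketch: simultaneity bias and a 2SLS correction on simulated eWOM data.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
screens = rng.normal(size=n)     # exogenous driver of revenue
buzz = rng.normal(size=n)        # instrument: shifts eWOM volume only
e1, e2 = rng.normal(size=n), rng.normal(size=n)

# Simulated structural system (true eWOM effect on revenue = 0.5):
#   lnrev = 0.5*lnmsg + screens + e1
#   lnmsg = 0.4*lnrev + buzz + e2
lnrev = (screens + 0.5 * buzz + e1 + 0.5 * e2) / (1 - 0.5 * 0.4)
lnmsg = 0.4 * lnrev + buzz + e2

def ols(y, X):
    return np.linalg.lstsq(X, y, rcond=None)[0]

ones = np.ones(n)
# Naive OLS treats lnmsg as exogenous and overstates the eWOM effect
b_ols = ols(lnrev, np.column_stack([ones, lnmsg, screens]))

# 2SLS: stage 1 predicts lnmsg from the instrument, stage 2 uses the fit
stage1 = np.column_stack([ones, buzz, screens])
lnmsg_hat = stage1 @ ols(lnmsg, stage1)
b_2sls = ols(lnrev, np.column_stack([ones, lnmsg_hat, screens]))

print("OLS: %.2f  2SLS: %.2f  (true effect 0.5)" % (b_ols[1], b_2sls[1]))
```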
