Title/Summary/Keyword: Decision system analysis


Predicting the Direction of the Stock Index by Using a Domain-Specific Sentiment Dictionary (주가지수 방향성 예측을 위한 주제지향 감성사전 구축 방안)

  • Yu, Eunji;Kim, Yoosin;Kim, Namgyu;Jeong, Seung Ryul
    • Journal of Intelligence and Information Systems, v.19 no.1, pp.95-110, 2013
  • Recently, the amount of unstructured data being generated through a variety of social media has been increasing rapidly, resulting in a growing need to collect, store, search for, analyze, and visualize this data. Such data cannot be handled appropriately by the traditional methodologies used for structured data because of its vast volume and unstructured nature. In this situation, many attempts are being made to analyze unstructured data such as text files and log files through various commercial or noncommercial analytical tools. Among the various contemporary issues dealt with in the literature on unstructured text data analysis, the concepts and techniques of opinion mining have been attracting much attention from pioneer researchers and business practitioners. Opinion mining, or sentiment analysis, refers to a series of processes that analyze participants' opinions, sentiments, evaluations, attitudes, and emotions about selected products, services, organizations, social issues, and so on. In other words, many attempts based on various opinion mining techniques are being made to resolve complicated issues that could not otherwise be solved by existing traditional approaches. One of the most representative attempts using opinion mining may be the recent research that proposed an intelligent model for predicting the direction of the stock index. This model works mainly on the basis of opinions extracted from an overwhelming number of economic news reports. News content published in various media is a traditional example of unstructured text data. Every day, a large volume of new content is created, digitized, and distributed via online or offline channels. Many studies have revealed that we make better decisions on political, economic, and social issues by analyzing news and other related information. In this sense, we expect to predict the fluctuation of stock markets partly by analyzing the relationship between economic news reports and the pattern of stock prices. So far, in the literature on opinion mining, most studies including ours have utilized a sentiment dictionary to elicit sentiment polarity or sentiment values from a large number of documents. A sentiment dictionary consists of pairs of selected words and their sentiment values. Sentiment classifiers refer to the dictionary to formulate the sentiment polarity of words, of sentences in a document, and of the whole document. However, most traditional approaches share a common limitation: they do not consider the flexibility of sentiment polarity; that is, the sentiment polarity or sentiment value of a word is fixed and cannot be changed in a traditional sentiment dictionary. In the real world, however, the sentiment polarity of a word can vary depending on the time, situation, and purpose of the analysis, and can even be contradictory in nature. This flexibility of sentiment polarity motivated us to conduct this study. In this paper, we argue that sentiment polarity should be assigned not merely on the basis of the inherent meaning of a word but on the basis of its ad hoc meaning within a particular context. To implement our idea, we present an intelligent investment decision-support model based on opinion mining that performs the scraping and parsing of massive volumes of economic news on the web, tags sentiment words, classifies the sentiment polarity of the news, and finally predicts the direction of the next day's stock index.
In addition, we applied a domain-specific sentiment dictionary instead of a general-purpose one to classify each piece of news as either positive or negative, as sketched below. For the purpose of performance evaluation, we performed intensive experiments and investigated the prediction accuracy of our model. For the experiments to predict the direction of the stock index, we gathered and analyzed 1,072 articles about stock markets published by "M" and "E" media between July 2011 and September 2011.
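
A minimal sketch of the dictionary-based polarity classification described above, assuming a toy domain-specific lexicon; the word list, sentiment values, and threshold are illustrative assumptions, not the authors' actual dictionary:

```python
# Toy domain-specific sentiment dictionary: word -> sentiment value in a
# stock-market context. Entries are illustrative assumptions.
DOMAIN_SENTIMENT = {
    "rally": 1.0, "surge": 1.0, "upgrade": 0.5,
    "plunge": -1.0, "default": -1.0, "downgrade": -0.5,
}

def score_news(tokens):
    """Sum dictionary values over the tagged sentiment words of one article."""
    return sum(DOMAIN_SENTIMENT.get(t.lower(), 0.0) for t in tokens)

def classify(tokens, threshold=0.0):
    """Label an article positive or negative by its aggregate score."""
    return "positive" if score_news(tokens) > threshold else "negative"

# Direction forecast: the majority polarity of today's news is taken as the
# prediction for the next day's index direction.
articles = [["Exporters", "rally", "on", "weak", "won"],
            ["Builder", "faces", "default", "risk"]]
labels = [classify(a) for a in articles]
print(labels, "up" if labels.count("positive") > labels.count("negative") else "down")
```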

Analysis of Metadata Standards of Record Management for Metadata Interoperability From the viewpoint of the Task model and 5W1H (메타데이터 상호운용성을 위한 기록관리 메타데이터 표준 분석 5W1H와 태스크 모델의 관점에서)

  • Baek, Jae-Eun;Sugimoto, Shigeo
    • The Korean Journal of Archival Studies, no.32, pp.127-176, 2012
  • Metadata is well recognized as one of the foundational factors in archiving and the long-term preservation of digital resources. There are several metadata standards for records management, archives, and preservation, e.g., ISAD(G), EAD, AGRkMS, PREMIS, and OAIS. Careful consideration is important in selecting appropriate metadata standards in order to design metadata schemas that meet the requirements of a particular archival system, and interoperability of metadata with other systems should also be considered in schema design. In our previous research, we presented a feature analysis of metadata standards by identifying the primary resource lifecycle stages where each standard is applied, and we clarified that no single metadata standard can cover the whole records lifecycle for archiving and preservation. Through this feature analysis, we analyzed the features of metadata across the whole records lifecycle and clarified the relationships between the metadata standards and the stages of the lifecycle; more detailed analysis was left for future study. This paper proposes to analyze the metadata schemas from the viewpoint of the tasks performed in the lifecycle. Metadata schemas are primarily defined to describe properties of a resource in accordance with the purposes of description, e.g., finding aids, records management, preservation, and so forth. In other words, the metadata standards are resource- and purpose-centric, and the resource lifecycle is not explicitly reflected in the standards. There are no systematic methods for mapping between different metadata standards in accordance with the lifecycle. This paper proposes a method for mapping between metadata standards based on the tasks contained in the resource lifecycle. We first propose a Task Model to clarify the tasks applied to resources in each stage of the lifecycle. This model is created as a task-centric model to identify features of metadata standards and to create mappings among elements of those standards. It is important to categorize the elements in order to limit the semantic scope of mapping among elements and to decrease the number of combinations of elements for mapping. This paper proposes to use the 5W1H (Who, What, Why, When, Where, How) model to categorize the elements. 5W1H categories are generally used for describing events, e.g., in news articles. As performing a task on a resource causes an event, and metadata elements are used in that event, we consider the 5W1H categories adequate for categorizing the elements. Using these categories, we determine the features of every element of the metadata standards AGLS, AGRkMS, PREMIS, EAD, and OAIS, plus an attribute set extracted from the DPC decision flow. Then, we perform the element mapping between the standards and find the relationships between them. In this study, we defined a set of terms for each of the 5W1H categories, terms which typically appear in the definitions of elements, and used those terms to categorize the elements, as sketched below. For example, if the definition of an element includes terms such as person and organization, which denote an agent that contributes to creating or modifying a resource, the element is categorized into the Who category. A single element can be categorized into one or more 5W1H categories. Thus, we categorized every element of the metadata standards using the 5W1H model, and then carried out mapping among the elements in each category.
We conclude that the Task Model provides a new viewpoint for metadata schemas and is useful in helping us understand the features of metadata standards for records management and archives. The 5W1H model, which is defined based on the Task Model, provides us with a core set of categories to semantically classify metadata elements from the viewpoint of an event caused by a task.
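
A minimal sketch of the term-based 5W1H categorization described above; the category term sets and the sample element definition are illustrative assumptions rather than the paper's actual term lists:

```python
# Illustrative term sets for each 5W1H category (assumed, not the paper's).
CATEGORY_TERMS = {
    "Who":   {"person", "organization", "agent", "creator"},
    "What":  {"resource", "record", "content", "format"},
    "When":  {"date", "time", "period"},
    "Where": {"location", "place", "repository"},
    "Why":   {"purpose", "reason", "mandate"},
    "How":   {"method", "process", "procedure"},
}

def categorize(definition):
    """Assign a metadata element to every 5W1H category whose terms appear
    in its definition; a single element may fall into several categories."""
    words = set(definition.lower().split())
    return [c for c, terms in CATEGORY_TERMS.items() if words & terms]

# Example: a hypothetical element definition in the style of PREMIS.
print(categorize("the person or organization responsible for the event"))
# -> ['Who']
```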

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems, v.23 no.1, pp.95-108, 2017
  • Recently, AlphaGo, the Go (Baduk) artificial intelligence program by Google DeepMind, won a landmark victory over Lee Sedol. Many people thought machines would never beat a human at Go because, unlike chess, the number of possible move sequences exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence was spotlighted as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning has drawn attention as the core artificial intelligence technique used in the AlphaGo algorithm. Deep learning is already being applied to many problems and shows especially good performance in image recognition. It also performs well on high-dimensional data such as voice, images, and natural language, where existing machine learning techniques struggled to achieve good performance. In contrast, however, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we investigated whether the deep learning techniques studied so far can be used not only for the recognition of high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared the performance of deep learning techniques with that of traditional artificial neural network models. The experimental data in the paper are the telemarketing response data of a bank in Portugal. They include input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, along with a binary target variable recording whether the customer intends to open an account. To evaluate the applicability of deep learning algorithms and techniques to binary classification, we compared the performance of various models using the CNN and LSTM algorithms and dropout, which are widely used deep learning algorithms and techniques, with that of MLP models, a traditional artificial neural network. However, since not all network design alternatives can be tested, given the nature of artificial neural networks, the experiment was conducted with restricted settings for the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the application conditions of the dropout technique. The F1 score was used to evaluate the models, to show how well they classify the class of interest rather than overall accuracy. The detailed methods for applying each deep learning technique in the experiment are as follows. The CNN algorithm reads adjacent values around a specific value and recognizes features, but the distance between business data fields usually does not matter because the fields are typically independent. In this experiment, we therefore set the filter size of the CNN to the number of fields, so that it learns the characteristics of a whole record at once, and added a hidden layer to make decisions based on the resulting features. For the model with two LSTM layers, the input direction of the second layer is reversed relative to the first in order to reduce the influence of each field's position.
For the dropout technique, we set neurons to be dropped with a probability of 0.5 in each hidden layer, as in the sketch below. The experimental results show that the model with the highest F1 score was the CNN model using dropout, and the next best was the MLP model with two hidden layers using dropout. The experiment yielded several findings. First, models using dropout make slightly more conservative predictions than those without it and generally classify better. Second, CNN models show better classification performance than MLP models; this is interesting because CNNs performed well not only in the fields where their effectiveness has been proven but also in binary classification problems to which they have rarely been applied. Third, the LSTM algorithm seems unsuitable for these binary classification problems because its training time is too long relative to the performance improvement. From these results, we can confirm that some deep learning algorithms can be applied to solve business binary classification problems.
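
A minimal Keras sketch of the kind of dropout-regularized network compared in the study, evaluated with the F1 score; the layer sizes, dropout placement, and placeholder data are assumptions, and the paper's exact CNN/LSTM architectures are not reproduced:

```python
import numpy as np
from sklearn.metrics import f1_score
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Dropout, Input

n_fields = 16                                # number of input fields (assumed)
X = np.random.rand(1000, n_fields)           # placeholder for the bank data
y = np.random.randint(0, 2, 1000)            # binary target: opens an account?

model = Sequential([
    Input(shape=(n_fields,)),
    Dense(32, activation="relu"),
    Dropout(0.5),                            # neurons dropped with p=0.5
    Dense(32, activation="relu"),
    Dropout(0.5),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

# F1 on the class of interest, as in the paper's evaluation.
pred = (model.predict(X, verbose=0) > 0.5).astype(int).ravel()
print("F1:", f1_score(y, pred))
```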

Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems, v.18 no.2, pp.29-45, 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining investors' returns. As a result, predicting companies' credit ratings by applying statistical and machine learning techniques has been a popular research topic. Statistical techniques, including multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis, have traditionally been used in bond rating. However, one major drawback is that they rest on strict assumptions: linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables and the predictor variables. These strict assumptions have limited the application of traditional statistics to the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and the Support Vector Machine (SVM). In particular, SVM is recognized as a new and promising classification and regression method. SVM learns a separating hyperplane that maximizes the margin between two categories. It is simple enough to be analyzed mathematically and achieves high performance in practical applications. SVM implements the structural risk minimization principle and seeks to minimize an upper bound on the generalization error. In addition, the solution of SVM may be a global optimum, so overfitting is unlikely to occur. SVM also does not require many training samples, since it builds prediction models using only the representative samples near the boundaries, called support vectors. A number of experimental studies have indicated that SVM has been successfully applied in a variety of pattern recognition fields. However, there are three major drawbacks that can degrade SVM's performance. First, SVM was originally proposed for binary-class classification problems. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not improve performance in multi-class problems as much as SVM does in binary-class classification. Second, approximation algorithms (e.g., decomposition methods and the sequential minimal optimization algorithm) can be used to reduce the computation time of multi-class problems, but they can deteriorate classification performance. Third, multi-class prediction suffers from the data imbalance problem, which occurs when the number of instances in one class greatly outnumbers that in another class. Such data sets often lead to a degenerate classifier with a skewed boundary and thus reduced classification accuracy. SVM ensemble learning is one machine learning approach for coping with the above drawbacks. Ensemble learning is a method for improving the performance of classification and prediction algorithms. AdaBoost is one of the most widely used ensemble learning techniques. It constructs a composite classifier by sequentially training classifiers while increasing the weight on misclassified observations through iterations. Observations that are incorrectly predicted by previous classifiers are chosen more often than examples that are correctly predicted.
Boosting thus attempts to produce new classifiers that better predict the examples on which the current ensemble performs poorly; in this way, it can reinforce the training of misclassified minority-class observations (a boosting sketch with SVM base classifiers follows below). This paper proposes multiclass Geometric Mean-based Boosting (MGM-Boost) to resolve the multiclass prediction problem. Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, its learning process can consider the geometric mean-based accuracy and errors across classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. Ten-fold cross-validation was performed three times with different random seeds to ensure that the comparison among the three classifiers did not happen by chance. For each 10-fold cross-validation, the entire data set is first partitioned into ten equal-sized sets, and each set is in turn used as the test set while the classifier trains on the other nine; that is, the cross-validated folds are tested independently for each algorithm. Through these steps, we obtained results for the classifiers on each of the 30 experiments. In the comparison of arithmetic mean-based prediction accuracy between individual classifiers, MGM-Boost (52.95%) shows higher prediction accuracy than both AdaBoost (51.69%) and SVM (49.47%). MGM-Boost (28.12%) also shows higher prediction accuracy than AdaBoost (24.65%) and SVM (15.42%) in terms of geometric mean-based prediction accuracy. A t-test was used to examine whether the performance of the classifiers over the 30 folds differs significantly; the results indicate that the performance of MGM-Boost differs significantly from the AdaBoost and SVM classifiers at the 1% level. These results mean that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
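
A minimal sketch of SAMME-style boosting with SVM base classifiers, together with the geometric-mean accuracy that motivates MGM-Boost; this is a plain AdaBoost-style loop for illustration, not the authors' MGM-Boost weight update, and the synthetic imbalanced data stands in for the bond rating set:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import recall_score
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_classes=3, n_informative=6,
                           weights=[0.6, 0.3, 0.1], random_state=0)
K = 3
w = np.full(len(y), 1 / len(y))                 # observation weights
classifiers, alphas = [], []
for _ in range(10):
    clf = SVC(kernel="rbf").fit(X, y, sample_weight=w)
    miss = clf.predict(X) != y
    err = np.clip(w @ miss, 1e-10, 1 - 1e-10)
    alpha = np.log((1 - err) / err) + np.log(K - 1)   # SAMME classifier weight
    w *= np.exp(alpha * miss)                   # up-weight misclassified cases
    w /= w.sum()
    classifiers.append(clf)
    alphas.append(alpha)

def ensemble_predict(X):
    """Weighted vote over the boosted SVM classifiers."""
    votes = np.zeros((len(X), K))
    for clf, a in zip(classifiers, alphas):
        votes[np.arange(len(X)), clf.predict(X)] += a
    return votes.argmax(axis=1)

# Geometric mean of per-class recalls: penalizes ignoring minority classes.
rec = recall_score(y, ensemble_predict(X), average=None)
print("geometric mean accuracy:", np.prod(rec) ** (1 / K))
```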

The Effects of Sentiment and Readability on Useful Votes for Customer Reviews with Count Type Review Usefulness Index (온라인 리뷰의 감성과 독해 용이성이 리뷰 유용성에 미치는 영향: 가산형 리뷰 유용성 정보 활용)

  • Cruz, Ruth Angelie;Lee, Hong Joo
    • Journal of Intelligence and Information Systems, v.22 no.1, pp.43-61, 2016
  • Customer reviews help potential customers make purchasing decisions. However, the prevalence of reviews on websites pushes customers to sift through them, shifting the focus from a mere search to identifying which of the available reviews are valuable and useful for the purchasing decision at hand. To surface useful reviews, websites have developed mechanisms that let users rate a customer review as helpful or not. Amazon.com uses a ratio-type helpfulness index, while Yelp.com uses a count-type usefulness index; such indexes steer future purchasers toward helpful reviews. This study investigated the effects of sentiment and readability on useful votes for customer reviews. Previous studies on the relationship between sentiment and readability have focused on the ratio-type usefulness index utilized by websites such as Amazon.com. In this study, Yelp.com's count-type usefulness index for restaurant reviews was used to investigate the relationship between sentiment/readability and usefulness votes. Yelp.com's online customer reviews for stores in the beverage and food categories were used for the analysis: in total, 170,294 reviews containing information on a store's reputation and popularity. The control variables were the review length, store reputation, and popularity; the independent variables were the sentiment and readability; and the dependent variable was the number of helpful votes, with the review rating moderating the effects of sentiment and readability. The length is the number of characters in a review. The popularity is the number of reviews for a store, and the reputation is the overall average rating of all reviews for a store. The readability of a review was calculated with the Coleman-Liau index, and the sentiment is a positivity score for the review as calculated by SentiWordNet. The review rating is a preference score from 1 to 5 (stars) selected by the review author. The dependent variable (i.e., usefulness votes) is a count variable, so the Poisson regression model, which is commonly used to account for the discrete and nonnegative nature of count data, was applied in the analyses, with the increase in helpful votes assumed to follow a Poisson distribution. Because the Poisson model assumes an equal mean and variance and the data were over-dispersed, a negative binomial model that allows for over-dispersion of the count variable was used for the estimation. Zero-inflated negative binomial regression was used to model count variables with excessive zeros and over-dispersed count outcomes; with this model, the excess zeros are assumed to be generated through a process separate from the count values and are therefore modeled independently (a sketch of this setup follows below). The results showed that positive sentiment had a negative effect on gaining useful votes for positive reviews but no significant effect on negative reviews. Poor readability had a negative effect on gaining useful votes and was not moderated by the review star ratings. These findings yield considerable managerial implications. The results can help online websites analyze their review guidelines and identify useful reviews for their business. Based on this study, positive reviews are not necessarily helpful; therefore, restaurants should consider which types of positive reviews help their business.
Second, this study benefits businesses and website designers in creating review mechanisms, showing which types of reviews to highlight on their websites and which types can benefit the business. Moreover, this study highlights the review systems employed by websites that allow their customers to post rating reviews.
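
A minimal sketch of two pieces of the analysis above: the Coleman-Liau readability index and a negative binomial regression of useful-vote counts (the paper additionally uses a zero-inflated variant for the excess zeros). The variable layout and toy data are assumptions:

```python
import re
import numpy as np
import statsmodels.api as sm

def coleman_liau(text):
    """Coleman-Liau index: 0.0588*L - 0.296*S - 15.8, where L is letters
    per 100 words and S is sentences per 100 words."""
    words = text.split()
    letters = sum(c.isalpha() for c in text)
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    return (0.0588 * (letters / len(words) * 100)
            - 0.296 * (sentences / len(words) * 100) - 15.8)

print(round(coleman_liau("The soup was superb. Service was slow but kind."), 1))

# Count model: useful votes regressed on sentiment, readability, and length.
rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.normal(0.6, 0.2, n),    # SentiWordNet-style positivity score
    rng.normal(8.0, 2.0, n),    # Coleman-Liau readability
    rng.normal(500, 150, n),    # review length in characters (control)
])
votes = rng.poisson(2, n)       # placeholder useful-vote counts

# Over-dispersed counts call for a negative binomial model; the paper further
# uses a zero-inflated variant to handle reviews that never attract votes.
fit = sm.NegativeBinomial(votes, sm.add_constant(X)).fit(disp=0)
print(fit.params)
```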

A Study on the Factors that Affect the Investment Behavior in Financial Investment Products : Focused on the Effect of Adjustment in Investment Consulting Service (금융투자상품 투자행동에 영향을 미치는 요인에 관한 연구: 투자상담서비스의 조절효과를 중심으로)

  • Lee, Kye Woung;Ha, Kyu Soo
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship, v.9 no.5, pp.53-68, 2014
  • This study analyzes the factors that affect employees' investment behaviors, such as the decision-making process, from a variety of viewpoints, and measures the extent to which those factors influence their investment. The basic assumption is that investment behavior is determined by personal investment propensity, the psychological factors asserted by behavioral finance theory, and the financial-economic and social environment. The study uses Hershey's Investment Behavior Model (2007) as the main analysis tool to explain the investment behavior of individuals. It treats personal investment inclination from the psychological perspective of overconfidence, self-control, and risk tolerance, adds financial-economic factors in terms of financial literacy and economic distress, and further adds new social-environmental factors, namely social interaction and the effect of reference groups, to make the research more precise. The study also analyzes the moderating effect of professional investment consulting services on the relationships between the individual variables and the dependent variable, the level of investment satisfaction, as sketched below. The study reveals that overconfidence and self-control directly and positively affect the level of investment satisfaction, while economic distress has a negative effect. The moderating effect provided by financial experts through investment consulting services is confirmed as a critical factor that strengthens the relationship between self-control and the level of investment satisfaction. In conclusion, the research reveals that psychological factors are the main criteria when workers make investment decisions. To help investors act rationally, systematic financial education provided by experts is needed from the early adolescent stages, and financial companies should develop consulting services as a key financial sector, along with financial investment products, consulting programs, and marketing tools pertinent to investors' ages, vocational traits, and inclinations.
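
A minimal sketch of a moderation analysis of the kind described above, where the consulting-service indicator enters as an interaction with self-control and a significant interaction coefficient indicates moderation; the variable names and simulated data are assumptions:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
self_control = rng.normal(0, 1, n)
consulting = rng.integers(0, 2, n).astype(float)   # used consulting? (0/1)
satisfaction = (0.4 * self_control + 0.3 * consulting
                + 0.5 * self_control * consulting + rng.normal(0, 1, n))

# Moderation: the interaction term captures how consulting strengthens the
# self-control -> satisfaction relationship.
X = sm.add_constant(np.column_stack(
    [self_control, consulting, self_control * consulting]))
fit = sm.OLS(satisfaction, X).fit()
print(fit.params)    # [const, self_control, consulting, interaction]
print(fit.pvalues)   # a significant interaction indicates a moderating effect
```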

HACCP Model for Quality Control of Sushi Production in the Eine Japanese Restaurants in Korea (일본전문식당의 급식품질 개선을 위한 HACCP 시스템 적용 연구)

  • 김혜경;이복희;김인호;조경동
    • Journal of the East Asian Society of Dietary Life, v.13 no.1, pp.25-38, 2003
  • This study was conducted to establish microbiological quality standards by applying the HACCP system to the sushi items of a Japanese restaurant in Korea. The study evaluated the hygienic conditions of the kitchen and workers, pH and time-temperature relationships, and microbial counts throughout the sushi-making process in 2001. Overall hygienic conditions were normal for both the kitchen and the workers on a 3-point scale, but hygienic controls against cross-contamination were still needed. Each step of sushi making was performed under risk of microbial contamination, since the pH of most ingredients was above 4.6 and the production time (3.5~6 hrs) was long enough to cause problems. Microbial counts were high enough to cause foodborne illness, ranging over $8.0\times10^2{\sim}3.3\times10^6$ CFU/g for total plate count (TPC) and $1.0\times10^1{\sim}1.6\times10^3$ CFU/g for coliforms, although TPC, coliforms, and Staphylococcus aureus were within the standard limits (TPC $10^2{\sim}10^6$ CFU/g, coliforms $10^3$ CFU/g); Salmonella and Vibrio parahaemolyticus were not detected. High TPC and coliform populations were also found on the cooks' hands and cooking utensils (TPC $10^2{\sim}10^6$ CFU/100 cm², coliforms $10^1{\sim}10^3$ CFU/100 cm²). Based on the CCP decision tree analysis, the CCPs were the holding steps for the six sushi production lines other than tuna, and the thawing step for tuna sushi. In conclusion, the overall state of sushi production was fairly good, but much improvement was still needed; a sketch of the limit check behind the microbial assessment follows below.
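
A minimal sketch of that limit check; the limits follow the standards quoted in the abstract, while the sample readings are illustrative:

```python
# Standard upper limits quoted in the abstract, in CFU/g.
LIMITS = {"TPC": 1e6, "coliforms": 1e3}

def within_limits(sample):
    """Flag each measured count against its standard limit."""
    return {m: count <= LIMITS[m] for m, count in sample.items()}

# Illustrative readings (not actual measurements from the study).
sushi_sample = {"TPC": 3.3e6, "coliforms": 9.0e2}
print(within_limits(sushi_sample))   # {'TPC': False, 'coliforms': True}
```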

Evaluation of Scattered Dose to the Contralateral Breast by Separating Effect of Medial Tangential Field and Lateral Tangential Field: A Comparison of Common Primary Breast Irradiation Techniques (유방암 접선조사 치료 방법에 대한 반대쪽 유방에서의 산란선량 평가)

  • Ban, Tae-Joon;Jeon, Soo-Dong;Kwak, Jung-Won;Baek, Geum-Mun
    • The Journal of Korean Society for Radiation Therapy, v.24 no.2, pp.183-188, 2012
  • Purpose: With advances in cancer treatment techniques and increasing survival times, improving quality of life and reducing the side effects of cancer treatment have become subjects of growing interest. This study analyzes the scattered dose to the contralateral breast under several common treatment techniques. Materials and Methods: Using Eclipse 10.0 (Varian, USA), a $30^{\circ}$ EDW (enhanced dynamic wedge) plan, a $15^{\circ}$ wedge plan, a $30^{\circ}$ wedge plan, an open-beam plan, and a FiF (field-in-field) plan were established on a CT image of the breast phantom used in our hospital. Each treatment plan was designed to deliver 400 cGy using a CL-6EX linac (Varian, USA), and we measured the scattered dose at 1 cm, 3 cm, 5 cm, and 9 cm from the medial side of the phantom at 1 cm depth using an ionization chamber (FC 65G, IBA), separating and analyzing the contributions of the medial and lateral tangential fields. Results: In the evaluation of scattered dose to the contralateral breast, the $30^{\circ}$ EDW, $15^{\circ}$ wedge, $30^{\circ}$ wedge, open-beam, and FiF plans showed 6.55%, 4.72%, 2.79%, 2.33%, and 1.87% of the prescription dose, respectively. Separating the two fields, the medial tangential field contributed 4.94%, 3.33%, 1.55%, 1.17%, and 0.77% of the prescription dose and the lateral tangential field 1.61%, 1.40%, 1.24%, 1.16%, and 1.10%, for the respective plans (see the sketch below). Conclusion: In our experiment, the FiF technique generated the minimum scattered dose to the contralateral breast, which comes mainly from the phantom scatter factor, whereas the $30^{\circ}$ wedge plan generated the maximum, 3.3% of which was scattered from the gantry head. The treatment planning system showed a loss of precision in the relatively low scattered-dose region. The scattered dose outside the treatment field is low relative to the prescription dose, but in decisions on radiation therapy the dose to the contralateral breast cannot be ignored, because it is related to the probability of secondary cancer.
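
A minimal sketch of the percentage computation used above: the scattered dose at the contralateral breast expressed relative to the 400 cGy prescription. The absolute chamber readings are back-calculated assumptions chosen to reproduce the quoted EDW-plan percentages:

```python
prescription_cGy = 400.0
# Assumed chamber readings (cGy) matching the reported EDW percentages.
measured_cGy = {"medial tangential field": 19.76,
                "lateral tangential field": 6.44}

for field, dose in measured_cGy.items():
    print(f"{field}: {dose / prescription_cGy:.2%} of prescription")
# 4.94% + 1.61% = 6.55%, the total reported for the 30-degree EDW plan.
```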

Analyzing the Effect of Online media on Overseas Travels: A Case study of Asian 5 countries (해외 출국에 영향을 미치는 온라인 미디어 효과 분석: 아시아 5개국을 중심으로)

  • Lee, Hea In;Moon, Hyun Sil;Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems, v.24 no.1, pp.53-74, 2018
  • Since South Korea's economy is heavily dependent on overseas markets, the tourism industry is considered very important for the national economy, improving the country's balance of payments and providing income and employment. Accordingly, the need for more accurate forecasting of tourism demand has been raised to promote the industry. In related research, economic variables such as exchange rates and income have been used as variables influencing tourism demand. As information technology has become widely used, some researchers have also analyzed the effect of media on tourism demand, showing that media have a considerable influence on travelers' decision making, such as choosing an outbound destination. Furthermore, with the recent availability of online information search for obtaining the latest information and of two-way communication in social media, up-to-date travel information can be obtained more quickly than before. Information in online media such as blogs naturally creates a word-of-mouth effect through the sharing of useful information, called eWOM. Like other service industries, the tourism industry is characterized by the difficulty of evaluating its value before it is experienced directly, and most travelers tend to search for information in advance from various sources to reduce the perceived risk of the destination, so they can also be influenced by online media such as online news. In this study, we propose that the volume of online media postings, which generates word-of-mouth effects, may affect the number of outbound travelers. We divided online media into public and private media according to their characteristics, selecting online news as public media and blogs, one of the most popular social media for tourist information, as private media. Based on previous studies of eWOM effects in online news and blogs, we analyzed the relationship between the volume of eWOM and outbound tourism demand through a panel model, as sketched below. To this end, we collected data on the number of national outbound travelers from 2007 to 2015 provided by the Korea Tourism Organization. According to these statistics, the most popular outbound destinations from Korea are China, Japan, Thailand, Hong Kong, and the Philippines, which are selected as the dependent variables in this study. To measure the volume of eWOM, we collected online news and blog postings for the same period from Naver, the largest portal site in South Korea. A panel model was established to analyze the effect of online media on the demand of Korean outbound travelers and to identify significant differences in the influence of online media across time and countries. The results can be summarized as follows. First, the impact of online news and blog eWOM on the number of outbound travelers was significant: the number of online news and blog postings influences the number of outbound travelers, and both the month that includes the departure date and the three months before departure were found to have an effect. This shows that online news and blogs are online media with a significant influence on outbound tourism demand.
Next, we found that an increased volume of eWOM in online news has a negative effect on departures, while an increase in blog postings has a positive effect; the country-specific models show the same pattern. This paper shows that online media can be used as a new variable in tourism demand research by examining the influence of the eWOM effect of online media. We also found that both social media and news media play an important role in predicting and managing Korean tourism demand, and that the influence of the two media differs depending on the country.
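
A minimal sketch of a fixed-effects panel model relating lagged eWOM volume to outbound travelers, implemented here with country dummies in OLS; the column names, the toy panel, and the single one-month lag (standing in for the paper's up-to-three-month lags) are assumptions:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
countries = ["China", "Japan", "Thailand", "HongKong", "Philippines"]
rows = []
for c in countries:   # 36 months of toy data per destination country
    news = rng.poisson(200, 36).astype(float)   # news posting volume
    blog = rng.poisson(500, 36).astype(float)   # blog posting volume
    travelers = 1e5 - 40 * news + 30 * blog + rng.normal(0, 2e3, 36)
    rows.append(pd.DataFrame({"country": c, "news": news, "blog": blog,
                              "travelers": travelers}))
panel = pd.concat(rows, ignore_index=True)

# One-month lags of eWOM volume within each country.
panel["news_lag1"] = panel.groupby("country")["news"].shift(1)
panel["blog_lag1"] = panel.groupby("country")["blog"].shift(1)

# Country fixed effects via C(country) dummies.
fit = smf.ols("travelers ~ news_lag1 + blog_lag1 + C(country)",
              data=panel.dropna()).fit()
print(fit.params[["news_lag1", "blog_lag1"]])  # expect news negative, blog positive
```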

Retrospective Evaluation of Discrepancies between Radiological and Pathological Size of Hepatocellular Carcinoma Masses

  • Tian, Fei;Wu, Jian-Xiong;Rong, Wei-Qi;Wang, Li-Ming;Wu, Fan;Yu, Wei-Bo;An, Song-Lin;Liu, Fa-Qiang;Feng, Li;Liu, Yun-He
    • Asian Pacific Journal of Cancer Prevention, v.15 no.21, pp.9487-9494, 2014
  • Background: The size of a hepatic neoplasm is critical for staging, prognosis, and selection of appropriate treatment. Our study aimed to compare the radiological size of solid hepatocellular carcinoma (HCC) masses on magnetic resonance imaging (MRI) with their pathological size in a Chinese population, and to elucidate discrepancies. Materials and Methods: A total of 178 consecutive patients diagnosed with HCC who underwent curative hepatic resection after enhanced MRI between July 2010 and October 2013 were retrospectively identified and analyzed. Pathological data of the whole removed tumors were assessed, and differences between radiological and pathological tumor size were identified. All patients were restaged postoperatively using a modified Tumor-Node-Metastasis (TNM) staging system according to the change in maximum diameter. Lesions were classified as hypo-staged, iso-staged, or hyper-staged for the qualitative assessment. In the quantitative analysis, the relative pre- and postoperative tumor size contrast ratio ($\%{\Delta}size$) was computed by size interval (a sketch of this ratio follows below). In addition, the relationship between the radiological-pathological diameter variation and histologic grade was analyzed. Results: Pathological examination showed that tumor size was overestimated in 85 (47.8%) patients and underestimated in 82 (46.1%), while MRI measurement was accurate in 11 (6.2%) patients. Among all subjects, 14 (7.9%) patients were hypo-staged and 15 (8.4%) hyper-staged postoperatively. The accuracy of MRI measurement and staging was related to lesion size, ranging from 83.1% to 87.4% ($<2$ cm to ${\geq}5$ cm, p=0.328) and from 62.5% to 89.1% (cT1 to cT4, p=0.006), respectively. Overall, MRI misjudged pathological size by 6.0 mm (p=0.588), and the greatest difference was observed in tumors $<2$ cm (3.6 mm, $\%{\Delta}size=16.9\%$, p=0.028). No statistically significant difference was observed for moderately differentiated HCC (5.5 mm, p=0.781). However, the radiographic maximum diameter was significantly larger than the pathological maximum diameter by 3.15 mm in well differentiated cases and smaller by 4.51 mm in poorly differentiated cases (p=0.034 and 0.020). Conclusions: Preoperative measurement of HCC tumor size by MRI provides acceptable accuracy overall but can be discrepant for tumors in certain size ranges or histologic grades: pathological tumor size was significantly overestimated in well differentiated HCC and underestimated in poorly differentiated HCC, and the radiological-pathological difference was greatest for tumors $<2$ cm. For some HCC patients, this size difference may have implications for the decision among resection, transplantation, ablation, or arterially directed therapy, and should be considered in staging and in selecting appropriate treatment tactics.
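
A minimal sketch of the size contrast ratio used above, taken as the relative difference between radiological (MRI) and pathological maximum diameters; the paper's exact definition may differ in sign or denominator, and the example diameters are illustrative:

```python
def pct_delta_size(radiological_mm, pathological_mm):
    """%Δsize: relative radiological-vs-pathological size difference."""
    return (radiological_mm - pathological_mm) / pathological_mm * 100

# Illustrative diameters: MRI overestimates a small tumor by a few millimetres.
print(f"{pct_delta_size(19.0, 16.0):.1f}%")   # 18.8%
```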