• Title/Summary/Keyword: intelligent information based (지능정보 기반)


Development of New Variables Affecting Movie Success and Prediction of Weekly Box Office Using Them Based on Machine Learning (영화 흥행에 영향을 미치는 새로운 변수 개발과 이를 이용한 머신러닝 기반의 주간 박스오피스 예측)

  • Song, Junga;Choi, Keunho;Kim, Gunwoo
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.67-83
    • /
    • 2018
  • The Korean film industry, which had grown significantly every year, finally exceeded 200 million cumulative admissions in 2013. Starting in 2015, however, the industry entered a period of low growth and ultimately recorded negative growth in 2016. To overcome this difficulty, stakeholders such as production companies, distributors, and multiplex chains have attempted to maximize market returns by predicting changes in the market and responding to them immediately. Since a film is an experiential product, it is not easy to predict its box office record or its initial audience size before release, and the number of admissions fluctuates with a variety of factors after release. Production and distribution companies therefore try to secure a guaranteed number of screens from multiplex chains at the opening of a newly released film. The multiplex chains, however, tend to fix the screening schedule only one week at a time and determine the number of screenings for the forthcoming week based on the box office record and audience evaluations. Many previous studies have dealt with predicting the box office records of films. Early research attempted to identify the factors affecting box office performance; more recent studies have applied various analytic techniques to the factors identified previously in order to improve prediction accuracy and explain the effect of each factor, rather than identifying new factors. However, most previous studies share a limitation: they used the total number of admissions from opening to the end of the run as the target variable, which makes it difficult to predict and respond to a market demand that changes dynamically.
Therefore, the purpose of this study is to predict the weekly audience of a newly released film so that stakeholders can respond flexibly and elastically to changes in its audience. To that end, we considered the factors used in previous box office studies and developed new factors not used before, such as the order of opening among concurrent releases and the dynamics of sales. Using these comprehensive factors, we applied machine learning methods, namely Random Forest, Multilayer Perceptron, Support Vector Machine, and Naive Bayes, to predict the cumulative number of viewers from the first to the third week after release. At the end of the first and second weeks we predicted the cumulative audience of the forthcoming week, and at the end of the third week we predicted the film's total audience. In addition, we predicted the total cumulative audience at the end of both the first and second weeks using the same factors. We found that the accuracy of predicting the forthcoming week's audience was higher than that of predicting the total audience in all three weeks, and that Random Forest achieved the highest accuracy among the methods used. This study has two implications: 1) it comprehensively considers factors affecting box office performance that were rarely addressed in previous research, such as the weekly audience rating, the weekly rank, and the weekly sales share after release; and 2) it proposes models that predict the weekly audience of newly released films, enabling stakeholders to predict and respond flexibly and elastically to dynamically changing market demand.
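As a rough illustration of the weekly prediction setup described above, the sketch below trains a Random Forest regressor to predict a film's cumulative week-2 audience from features available at the end of week 1. The features and data are synthetic stand-ins for the study's variables, not the actual dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical feature matrix: one row per film at the end of week 1.
# Columns mimic the kinds of factors described above (screen count,
# opening order among concurrent releases, week-1 sales share, rating).
n_films = 200
X = rng.random((n_films, 4))
# Synthetic target: cumulative week-2 audience, loosely driven by the features.
y = 1e5 * (2.0 * X[:, 0] + 1.5 * X[:, 2] + 0.3 * rng.random(n_films))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)
mae = mean_absolute_error(y_te, model.predict(X_te))
print(f"week-2 cumulative audience MAE: {mae:,.0f}")
```

In the study's setting, the same pipeline would be re-fit at the end of each week with that week's features to produce the rolling forecasts the abstract describes.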

The Impact of O4O Selection Attributes on Customer Satisfaction and Loyalty: Focusing on the Case of Fresh Hema in China (O4O 선택속성이 고객만족도 및 고객충성도에 미치는 영향: 중국 허마셴셩 사례를 중심으로)

  • Cui, Chengguo;Yang, Sung-Byung
    • Knowledge Management Research
    • /
    • v.21 no.3
    • /
    • pp.249-269
    • /
    • 2020
  • Recently, as the online market has matured, it has faced several problems that hinder further growth. The most common is the homogenization of online products, which makes it hard to attract additional customers. Moreover, although the online market's share has increased significantly, expanding offline has become essential for further development. In response, many online firms have sought to extend their businesses and marketing channels by securing offline spaces that complement the limitations of online platforms while retaining the existing advantages of online channels. Building on their competitive advantage in analyzing large volumes of customer data with information technologies (e.g., big data and artificial intelligence), they are reinforcing their offline influence through this online-for-offline (O4O) business model. Most existing research, however, has focused on the online-to-offline (O2O) business model, and there is still a lack of research on O4O business models, which have been actively attempted in various industries in recent years. Since the few O4O-related studies have been conducted only in experience-marketing settings using the case study method, an empirical study of O4O selection attributes and their impact on customer satisfaction and loyalty is needed. Therefore, focusing on China's representative O4O business model, 'Fresh Hema,' this study identifies key selection attributes specialized for O4O services from the customers' viewpoint and examines the impact of these attributes on customer satisfaction and loyalty.
The results of structural equation modeling (SEM) with 300 customers who had experienced O4O (Fresh Hema) reveal that, of seven O4O selection attributes, four (mobile app quality, mobile payment, product quality, and store facilities) affect customer satisfaction, which in turn leads to customer loyalty (reuse intention, recommendation intention, and brand attachment). These findings should help managers in the O4O area adapt to rapidly changing customer needs and provide guidelines for enhancing both customer satisfaction and loyalty by allocating more resources to the more significant selection attributes.
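The attribute → satisfaction → loyalty structure above can be approximated, for illustration only, by a two-equation path analysis on simulated survey data. This is a simplified stand-in for the full SEM (no latent variables or fit indices), and the coefficients and data below are hypothetical, not the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated responses for 300 customers on the four significant attributes
# (app quality, mobile payment, product quality, store facilities).
n = 300
attrs = rng.normal(4, 1, (n, 4))
# Hypothetical structural relations: attributes -> satisfaction -> loyalty.
satisfaction = attrs @ np.array([0.3, 0.2, 0.4, 0.1]) + rng.normal(0, 0.5, n)
loyalty = 0.8 * satisfaction + rng.normal(0, 0.5, n)

def ols(X, y):
    """Least-squares path coefficients (intercept dropped from the output)."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta[1:]

path_attr_to_sat = ols(attrs, satisfaction)            # attributes -> satisfaction
path_sat_to_loy = ols(satisfaction[:, None], loyalty)  # satisfaction -> loyalty
print(path_attr_to_sat.round(2), path_sat_to_loy.round(2))
```

A full SEM would estimate both equations jointly with measurement models for each construct; the two-stage regression here only conveys the shape of the path structure.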

Characteristics and Implications of 4th Industrial Revolution Technology Innovation in the Service Industry (서비스 산업의 4차 산업혁명 기술 혁신 특성과 시사점)

  • Pyoung Yol Jang
    • Journal of Service Research and Studies
    • /
    • v.13 no.2
    • /
    • pp.114-129
    • /
    • 2023
  • In the era of the 4th industrial revolution, 4th-industrial-revolution technologies are becoming increasingly important in the service industry. The purpose of this study is to identify the development and utilization status of these technologies in the service industry and to derive the characteristics and implications of their innovation. The analysis is based on Business Activity Survey data. Fourth-industrial-revolution technology in the service industry was analyzed in terms of the share of companies involved, technology development and utilization rates, the technologies developed or utilized, application fields, and development methods. The trend of technological change in the service industry was also analyzed, and the utilization and development status of other industries was compared. In particular, service-industry innovation was divided into four types based on the share of 4th-industrial-revolution companies and the growth rate of that share, and a type was derived for each service sector. The characteristics and implications of 4th-industrial-revolution technology innovation in the service industry are presented from nine perspectives. The study found that companies in the service industry were developing or using 4th-industrial-revolution technologies more actively than companies in other industries, and that the gap is widening further.
By sector, information and communication, finance and insurance, and educational services showed relatively high rates of developing or utilizing 4th-industrial-revolution technologies. The sectors in which the share of such companies increased the most were real estate, educational services, and health and social welfare services. In particular, cloud computing, big data, and artificial intelligence were identified as the three core technologies of the 4th industrial revolution. The service industry can thus be classified into four types in terms of company share and growth rate, and innovation measures reflecting the differentiated characteristics of each type are needed.
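The four-type classification described above amounts to a quadrant split on two axes: the share of 4th-industrial-revolution companies and the growth rate of that share. The sketch below shows that split; the sector figures, cutoffs, and type names are illustrative assumptions, not the survey's actual values or labels.

```python
# Hypothetical (share, growth-rate) pairs per service sector.
sectors = {
    "information & communication": (0.35, 0.02),
    "finance & insurance":         (0.30, 0.03),
    "real estate":                 (0.08, 0.09),
    "education service":           (0.22, 0.08),
}
ratio_cut, growth_cut = 0.20, 0.05  # hypothetical median cutoffs

def innovation_type(ratio, growth):
    """Quadrant typing: high/low share x high/low growth -> one of 4 types."""
    high_r, high_g = ratio >= ratio_cut, growth >= growth_cut
    if high_r and high_g:
        return "leading"
    if high_r:
        return "established"
    if high_g:
        return "emerging"
    return "lagging"

for name, (r, g) in sectors.items():
    print(f"{name}: {innovation_type(r, g)}")
```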

A Study on the Efficient Human-Robot Interaction Style for a Map Building Process of a Home-service Robot (홈서비스로봇의 맵빌딩을 위한 효율적인 휴먼-로봇 상호작용방식에 대한 연구)

  • Lee, Woo-Hun;Kim, Yeon-Ji;Kim, Hyun-Jin;Yang, Gyun-Hye;Park, Yong-Kuk;Bang, Seok-Won
    • Archives of design research
    • /
    • v.18 no.2 s.60
    • /
    • pp.155-164
    • /
    • 2005
  • Home-service robots need sufficient spatial information about their surroundings to interact intelligently with humans and to perform services efficiently. It is therefore important to investigate interaction styles that efficiently support the map building task through human-robot collaboration. We first analyzed the map building task with a cleaning robot and derived four design factors with tentative solutions: map building procedure (task-preferred vs. space-preferred), LCD display installation (robot only vs. robot plus remote control), navigation method (push type vs. pull type), and feedback modality (GUI vs. GUI plus TTS). These factors and solutions were defined as independent variables and levels, and we investigated how they affect human task performance and behavior in the map building task. Eight experimental prototypes were built, and a usability test with 16 housewives was conducted to acquire empirical data. Regarding the map building procedure, the space-preferred procedure yielded better task performance than the task-preferred procedure, as expected. For LCD display installation, the remote control with an LCD display yielded higher task performance and subjective satisfaction. For the navigation method, contrary to our expectation, no significant difference was found between push and pull types; in fact, the push type showed higher subjective satisfaction. For feedback modality, we received negative feedback on the additional TTS operation guidance. It seems that before acquiring spatial information the robot's autonomy is rudimentary, so users feel they are simply interacting with a mobile appliance; they therefore prefer the remote-control-based interaction style they use with traditional appliances during the robot map building process.
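Four two-level factors give 16 full-factorial combinations, yet the study built 8 prototypes. The abstract does not say how the 8 were chosen; one plausible design, sketched below as an assumption, is a half-fraction 2^(4-1) factorial with defining relation I = ABCD, which keeps the 8 runs whose coded levels multiply to +1.

```python
from itertools import product

# The four two-level design factors from the study (levels coded -1/+1).
factors = {
    "procedure":  ("task-preferred", "space-preferred"),
    "display":    ("robot only", "robot + remote control"),
    "navigation": ("push", "pull"),
    "feedback":   ("GUI", "GUI + TTS"),
}

# Half-fraction 2^(4-1): keep runs where the product of coded levels is +1.
runs = []
for levels in product((-1, 1), repeat=4):
    if levels[0] * levels[1] * levels[2] * levels[3] == 1:
        runs.append({name: opts[(lv + 1) // 2]
                     for (name, opts), lv in zip(factors.items(), levels)})

print(len(runs))  # 8 prototype conditions
for r in runs:
    print(r)
```

The trade-off of a half-fraction is that main effects are aliased with three-way interactions, which is usually acceptable in usability experiments of this kind.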

A Proposal of a Keyword Extraction System for Detecting Social Issues (사회문제 해결형 기술수요 발굴을 위한 키워드 추출 시스템 제안)

  • Jeong, Dami;Kim, Jaeseok;Kim, Gi-Nam;Heo, Jong-Uk;On, Byung-Won;Kang, Mijung
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.3
    • /
    • pp.1-23
    • /
    • 2013
  • To discover significant social issues such as unemployment, economic crisis, and social welfare that urgently need to be solved in modern society, researchers in the existing approach usually collect opinions from professional experts and scholars through online or offline surveys. However, this method is not always effective. Because of the expense involved, a large number of survey replies are seldom gathered, and in some cases it is hard to find professionals dealing with specific social issues; the sample set is therefore often small and possibly biased. Furthermore, experts may reach totally different conclusions about the same social issue because each has a subjective point of view and a different background, making it considerably hard to figure out what the current social issues are and which are really important. To overcome these shortcomings, in this paper we develop a prototype system that semi-automatically detects keywords representing social issues and problems from about 1.3 million news articles issued by about 10 major domestic presses in Korea from June 2009 to July 2012. Our proposed system consists of (1) collecting the news articles and extracting their texts, (2) identifying only the articles related to social issues, (3) analyzing the lexical items of the Korean sentences, (4) finding a set of topics for social keywords over time based on probabilistic topic modeling, (5) matching relevant paragraphs to a given topic, and (6) visualizing the social keywords for easy understanding. In particular, we propose a novel matching algorithm relying on generative models, whose goal is to best match paragraphs to each topic.
Technically, using a topic model such as Latent Dirichlet Allocation (LDA), we can obtain a set of topics, each consisting of relevant terms and their probability values. Given a set of text documents (e.g., news articles), LDA produces a set of topic clusters, each of which is then labeled by human annotators; each topic label stands for a social keyword. For example, suppose there is a topic Topic1 = {(unemployment, 0.4), (layoff, 0.3), (business, 0.3)} and a human annotator labels it "Unemployment Problem." Looking only at such social keywords, we have no idea of the detailed events occurring in our society. To tackle this, we develop a matching algorithm that computes the probability of a paragraph given a topic, relying on (i) the topic terms and (ii) their probability values. Given a set of text documents, we segment each document into paragraphs and, in parallel, extract a set of topics from the documents using LDA. Through our matching process, each paragraph is assigned to the topic it best matches, so that each topic ends up with several best-matched paragraphs. Suppose, for instance, there is a topic (e.g., Unemployment Problem) whose best-matched paragraph is "Up to 300 workers lost their jobs in XXX company in Seoul." We can then grasp the detailed information behind the social keyword, such as "300 workers," "unemployment," "XXX company," and "Seoul." In addition, our system visualizes social keywords over time. Through this matching process and keyword visualization, researchers can detect social issues easily and quickly.
Through this prototype system, we detected various social issues appearing in our society, and our experimental results show the effectiveness of the proposed methods. A proof-of-concept system is available at http://dslab.snu.ac.kr/demo.html.
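The core of the matching step can be sketched as scoring each paragraph by its log-likelihood under each topic's term distribution and assigning it to the highest-scoring topic. The topics below reuse the Topic1 example from the abstract plus one invented topic; the exact generative model in the paper may differ, so treat this as a minimal unigram approximation.

```python
import math

# Hypothetical topic-term distributions such as LDA would produce
# (the Topic1 example from the abstract, plus an invented second topic).
topics = {
    "Unemployment Problem": {"unemployment": 0.4, "layoff": 0.3, "business": 0.3},
    "Economic Crisis": {"crisis": 0.5, "inflation": 0.3, "bank": 0.2},
}

def match_topic(paragraph, topics, floor=1e-6):
    """Assign the paragraph to the topic maximizing its unigram log-likelihood.
    Words outside a topic's vocabulary get a small smoothing probability."""
    words = paragraph.lower().split()
    scores = {name: sum(math.log(terms.get(w, floor)) for w in words)
              for name, terms in topics.items()}
    return max(scores, key=scores.get)

p = "300 workers faced layoff as the business announced unemployment measures"
print(match_topic(p, topics))  # -> Unemployment Problem
```

In the full system each topic would then retain its best-matched paragraphs, giving the detailed context ("300 workers", "XXX company", ...) behind each social keyword.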

Critical Success Factor of Noble Payment System: Multiple Case Studies (새로운 결제서비스의 성공요인: 다중사례연구)

  • Park, Arum;Lee, Kyoung Jun
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.4
    • /
    • pp.59-87
    • /
    • 2014
  • In the MIS field, research on payment services has focused on adoption factors, using behavioral theories such as TRA (Theory of Reasoned Action), TAM (Technology Acceptance Model), and TPB (Theory of Planned Behavior). Previous studies presented various adoption factors according to the type of payment service, nation, culture, and so on; even the adoption factors of an identical payment service were presented differently by different researchers. The payment service industry has relatively strong path dependency on existing payment methods, so research results on the same payment service differ with the payment culture of each nation. This paper aims to suggest, and to demonstrate, a success factor for the adoption of novel payment services regardless of national culture and payment characteristics. In previous research, the common adoption factors of payment services are convenience, ease of use, security, speed, and the like. But real cases show that these factors are not always critical to successfully penetrating a market. For example, PayByPhone, an NFC-based parking payment service, successfully penetrated the early market and grew. In contrast, Google Wallet failed to be adopted by users despite an NFC-based payment method providing convenience, security, and ease of use. As these cases show, an unexplained aspect remains. The research question therefore emerged: "What more essential and fundamental factor should take precedence over convenience, security, and ease of use for successful market penetration?" With these cases in mind, this paper analyzes four cases predicated on the following hypothesis and demonstrates it.
"To successfully penetrate a market and grow sustainably, a new payment service should find non-customers of the existing payment services and provide a novel payment method they can use." We give plausible explanations for this hypothesis using multiple case studies. Diners Club, Danal, PayPal, and Square were selected as typical, successful cases in each category of payment service. The discussion of the cases is primarily a non-customer analysis aimed at finding the most crucial factor in the early market; we do not attempt to consider factors for later business growth. We clarified the three tiers of non-customers of the payment methods that each new payment service targets and elaborated how each new service satisfies them. The credit card targeted first-tier non-customers who temporarily could not pay because they had no cash but had a regular income; it gave them the opportunity to engage in economic activity by delaying the date of payment. The case study of wireless phone payment shows that this service targets second-tier non-customers who could not use online payment because they worried about security or had to go through a complex process to learn how to use online payment methods; wireless phone payment therefore provides a very convenient payment method, and in particular enabled young people to pay small amounts without a credit card. The PayPal case shows that the online payment service targets second-tier non-customers who refused to use online payment because of concerns about leaks of sensitive information such as passwords and credit card details; accordingly, PayPal allows users to pay online without providing sensitive information. Finally, the Square case, a mobile-POS-based payment service, shows that it targets second-tier non-customers who could not transact offline as individuals because of a shortage of cash.
Hence, Square provides a dongle that functions as a POS terminal when plugged into an earphone jack. As a result, all four cases turned non-customers into customers, penetrated the early market, and extended their market share. Consequently, all cases supported the hypothesis, and it is highly probable according to the 'analytic generalization' that case study methodology suggests. For judging the quality of the research design we present the following. Construct validity, internal validity, external validity, and reliability are common to all social science methods and have been summarized in numerous textbooks (Yin, 2014); in case study methodology they have also served as a framework for assessing large groups of case studies (Gibbert, Ruigrok & Wicki, 2008). Construct validity means identifying correct operational measures for the concepts being studied; to satisfy it, we use multiple sources of evidence such as academic journals, magazines, and articles. Internal validity seeks to establish a causal relationship whereby certain conditions are believed to lead to other conditions, as distinguished from spurious relationships; to satisfy it, we build explanations through the analysis of the four cases. External validity defines the domain to which a study's findings can be generalized; to satisfy it, replication logic across the multiple case studies is used. Reliability means demonstrating that the operations of a study, such as the data collection procedures, can be repeated with the same results; to satisfy it, we use a case study protocol. In Korea, competition among stakeholders in the mobile payment industry is intensifying. Not only the three main telecom companies but also smartphone makers and service providers like KakaoTalk have announced that they will enter the mobile payment industry, which is becoming highly competitive.
Yet the market still lacks momentum, notwithstanding the positive presumption that it will grow very fast. Mobile payment services are categorized by underlying technology, such as IC mobile cards and application payment services based on the cloud, NFC, sound waves, BLE (Bluetooth Low Energy), biometric recognition, and so on. Mobile payment is a discontinuous innovation: users must change their behavior, and new infrastructure must be installed, which requires users to learn how to use it and imposes installation costs on shopkeepers. Additionally, the payment industry has strong path dependency. Despite these obstacles, mobile payment services, which as discontinuous innovations should provide dramatically improved value, are focusing mainly on convenience and security. We suggest the following for a mobile payment service to succeed. First, non-customers of the existing payment services should be identified. Second, their needs should be understood. Then the novel payment service should give those who could not pay by the previous payment methods a payment method they can use. In conclusion, mobile payment services can create new markets and extend the payment market.

Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu;Alikhanov, Jumabek;Fang, Yang;Ko, Seunghyun;Jo, Geun Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.205-225
    • /
    • 2018
  • A Convolutional Neural Network (ConvNet) is a class of powerful deep neural network that can analyze and learn hierarchies of visual features. The first such neural network (the Neocognitron) was introduced in the 1980s. At that time, neural networks were not broadly used in industry or academia because of the shortage of large-scale datasets and low computational power. A few decades later, in 2012, Krizhevsky made a breakthrough in the ILSVRC-12 visual recognition competition using a Convolutional Neural Network, which revived interest in neural networks. The success of Convolutional Neural Networks rests on two main factors: the emergence of advanced hardware (GPUs) for sufficient parallel computation, and the availability of large-scale datasets such as ImageNet (ILSVRC) for training. Unfortunately, many new domains are bottlenecked by these factors. For most domains, gathering a large-scale dataset to train a ConvNet is difficult and takes a lot of effort; moreover, even with a large-scale dataset, training a ConvNet from scratch requires expensive resources and is time-consuming. These two obstacles can be overcome by transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning settings: using the ConvNet as a fixed feature extractor, and fine-tuning the ConvNet on a new dataset. In the first, a pre-trained ConvNet (e.g., trained on ImageNet) computes feed-forward activations of an image, and activation features are extracted from specific layers. In the second, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are fine-tuned with backpropagation. In this paper, we focus only on using multiple ConvNet layers as a fixed feature extractor.
However, applying high-dimensional features extracted directly from multiple ConvNet layers is still challenging. We observe that features extracted from different ConvNet layers capture different characteristics of the image, which means a better representation could be obtained by finding the optimal combination of multiple layers. Based on this observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single-layer representation. Our pipeline has three steps. First, images from the target task are fed forward through a pre-trained AlexNet, and activation features are extracted from its three fully connected layers. Second, the activation features of the three layers are concatenated into a multiple-layer representation, which carries more information about the image; concatenating the three fully connected layers yields a 9,192-dimensional (4,096 + 4,096 + 1,000) feature vector. However, features extracted from multiple layers of the same ConvNet are redundant and noisy. Thus, in a third step, we use Principal Component Analysis (PCA) to select salient features before the training phase. With salient features, the classifier can classify images more accurately, and the performance of transfer learning improves. To evaluate the proposed method, experiments were conducted on three standard datasets (Caltech-256, VOC07, and SUN397), comparing the multiple-layer representation against single-layer representations, using PCA for feature selection and dimension reduction. Our experiments demonstrate the importance of feature selection for the multiple-layer representation.
Moreover, our proposed approach achieved 75.6% accuracy compared to 73.9% for the FC7 layer on Caltech-256, 73.1% compared to 69.2% for the FC8 layer on VOC07, and 52.2% compared to 48.7% for the FC7 layer on SUN397. We also showed that the proposed approach achieved superior performance, with accuracy improvements of 2.8%, 2.1%, and 3.1% on Caltech-256, VOC07, and SUN397, respectively, compared to existing work.
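The concatenate-then-PCA pipeline above can be sketched as follows. To keep the example self-contained, the FC6/FC7/FC8 activations are random stand-ins with the correct dimensions (in the real pipeline they would come from feed-forwarding each image through pre-trained AlexNet), and the classifier and component count are illustrative choices, not the paper's exact setup.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Stand-in activations for AlexNet's three fully connected layers:
# FC6 (4096-d), FC7 (4096-d), FC8 (1000-d), for 300 images in 5 classes.
n = 300
y = rng.integers(0, 5, n)
fc6 = rng.normal(y[:, None], 1.0, (n, 4096))
fc7 = rng.normal(y[:, None], 1.0, (n, 4096))
fc8 = rng.normal(y[:, None], 1.0, (n, 1000))

# Step 2: concatenate the three layer representations -> 9192-d vector.
X = np.hstack([fc6, fc7, fc8])
assert X.shape[1] == 9192

# Step 3: PCA reduces the redundant, noisy joint representation
# before training a linear classifier on the salient components.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(),
                    PCA(n_components=128),
                    LogisticRegression(max_iter=1000))
clf.fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.2f}")
```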

A Review of the Neurocognitive Mechanisms for Mathematical Thinking Ability (수학적 사고력에 관한 인지신경학적 연구 개관)

  • Kim, Yon Mi
    • Korean Journal of Cognitive Science
    • /
    • v.27 no.2
    • /
    • pp.159-219
    • /
    • 2016
  • Mathematical ability is important for academic achievement and technological innovation in the STEM disciplines. This study concentrated on the relationship between the neural basis of mathematical cognition and its mechanisms. These cognitive functions include domain-specific abilities, such as numerical skills and visuospatial abilities, as well as domain-general abilities, including language, long-term memory, and working memory capacity. Individuals perform higher cognitive functions such as abstract thinking and reasoning on top of these basic cognitive functions. The next topic covered is individual differences in mathematical ability. Neural efficiency theory was incorporated to view mathematical talent: according to the theory, a person with mathematical talent uses his or her brain more efficiently than the effortful average person. Mathematically gifted students show different brain activity from average students; interhemispheric and intrahemispheric connectivity is enhanced in these students, particularly in the right hemisphere along the fronto-parietal longitudinal fasciculus. The third topic deals with growth and development of mathematical capacity. As individuals mature, practice mathematical skills, and gain knowledge, the changes are reflected in cortical activation, including changes in activation level, redistribution, and reorganization in the supporting cortex. Among these, reorganization is related to neural plasticity, which has been observed both in professional mathematicians and in children with mathematical learning disabilities. The last topic is mathematical creativity viewed from the perspective of Neural Darwinism: when the brain faces a novel problem, it must collect all of the necessary concepts (knowledge) from long-term memory, make multitudes of connections, and test which ones have the highest probability of helping solve the unusual problem.
Once the brain, having followed these modification steps, finally finds the correct response to the novel problem, the response arrives as a form of inspiration. For a novice, the first step, acquiring the knowledge structure, is the most important; as expertise increases, the latter two stages of making connections and selecting among them become more important.

Intelligent Optimal Route Planning Based on Context Awareness (상황인식 기반 지능형 최적 경로계획)

  • Lee, Hyun-Jung;Chang, Yong-Sik
    • Asia pacific journal of information systems
    • /
    • v.19 no.2
    • /
    • pp.117-137
    • /
    • 2009
  • Recently, intelligent traffic information systems have enabled people to forecast traffic conditions before hitting the road. These convenient systems operate on data reflecting current road and traffic conditions as well as distance data between locations. Thanks to the rapid development of ubiquitous computing, a tremendous amount of context data has become readily available, making vehicle route planning easier than ever. Previous research on the optimization of vehicle route planning focused merely on finding the optimal distance between locations. Contexts reflecting road and traffic conditions were not seriously treated as a way to solve optimal routing problems beyond distance-based planning, because such information has little impact on routing until a complex traffic situation arises; moreover, it was not easy to take the traffic contexts fully into account, because predicting dynamic traffic situations was regarded as a daunting task. However, with the rapid increase in traffic complexity, the importance of contexts reflecting moving-cost data has emerged. Hence, this research proposes a framework for solving an optimal route planning problem that takes full account of additional moving costs such as road traffic cost and weather cost, the collection of which recent technological developments in the ubiquitous computing environment have facilitated. The framework is based on the contexts of time, traffic, and environment, and it addresses the following issues. First, we clarify and classify the diverse contexts that affect a vehicle's velocity and estimate the optimal moving cost with dynamic programming, accounting for the context cost as the contexts vary.
Second, the velocity reduction rate is applied to find the optimal route (shortest path) using the context data on the current traffic condition. The velocity reduction rate refers to the degree to which a moving vehicle's possible velocity is reduced by the relevant road and traffic contexts, derived from statistical or experimental data. Knowledge generated in this paper can be referenced by organizations that deal with road and traffic data. Third, in experimentation, we evaluate the effectiveness of the proposed context-based optimal route (shortest path) between locations by comparing it to the previously used distance-based shortest path. A vehicle's optimal route might change due to its varying velocity, caused by unexpected but potentially dynamic situations depending on the road condition. This study includes context variables such as 'road congestion', 'work', 'accident', and 'weather', which can alter the traffic condition. These contexts can affect a moving vehicle's velocity on the road. Since the context variables other than 'weather' are related to road conditions, the relevant data were provided by the Korea Expressway Corporation. The 'weather'-related data were obtained from the Korea Meteorological Administration. The aware contexts are classified as contexts causing a reduction of vehicles' velocity, which determines the velocity reduction rate. To find the optimal route (shortest path), we introduced the velocity reduction rate into the context for calculating a vehicle's velocity, reflecting composite contexts when one event synchronizes with another. We then proposed a context-based optimal route (shortest path) algorithm based on dynamic programming. The algorithm is composed of three steps. In the first, initialization step, the departure and destination locations are given, and the path step is initialized to 0. 
In the second step, moving costs between locations on the path, taking composite contexts into account, are estimated using the velocity reduction rate by context as the path steps increase. In the third step, the optimal route (shortest path) is retrieved through back-tracking. In the provided research model, we designed a framework to account for context awareness, moving cost estimation (taking both composite and single contexts into account), and an optimal route (shortest path) algorithm (based on dynamic programming). Through illustrative experimentation using the Wilcoxon signed rank test, we showed that context-based route planning is much more effective than distance-based route planning. In addition, we found that the optimal solution (shortest paths) obtained through distance-based route planning might not be optimal in real situations, because the road condition is very dynamic and unpredictable while affecting most vehicles' moving costs. For further study, while more information is needed for a more accurate estimation of moving vehicles' costs, this study is still viable for applications that reduce moving costs through effective route planning. For instance, it could be applied to deliverers' decision making to enhance their decision satisfaction when they meet unpredictable dynamic situations while driving on the road. Overall, we conclude that taking the contexts into account as a part of costs is a meaningful and sensible approach to resolving the optimal route problem.
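The cost model described in the abstract (edge travel time derived from a base velocity scaled down by context-specific velocity reduction rates) can be illustrated with a minimal sketch. This is not the paper's implementation: the paper uses a stage-indexed dynamic program, whereas the sketch below substitutes the standard Dijkstra relaxation with back-tracking, and all names (`effective_velocity`, `context_shortest_path`), the multiplicative combination of composite context rates, and the base velocity of 100 km/h are illustrative assumptions.

```python
import heapq

def effective_velocity(base_velocity, contexts, reduction_rates):
    # Composite contexts (e.g. congestion plus rain) are combined by
    # applying each velocity reduction rate multiplicatively (assumption).
    v = base_velocity
    for c in contexts:
        v *= (1.0 - reduction_rates.get(c, 0.0))
    return max(v, 1e-6)  # avoid division by zero for fully blocked roads

def context_shortest_path(graph, contexts_on_edge, reduction_rates,
                          src, dst, base_velocity=100.0):
    """graph: {node: [(neighbor, distance_km), ...]};
    edge cost = travel time = distance / context-adjusted velocity."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for nxt, km in graph.get(u, []):
            vel = effective_velocity(base_velocity,
                                     contexts_on_edge.get((u, nxt), []),
                                     reduction_rates)
            nd = d + km / vel
            if nd < dist.get(nxt, float("inf")):
                dist[nxt], prev[nxt] = nd, u
                heapq.heappush(pq, (nd, nxt))
    # Back-tracking step: recover the route from the predecessor map.
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]
```

Under this cost model, a congested direct road (high reduction rate) can lose to a longer but free-flowing detour, which is exactly the situation where a distance-based shortest path and a context-based one diverge.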

An Analysis of the Dynamics between Media Coverage and Stock Market on Digital New Deal Policy: Focusing on Companies Related to the Fourth Industrial Revolution (디지털 뉴딜 정책에 대한 언론 보도량과 주식 시장의 동태적 관계 분석: 4차산업혁명 관련 기업을 중심으로)

  • Sohn, Kwonsang;Kwon, Ohbyung
    • The Journal of Society for e-Business Studies
    • /
    • v.26 no.3
    • /
    • pp.33-53
    • /
    • 2021
  • At the crossroads of social change caused by the spread of the Fourth Industrial Revolution and the prolonged COVID-19 pandemic, the Korean government announced the Digital New Deal policy on July 14, 2020. The Digital New Deal policy's primary goal is to create new businesses by accelerating digital transformation in the public sector and in industries around data, networks, and artificial intelligence technologies. However, in a rapidly changing social environment, information asymmetry regarding the future benefits of technology can cause differences in the public's ability to analyze the direction and effectiveness of policies, resulting in uncertainty about the practical effects of those policies. On the other hand, the media leads the formation of discourse through the communicator's role of disseminating government policies to the public, and provides knowledge about specific issues through the news. In other words, as media coverage of a particular policy increases, issue concentration increases, which also affects public decision-making. Therefore, the purpose of this study is to verify the dynamic relationship between media coverage and the stock market regarding the Korean government's Digital New Deal policy using Granger causality, impulse response functions, and variance decomposition analysis. To this end, the daily stock turnover ratio, daily price-earnings ratio, and EWMA volatility of digital technology-based companies related to the Digital New Deal policy among KOSDAQ-listed companies were set as variables. As a result, keyword search volume, the daily stock turnover ratio, and EWMA volatility each have a bi-directional Granger causal relationship with media coverage. An increase in media coverage also has a strong impact on keyword search volume for the Digital New Deal policy. In addition, the impulse response analysis on media coverage showed a sharp drop in EWMA volatility; the influence gradually increased over time and played a role in mitigating stock market volatility. 
Based on this study's findings, the amount of media coverage of the Digital New Deal policy has a significant dynamic relationship with the stock market.
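The core test behind the abstract's bi-directional findings, Granger causality, asks whether past values of one series (e.g. media coverage) improve the prediction of another (e.g. turnover ratio) beyond the latter's own past. The study itself would typically use a full VAR toolkit; the sketch below is only a minimal lag-1 illustration on synthetic data, with the function name `granger_f_stat` and all series hypothetical.

```python
import numpy as np

def granger_f_stat(y, x, lag=1):
    """Lag-1 Granger F-statistic: does x[t-1] help predict y[t]
    beyond y[t-1]? Larger F => stronger evidence that x Granger-causes y."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    Y = y[lag:]
    # Restricted model: y_t ~ const + y_{t-1}
    Xr = np.column_stack([np.ones(len(Y)), y[:-lag]])
    # Unrestricted model: y_t ~ const + y_{t-1} + x_{t-1}
    Xu = np.column_stack([Xr, x[:-lag]])

    def rss(X):
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        resid = Y - X @ beta
        return resid @ resid

    rss_r, rss_u = rss(Xr), rss(Xu)
    q = 1                        # one restriction: the extra lag of x
    dof = len(Y) - Xu.shape[1]   # residual degrees of freedom
    return ((rss_r - rss_u) / q) / (rss_u / dof)
```

Running the test in both directions (coverage on returns, returns on coverage) and comparing each F-statistic against the F(q, dof) critical value is what establishes a bi-directional relationship of the kind the abstract reports.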