• Title/Summary/Keyword: 정보관리시스템 (Information Management System)

Structural features and Diffusion Patterns of Gartner Hype Cycle for Artificial Intelligence using Social Network analysis (인공지능 기술에 관한 가트너 하이프사이클의 네트워크 집단구조 특성 및 확산패턴에 관한 연구)

  • Shin, Sunah;Kang, Juyoung
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.1
    • /
    • pp.107-129
    • /
    • 2022
  • It is important to preempt new technologies because technological competition is getting much tougher, and stakeholders continuously conduct exploration activities to secure new technologies at the right time. Gartner's Hype Cycle has significant implications for these stakeholders. The Hype Cycle is an expectation graph for new technologies that combines the technology life cycle (S-curve) with the hype level. Stakeholders such as R&D investors, CTOs (Chief Technology Officers), and technical personnel are very interested in Gartner's Hype Cycle, because high expectations for a new technology can create opportunities to maintain investment by securing the legitimacy of R&D spending. However, contrary to the industry's high interest, preceding research has faced limitations in empirical method and source data (news, academic papers, search traffic, patents, etc.). This study focused on two research questions. The first was, 'Is there a difference in the characteristics of the network structure at each stage of the Hype Cycle?' To answer it, the structural characteristics of each stage were examined through component cohesion size. The second was, 'Is there a pattern of diffusion at each stage of the Hype Cycle?' This question was addressed through the centralization index and network density. The centralization index is a variance-like concept: a higher value means that a small number of nodes are central in the network. Concentration around a few nodes implies a star network structure, which is centralized and shows better diffusion performance than a decentralized (circle) structure, because the nodes at the center of information transfer can judge useful information and deliver it to other nodes fastest. We therefore examined the out-degree and in-degree centralization indices for each stage. For this purpose, we examined the structural features of the community and the expectation diffusion patterns using social networking service (SNS) data on 'Gartner Hype Cycle for Artificial Intelligence, 2021'. Twitter data for 30 technologies (excluding four) listed in the Hype Cycle were analyzed using R (version 4.1.1) and Cyram NetMiner. From October 31, 2021 to November 9, 2021, 6,766 tweets were collected through the Twitter API and converted into relationships between a user's tweet (source) and other users' retweets (targets), yielding 4,124 edge lists for analysis. As a result, we confirmed the structural features and diffusion patterns by analyzing component cohesion size, degree centralization, and density. We found that the number of components in each stage's group increased over time while density decreased. Also, the 'Innovation Trigger' group, which corresponds to early adopters in innovation diffusion theory, had a high out-degree centralization index, whereas the other groups had higher in-degree than out-degree centralization. It can be inferred that the 'Innovation Trigger' group has the biggest influence and that diffusion gradually slows in the subsequent groups. Unlike the methods of preceding research, this study conducted network analysis using social networking service data. This is significant in that it provides an idea for expanding the range of analysis methods for Gartner's Hype Cycle in the future. In addition, applying innovation diffusion theory to the stages of the Gartner Hype Cycle for artificial intelligence can be evaluated positively, because the Hype Cycle's theoretical weakness has been repeatedly discussed. This study is also expected to provide stakeholders with a new perspective on technology investment decision-making.
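The three network quantities this abstract relies on (component counts, density, and degree centralization) are straightforward to compute on a retweet graph. Below is a minimal Python sketch using networkx; the study itself used R 4.1.1 and Cyram NetMiner, so this is an illustration rather than the authors' code, and the edge list is hypothetical. The (n-1)^2 normalization for directed degree centralization (the star-network maximum) is a standard choice assumed here, not taken from the paper.

```python
# Sketch of the abstract's measurements on a toy retweet network.
import networkx as nx

# Hypothetical edge list: (tweet author -> retweeting user).
edges = [("alice", "bob"), ("alice", "carol"), ("alice", "dave"),
         ("bob", "carol"), ("eve", "frank")]
G = nx.DiGraph(edges)

# Component cohesion: number of weakly connected components.
n_components = nx.number_weakly_connected_components(G)

# Network density: realized edges / possible edges.
density = nx.density(G)

# Freeman-style degree centralization: how close the network is to a
# perfect star, computed separately for out-degree and in-degree.
def degree_centralization(graph, degree_view):
    n = graph.number_of_nodes()
    degrees = [d for _, d in degree_view]
    max_d = max(degrees)
    # (n-1)^2 is the maximum possible sum of differences, reached by a star.
    return sum(max_d - d for d in degrees) / ((n - 1) ** 2)

out_c = degree_centralization(G, G.out_degree())
in_c = degree_centralization(G, G.in_degree())
print(n_components, density, out_c, in_c)
```

A high out-degree centralization, as reported for the 'Innovation Trigger' group, indicates a few accounts whose tweets fan out to many retweeters.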

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.107-122
    • /
    • 2017
  • Volatility in stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing, and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-variant characteristics embedded in stock market return volatility. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) into the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering phenomenon appearing in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. But since Black Monday in 1987, stock market prices have become very complex and noisy, and recent studies have started to apply artificial intelligence approaches to estimating the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH process and compares it with an MLE-based GARCH process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation are linear, polynomial, and radial. We analyzed the suggested models with the KOSPI 200 Index, which is composed of 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, 1,487 observations in total; 1,187 days were used to train the suggested GARCH models and the remaining 300 days were used as test data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecasted KOSPI 200 Index return volatility, and the MSE metric shows better results for asymmetric GARCH models such as E-GARCH and GJR-GARCH, consistent with the documented non-normal return distribution characterized by fat tails and leptokurtosis. Compared with the MLE estimation process, SVR-based GARCH models outperform the MLE methodology in KOSPI 200 Index return volatility forecasting; the polynomial kernel function shows exceptionally lower forecasting accuracy. We suggest an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility. The IVTS entry rules are as follows: if tomorrow's forecasted volatility is higher, buy volatility today; if it is lower, sell volatility today; if the forecasted direction does not change, hold the existing buy or sell position. IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic because historical volatility values themselves cannot be traded, but our simulation results are meaningful since the Korea Exchange introduced volatility futures contracts in November 2014. The trading systems with SVR-based GARCH models show higher returns than MLE-based GARCH in the test period. Profitable trade percentages of MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of SVR-based GARCH IVTS models range from 51.8% to 59.7%. MLE-based symmetric S-GARCH shows a +150.2% return and SVR-based symmetric S-GARCH a +526.4% return; MLE-based asymmetric E-GARCH shows a -72% return and SVR-based asymmetric E-GARCH a +245.6% return; MLE-based asymmetric GJR-GARCH shows a -98.7% return and SVR-based asymmetric GJR-GARCH a +126.3% return. The linear kernel function shows higher trading returns than the radial kernel function. The best performance of the SVR-based IVTS is +526.4%, against +150.2% for the MLE-based IVTS. The SVR-based GARCH IVTS also shows higher trading frequency. This study has some limitations. Our models are based solely on SVR; other artificial intelligence models should be explored for better performance. We do not consider costs incurred in the trading process, including brokerage commissions and slippage. The IVTS trading performance is not realistic since we use historical volatility values as trading objects. Exact forecasting of stock market volatility is essential in real trading as well as in asset pricing models. Further studies on other machine-learning-based GARCH models can give better information to stock market investors.
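As a rough illustration of the SVR-based estimation idea, one can regress tomorrow's squared return on the lagged squared return and a lagged volatility proxy, mirroring the GARCH(1,1) recursion sigma^2_t = omega + alpha*eps^2_{t-1} + beta*sigma^2_{t-1}, and swap the kernel via scikit-learn. This is a hedged sketch under an assumed feature layout and synthetic data, not the authors' exact pipeline.

```python
# Hedged sketch of SVR-based GARCH(1,1)-style volatility forecasting.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.01, 1487)      # stand-in for KOSPI 200 daily returns
eps2 = returns ** 2                        # squared innovations
sigma2 = np.convolve(eps2, np.ones(5) / 5, mode="same")  # crude volatility proxy

# Features mimic the GARCH(1,1) recursion:
# sigma^2_t = omega + alpha * eps^2_{t-1} + beta * sigma^2_{t-1}
X = np.column_stack([eps2[:-1], sigma2[:-1]])
y = eps2[1:]                               # target: next-day squared return

train, test = slice(0, 1187), slice(1187, None)
for kernel in ("linear", "poly", "rbf"):   # the paper's linear/polynomial/radial
    model = SVR(kernel=kernel).fit(X[train], y[train])
    forecast = model.predict(X[test])
    print(kernel, "test MSE:", np.mean((forecast - y[test]) ** 2))

# IVTS-style entry rule on the last kernel's forecasts: buy volatility when
# tomorrow's forecast rises, sell when it falls, hold when unchanged.
signal = np.sign(np.diff(forecast))        # +1 buy, -1 sell, 0 hold
```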

Mapping and Assessment of Forest Biomass Resources in Korea (우리나라 산림 바이오매스 자원량 평가 및 지도화)

  • Son, Yeong Mo;Lee, Sun Jeoung;Kim, Sowon;Hwang, Jeong Sun;Kim, Raehyun;Park, Hyun
    • Journal of Korean Society of Forest Science
    • /
    • v.103 no.3
    • /
    • pp.431-438
    • /
    • 2014
  • This study was conducted to assess forest biomass resources, which are both a carbon sink and a renewable resource, in Korea. The total forest biomass resource potential was 804 million tons, with coniferous, broadleaved, and mixed forests accounting for 265 million tons, 282 million tons, and 257 million tons, respectively. In proportion to regional forest stocks, Gangwon-do had the largest biomass potential, followed by Gyeongsangbuk-do and Gyeongsangnam-do. The woody biomass from sawn-timber byproducts of commercial harvesting was 707 thousand tons/year, and that from forest-tending byproducts was 592 thousand tons/year, giving about 1,300 thousand tons/year of potential forest biomass supply to the energy market, equivalent to 585 thousand tons of oil equivalent (toe) per year. In this study, we also developed a program (BiomassMap V2.0) for forest biomass resource mapping, built with Microsoft Office Excel, Microsoft Office Access, ArcGIS, and Microsoft Visual Basic 6.0, and using ESRI MapObjects 2.1 to handle spatial information. The program shows maps of total biomass stock, annual biomass growth on forest land in Korea, and biomass production from forest tending and commercial harvesting, and the underlying information can be managed within the program. The biomass resource map can be viewed by region and forest type for utilization purposes, so we expect the map and program to be very useful for forest managers in the near future.

A Study on Reduction Effect of Processing Wastewater by Introduction of PACS (PACS 도입에 의한 현상시스템 폐수 감소효과에 관한 연구)

  • Ko, Shin-Kwan;Han, Dong-Kyoon;Kim, Wook-Dong;Kang, Bung-Sam;Yang, Han-Jun
    • Journal of radiological science and technology
    • /
    • v.30 no.2
    • /
    • pp.167-175
    • /
    • 2007
  • The introduction of PACS (Picture Archiving and Communication System) has some positive effects. This study analyzes the relationship between conditions before and after the introduction of PACS in terms of environmental effects: developing and fixing wastewater is expected to decrease as the film-less rate increases, and this study quantifies the amount of wastewater. The target sites were the departments of diagnostic radiology of general hospitals in Seoul and Gyeonggi-do equipped with full PACS. The authors administered questionnaires on the number of projections, the number of indirect projections, the amount of film used, the number of radiological image CDs loaned, the amount of developing and fixing solution used, and the change in the amount of fixing wastewater. From the responses, we analyzed the change in the amount of developing and fixing solution used per film and the change in the amount of developing and fixing wastewater, which was expected to decrease proportionally with the introduction of PACS. To reduce deviation among the ten general hospitals in Seoul and Gyeonggi-do, we excluded the hospitals with the largest and smallest amounts of examinations, film used, developing and fixing solution, and developing and fixing wastewater, and analyzed the remaining eight, comparing data from one year before adopting PACS with data from three years after. The conclusions are as follows. 1. The number of examinations increased by 7,357.7 cases per month, while the amount of film used decreased by about 90%, from 42,774.4 to 4,181.88, after adopting PACS. 2. Three years after adopting PACS, the monthly average amount of developing solution used had decreased by 92% and the monthly average amount of fixing solution by 86%. 3. The monthly average amount of developing solution used per film increased 1.49 times, and that of fixing solution as much as three times. 4. Monthly average developing wastewater decreased by 88% and fixing wastewater by 87%. 5. Monthly average developing wastewater per film increased 3.77 times and fixing wastewater per film 3.85 times. Although the amount of film used and the amount of developing and fixing wastewater, driven by the reduction in developing and fixing solution, decreased on the whole after the introduction of PACS, they did not decrease proportionally; moreover, the amounts of developing and fixing solution used and of wastewater per film increased. This means that the expectation of environmental improvement differs from the actual outcome.


Analyzing the Effect of Online media on Overseas Travels: A Case study of Asian 5 countries (해외 출국에 영향을 미치는 온라인 미디어 효과 분석: 아시아 5개국을 중심으로)

  • Lee, Hea In;Moon, Hyun Sil;Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.53-74
    • /
    • 2018
  • Since South Korea's economy is heavily dependent on overseas markets, the tourism industry is considered very important to the national economy, improving the balance of payments and increasing income and employment. Accordingly, the need for more accurate forecasting of tourism demand has been raised to promote the industry. Related research has used economic variables such as exchange rates and income as factors influencing tourism demand. As information technology has spread, some researchers have also analyzed the effect of media on tourism demand, showing that media have a considerable influence on travelers' decision making, such as choosing an outbound destination. Furthermore, with online information search and two-way communication in social media now widely available, up-to-date travel information can be obtained more quickly than before. Information in online media such as blogs naturally creates a word-of-mouth effect through the sharing of useful information, known as eWOM. Like other service industries, the tourism industry is characterized by the difficulty of evaluating its value before it is experienced directly, so most travelers search for information in advance from various sources to reduce the perceived risk of a destination and can therefore also be influenced by online media such as online news. In this study, we propose that the number of online media postings, which generates word-of-mouth effects, may affect the number of outbound travelers. We divided online media into public and private media according to their characteristics, selecting online news as public media and blogs, one of the most popular social media for tourist information, as private media. Based on previous studies of eWOM effects in online news and blogs, we analyzed the relationship between eWOM volume and outbound tourism demand through a panel model. To this end, we collected data on the number of Korean outbound travelers from 2007 to 2015 provided by the Korea Tourism Organization. According to these statistics, the destinations with the highest Korean outbound demand are China, Japan, Thailand, Hong Kong, and the Philippines, which were selected as the dependent variables in this study. To measure eWOM volume, we collected online news articles and blog postings for the same period from Naver, the largest portal site in South Korea. A panel model was established to analyze the effect of online media on Korean outbound travel demand and to identify significant differences in the influence of online media across periods and countries. The results can be summarized as follows. First, the impact of online news and blog eWOM on the number of outbound travelers was significant: the posting volumes in both the month including the departure date and the three months before departure were found to have an effect, showing that online news and blogs are online media with a significant influence on outbound tourism demand. Next, we found that an increased eWOM volume in online news has a negative effect on departures, while an increase in blog postings has a positive effect; the country-specific models show the same pattern. This paper shows that online media can be used as a new variable in tourism demand forecasting by examining the influence of the eWOM effect of online media. We also found that both social media and news media play an important role in predicting and managing Korean tourism demand, and that the influence of the two media differs across countries.
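The panel design described (monthly outbound travelers per destination country regressed on news and blog posting volumes in the departure month and the preceding months) can be sketched as a country fixed-effects regression. The sketch below uses statsmodels with dummy-coded country effects; the column names, lag depth, and synthetic data are illustrative assumptions, not the paper's specification.

```python
# Hedged sketch of a country fixed-effects panel regression of outbound
# travelers on eWOM volume (online news and blog postings).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
df = pd.DataFrame({
    "country": ["China", "Japan"] * 12,            # hypothetical monthly panel
    "travelers": rng.integers(10_000, 500_000, 24),
    "news_posts": rng.integers(100, 5_000, 24),
    "blog_posts": rng.integers(100, 5_000, 24),
})

# eWOM in the departure month and in the month before it (the paper also
# examines effects up to three months ahead of departure).
df["news_lag1"] = df.groupby("country")["news_posts"].shift(1)
df["blog_lag1"] = df.groupby("country")["blog_posts"].shift(1)

model = smf.ols(
    "np.log(travelers) ~ np.log(news_posts) + np.log(blog_posts)"
    " + news_lag1 + blog_lag1 + C(country)",       # C(country): fixed effects
    data=df.dropna(),
).fit()
print(model.summary())
```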

Improving the Accuracy of Document Classification by Learning Heterogeneity (이질성 학습을 통한 문서 분류의 정확성 향상 기법)

  • Wong, William Xiu Shun;Hyun, Yoonjin;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.3
    • /
    • pp.21-44
    • /
    • 2018
  • In recent years, the rapid development of Internet technology and the popularization of smart devices have produced massive amounts of text data, created and distributed through various media platforms such as the World Wide Web, Internet news feeds, microblogs, and social media. This enormous amount of easily obtained information, however, lacks organization, which has drawn the interest of many researchers and created demand for professionals capable of classifying relevant information; hence text classification was introduced. Text classification is a challenging task in modern data analysis that assigns a text document to one or more predefined categories or classes. Various techniques are available, such as K-Nearest Neighbors, the Naïve Bayes algorithm, Support Vector Machines, Decision Trees, and Artificial Neural Networks. However, when dealing with huge amounts of text data, model performance and accuracy become a challenge: depending on the type of words used in the corpus and the type of features created for classification, the performance of a text classification model can vary. Most previous attempts have proposed a new algorithm or modified an existing one, and this line of research can be said to have reached its limits for further improvement. In this study, rather than proposing or modifying an algorithm, we focus on modifying the use of the data. It is widely known that classifier performance is influenced by the quality of the training data upon which the classifier is built. Real-world datasets usually contain noise, which can affect the decisions made by classifiers built from them. We consider that data from different domains, that is, heterogeneous data, may have noise characteristics that can be exploited in the classification process. Machine learning algorithms build classifiers under the assumption that the characteristics of the training data and the target data are the same or very similar. However, for unstructured data such as text, features are determined by the vocabulary of the documents, so if the viewpoints of the training data and the target data differ, the features may differ between the two. We attempt to improve classification accuracy by strengthening the robustness of the document classifier through artificially injecting noise into the process of constructing it. Data coming from various sources are likely to be formatted differently, which causes difficulties for traditional machine learning algorithms, as they are not designed to recognize different types of data representation at once and combine them into the same generalization. Therefore, to utilize heterogeneous data in the learning process of the document classifier, we apply semi-supervised learning. However, unlabeled data may degrade the performance of the document classifier, so we further propose a method called the Rule Selection-Based Ensemble Semi-Supervised Learning Algorithm (RSESLA) to select only the documents that contribute to improving the classifier's accuracy. RSESLA creates multiple views by manipulating features using different types of classification models and different types of heterogeneous data, and the most confident classification rules are selected and applied for the final decision. In this paper, three types of real-world data sources were used: news, Twitter, and blogs.
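RSESLA itself is the authors' ensemble method, but its core move, keeping only the unlabeled documents the current classifier is most confident about, resembles confidence-thresholded self-training. The following scikit-learn sketch illustrates that selection step under a hypothetical corpus and threshold; it is not the paper's algorithm.

```python
# Hedged sketch of confidence-thresholded semi-supervised text classification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.semi_supervised import SelfTrainingClassifier

# Hypothetical heterogeneous corpus: labeled news plus unlabeled tweets/blogs.
texts = ["stocks fell sharply today", "the match ended in a draw",
         "central bank raises interest rates", "the striker scored twice",
         "quarterly earnings beat forecasts", "the coach praised the defense"]
labels = [0, 1, 0, 1, -1, -1]   # -1 marks unlabeled documents (sklearn convention)

clf = make_pipeline(
    TfidfVectorizer(),
    # Pseudo-label only documents predicted with at least 90% confidence,
    # so low-confidence (noisy) unlabeled documents are never folded in.
    SelfTrainingClassifier(LogisticRegression(max_iter=1000), threshold=0.9),
)
clf.fit(texts, labels)
print(clf.predict(["bank profits rise", "the goalkeeper saved a penalty"]))
```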

Analysis of PM2.5 Distribution Contribution using GIS Spatial Interpolation - Focused on Changwon-si Urban Area - (GIS 공간내삽법을 활용한 PM2.5 분포 특성 분석 - 창원시 도시지역을 대상으로 -)

  • MUN, Han-Sol;SONG, Bong-Geun;SEO, Kyeong-Ho;KIM, Tae-Hyeung;PARK, Kyung-Hun
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.23 no.2
    • /
    • pp.1-20
    • /
    • 2020
  • The purpose of this study was to analyze the spatial and temporal distribution characteristics of PM2.5 in urban areas of Changwon-si, identify the causes of PM2.5 by comparing land-use characteristics, and suggest directions for reduction measures. As basic data, hourly averages from September 2017 to August 2018 from the Airpro network, which has measurement points at kindergartens, elementary schools, and some middle and high schools in Changwon-si, were used. Monthly and time-slot distribution maps were constructed using the inverse distance weighting (IDW) method among the spatial interpolation methods of GIS, and spatial and temporal PM2.5 distribution characteristics were identified from them. First, to verify the accuracy of the Airpro data, its correlation with the AirKorea data managed by the Ministry of Environment was checked; R² was 0.75-0.86, a very high correlation, so the data were judged suitable for the study. In the monthly analysis, January was the highest and August the lowest. In the time-slot analysis, the commuting hours of 06:00-09:00 were the highest and the activity hours of 09:00-18:00 the lowest. By administrative district, Sangnam-dong, Happo-dong, and Myeonggok-dong had the most severe PM2.5 and Hoeseong-dong the lowest. Analysis of land-use characteristics by administrative area confirmed that the proportions of traffic and commercial area are high in the severe PM2.5 areas. In conclusion, the results of this study can serve as basic data for understanding PM2.5 distribution characteristics in Changwon-si, and the severe areas and reduction directions derived here can support more effective policies than before.
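IDW estimates the value at an unmeasured point as a distance-weighted average of the station readings: z(x0) = sum_i(w_i * z_i) / sum_i(w_i), with w_i = 1 / d(x0, x_i)^p. A minimal numpy sketch follows; the station coordinates, readings, and power p = 2 are illustrative, not the study's settings.

```python
# Hedged sketch of inverse distance weighting (IDW) interpolation.
import numpy as np

def idw(stations, values, grid_points, power=2.0, eps=1e-12):
    """Interpolate station `values` onto `grid_points`.

    stations:    (n, 2) known (x, y) locations
    values:      (n,)   PM2.5 readings at those locations
    grid_points: (m, 2) locations to estimate
    """
    # Pairwise distances between every grid point and every station.
    d = np.linalg.norm(grid_points[:, None, :] - stations[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)           # eps avoids division by zero
    return (w * values).sum(axis=1) / w.sum(axis=1)

stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
pm25 = np.array([35.0, 42.0, 28.0, 50.0])  # hypothetical ug/m3 readings
grid = np.array([[0.5, 0.5], [0.1, 0.9]])
print(idw(stations, pm25, grid))
```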

Understanding the Legal Structure of German Human Gene Testing Act (GenDG) (독일 유전자검사법의 규율 구조 이해 - 의료 목적 유전자검사의 문제를 중심으로 -)

  • Kim, Na-Kyoung
    • The Korean Society of Law and Medicine
    • /
    • v.17 no.2
    • /
    • pp.85-124
    • /
    • 2016
  • The German Human Gene Testing Act (GenDG) starts from the characteristic features of gene testing, that is, a dualistic structure consisting of analysis on the one side and interpretation on the other. The linguistic distinction between 'testing', 'analysis', and 'judgment' in the act is a fine example. Another important basis of the regulation is the ideological purpose of the law, namely informational autonomy. The normative texts as such and this founding principle are the basis for the classification of testing types. In particular, gene testing for medical purposes is classified into testing for diagnostic purposes and testing for predictive purposes. However, these two types are not always clearly differentiated, because the predictive value of testing is common to both. In the legal regulation of gene testing it is therefore important to manage the uncertainty and subjectivity inherent in gene analysis and judgment. GenDG sets up a system for ensuring the quality of analysis, and the GEKO (Genetic Diagnostics Commission), based on Section 23 of GenDG, concretizes the criteria of validity through guidelines. In the case of gene testing for medical purposes it is also very important to establish a system that ensures the procedural rationality of interpretation. The interpretation of analysis results has a wide spectrum because of the constant development of technology and the different understandings of the different actors who perform gene testing. The process should therefore include communication with patients so that they can understand the meaning of gene testing and plan their lives accordingly. GenDG regulates the process of genetic counselling very precisely, and the GEKO further concretizes this regulation. Such regulation in GenDG seems very suggestive for Korean legal policy concerning gene testing.


Resolving the 'Gray sheep' Problem Using Social Network Analysis (SNA) in Collaborative Filtering (CF) Recommender Systems (소셜 네트워크 분석 기법을 활용한 협업필터링의 특이취향 사용자(Gray Sheep) 문제 해결)

  • Kim, Minsung;Im, Il
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.137-148
    • /
    • 2014
  • Recommender systems have become one of the most important technologies in e-commerce these days. The ultimate reason to shop online, for many consumers, is to reduce the effort of information search and purchase, and recommender systems are a key technology serving these needs. Many past studies of recommender systems have been devoted to developing and improving recommendation algorithms, and collaborative filtering (CF) is known to be the most successful. Despite its success, however, CF has several shortcomings, such as the cold-start, sparsity, and gray sheep problems. To generate recommendations, ordinary CF algorithms require evaluations or preference information directly from users; for new users who have no such information, CF cannot come up with recommendations (the cold-start problem). As the numbers of products and customers increase, the scale of the data grows exponentially and most data cells are empty; this sparse dataset makes computation for recommendation extremely hard (the sparsity problem). Since CF is based on the assumption that there are groups of users sharing common preferences or tastes, it becomes inaccurate when there are many users with rare and unique tastes (the gray sheep problem). This study proposes a new algorithm that utilizes social network analysis (SNA) techniques to resolve the gray sheep problem. We use 'degree centrality' in SNA, the number of direct links to and from a node, to identify users with unique preferences (gray sheep). In a network of users connected through common preferences or tastes, those with unique tastes have fewer links to other users (nodes) and are isolated from them, so gray sheep can be identified by calculating the degree centrality of each node. We divide the dataset into two groups, gray sheep and others, based on the users' degree centrality, and then apply different similarity measures and recommendation methods to the two datasets. In more detail, the algorithm is as follows (a code sketch appears after this abstract). Step 1: Convert the initial data, a two-mode network (user to item), into a one-mode network (user to user). Step 2: Calculate the degree centrality of each node and separate the nodes whose degree centrality is lower than a pre-set threshold; the threshold is determined by simulation so that the accuracy of CF on the remaining dataset is maximized. Step 3: Apply an ordinary CF algorithm to the remaining dataset. Step 4: Since the separated dataset consists of users with unique tastes, an ordinary CF algorithm cannot generate recommendations for them, so a 'popular item' method is used for these users. The F measures of the two datasets, weighted by their numbers of nodes, are summed to give the final performance metric. To test the performance improvement of the new algorithm, an empirical study was conducted using a publicly available dataset, the MovieLens data from the GroupLens research team: 100,000 evaluations by 943 users of 1,682 movies. The proposed algorithm was compared with an ordinary CF algorithm using 'best-N-neighbors' and cosine similarity. The empirical results show that the F measure improved by about 11% on average when the proposed algorithm was used. Past studies to improve CF performance have typically used additional information beyond users' evaluations, such as demographic data, and some have applied SNA techniques as a new similarity metric; this study is novel in that it uses SNA to separate the dataset. It shows that CF performance can be improved, without any additional information, when SNA techniques are applied as proposed. The study has several theoretical and practical implications. It empirically shows that the characteristics of a dataset can affect the performance of CF recommender systems, which helps researchers understand the factors affecting CF performance, and it opens a door for future studies applying SNA to CF to analyze dataset characteristics. In practice, it provides guidelines for improving the performance of CF recommender systems with a simple modification.
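Steps 1 and 2 of the algorithm map directly onto a bipartite projection followed by a degree-centrality cut. Below is a minimal networkx sketch; the ratings, names, and threshold value are illustrative (in the paper the threshold is tuned by simulation to maximize CF accuracy).

```python
# Hedged sketch of Steps 1-2: project the two-mode user-item network to a
# one-mode user-user network and split off low-centrality users (gray sheep).
import networkx as nx
from networkx.algorithms import bipartite

# Hypothetical two-mode data: which user rated which movie.
ratings = [("u1", "m1"), ("u1", "m2"), ("u2", "m1"), ("u2", "m2"),
           ("u3", "m2"), ("u3", "m3"), ("u4", "m9")]   # u4 has a unique taste
B = nx.Graph(ratings)
users = {u for u, _ in ratings}

# Step 1: two-mode (user-item) -> one-mode (user-user); users are linked
# when they have rated at least one common item.
U = bipartite.projected_graph(B, users)

# Step 2: users below the degree-centrality threshold are gray sheep.
centrality = nx.degree_centrality(U)
THRESHOLD = 0.2                    # pre-set here; tuned by simulation in the paper
gray_sheep = [u for u, c in centrality.items() if c < THRESHOLD]
others = [u for u in users if u not in gray_sheep]

# Step 3 would apply ordinary CF to `others`; Step 4 would recommend
# popular items to `gray_sheep`.
print(sorted(gray_sheep), sorted(others))
```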

  • An Empirical Study on Influencing Factors of Switching Intention from Online Shopping to Webrooming (온라인 쇼핑에서 웹루밍으로의 쇼핑전환 의도에 영향을 미치는 요인에 대한 연구)

    • Choi, Hyun-Seung;Yang, Sung-Byung
      • Journal of Intelligence and Information Systems
      • /
      • v.22 no.1
      • /
      • pp.19-41
      • /
      • 2016
    • Recently, the proliferation of mobile devices such as smartphones and tablet personal computers and the development of information and communication technologies (ICT) have led to a big shift from single-channel to multi-channel shopping. With the emergence of a 'smart' group of consumers who want to shop in more reasonable and convenient ways, the boundaries dividing online and offline shopping have collapsed and blurred more than ever before, and there is now fierce competition between online and offline channels. Ever since the emergence of online shopping, a major type of multi-channel shopping has been 'showrooming', where consumers visit offline stores to examine products before buying them online. However, with the growing use of smart devices and the counterattack of offline retailers through omni-channel marketing strategies, one of the latest shopping trends is 'webrooming', where consumers visit online stores to examine products before buying them offline; this has become a threat to online retailers. In this situation, although it is very important to examine the factors influencing the switch from online shopping to webrooming, most prior studies have focused mainly on single- or multi-channel shopping patterns. This study therefore thoroughly investigated the factors influencing customers' switch from online shopping to webrooming, in terms of both the 'search' and 'purchase' processes, by applying a push-pull-mooring (PPM) framework. To test the research model, 280 individual samples were gathered from undergraduate and graduate students with actual webrooming experience. The structural equation model (SEM) test revealed that the 'pull' effect on webrooming intention is stronger than the 'push' or 'mooring' effects, proving a significant relationship between 'attractiveness of webrooming' and 'webrooming intention'. In addition, both 'perceived risk of online search' and 'perceived risk of online purchase' significantly affect 'distrust of online shopping', and both 'perceived benefit of multi-channel search' and 'perceived benefit of offline purchase' have significant effects on 'attractiveness of webrooming'. Furthermore, 'online purchase habit' is the only factor that leads to 'online shopping lock-in'. The theoretical implications of the study are as follows. First, by examining the multi-channel shopping phenomenon from the perspective of 'shopping switching' from online shopping to webrooming, this study complements the limits of the 'channel switching' perspective, represented by multi-channel free-riding studies that focus merely on customers' movements from one channel to another. While studies with a channel-switching perspective consider only one type of multi-channel shopping, where consumers move from one particular channel to others, a shopping-switching perspective can comprehensively investigate how consumers choose and navigate among diverse single- and multi-channel shopping alternatives. Only the limited switching behavior from online shopping to webrooming was examined here, but the results should explain various phenomena more comprehensively from the shopping-switching perspective. Second, this study extends the scope of the push-pull-mooring framework, which is commonly used in marketing research to explain consumers' product-switching behaviors; through it, more diverse shopping-switching behaviors can be examined in future research, for which this study can serve as a stepping stone. One of the most important practical implications is that the study may help single- and multi-channel retailers develop more specific customer strategies by revealing the factors that influence webrooming intention from online shopping. For example, online single-channel retailers can ease distrust of online shopping, and thus prevent churn, by reducing the perceived risk of online search and purchase, while offline retailers can increase the attractiveness of webrooming by helping customers perceive the benefits of multi-channel search or offline purchase. Although this study focused only on customers switching from online shopping to webrooming, the results can be extended to various shopping-switching behaviors embedded in single- and multi-channel environments, such as showrooming and mobile shopping.
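The structural model described (push, pull, and mooring constructs predicting webrooming intention, each measured by survey items) can be written in lavaan-style syntax. A minimal sketch with the Python semopy package follows; the construct and indicator names and the CSV file are hypothetical stand-ins for the paper's actual scales and data.

```python
# Hedged sketch of a PPM-style structural equation model with semopy.
import pandas as pd
from semopy import Model

desc = """
# Measurement model: latent constructs measured by survey items (hypothetical).
Distrust   =~ dis1 + dis2 + dis3
Attractive =~ att1 + att2 + att3
LockIn     =~ lock1 + lock2 + lock3
Intention  =~ int1 + int2 + int3

# Structural model: push (distrust), pull (attractiveness), and
# mooring (lock-in) effects on webrooming intention.
Intention ~ Distrust + Attractive + LockIn
"""

data = pd.read_csv("webrooming_survey.csv")   # hypothetical item-score file
model = Model(desc)
model.fit(data)
print(model.inspect())                        # path estimates and significance
```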

