• Title/Summary/Keyword: online-order

Search Results: 1,704

A Study on Perceived Quality affecting the Service Personal Value in the On-off line Channel - Focusing on the moderate effect of the need for cognition - (온.오프라인 채널에서 지각된 품질이 서비스의 개인가치에 미치는 영향에 관한 연구 -인지욕구의 조정효과를 중심으로-)

  • Sung, Hyung-Suk
    • Journal of Distribution Research
    • /
    • v.15 no.3
    • /
    • pp.111-137
    • /
    • 2010
  • The basic purpose of this study is to investigate perceived quality and service personal value affecting the result of the long-term relationship between service buyers and suppliers. This research presented a conceptual model (perceived quality affecting the service personal value, with the moderating effect of NFC) in the on-off line channel, and then proposed the research model based on prior research and studies about relationships among components of service. Data were gathered from respondents in the education service market. For this study, data were analyzed with AMOS 7.0. We integrate the literature on services marketing with research on personal values and perceived quality. The SERPVAL scale presented here allows for the creation of a common ground for assessing service personal values, giving a clear understanding of the key value dimensions behind service choice and usage. It will lead to a focus of future research in services marketing, extending knowledge in the field and stimulating further empirical research on service personal values. At the managerial level, as a tool the SERPVAL scale should allow practitioners to evaluate and improve the value of a service, and consequently, to define strategies and actions to address services for customers based on their fundamental personal values. Through qualitative and empirical research, we find that the service quality construct conforms to the structure of a second-order factor model that ties service quality perceptions to distinct and actionable dimensions: outcome, interaction, and environmental quality. In turn, each has two subdimensions that define the basis of service quality perceptions. The authors further suggest that for each of these subdimensions to contribute to improved service quality perceptions, the quality received by consumers must be perceived to be reliable, responsive, and empathetic. Although service personal value may be found in research that explores individual values and their consequences for consumer behavior, there is no established operationalization of a SERPVAL scale. The absence of an established scale, duly adapted to understand and analyze personal values behind service usage, exposes the need for a measurement scale with such a purpose. This need has to be rooted, however, in a conceptualization of the construct being scaled. Service personal values can be defined as a customer's overall assessment of the use of a service based on the perception of what is achieved in terms of his own personal values. As consumer behaviors serve to show an individual's values, the use of a service can also be a way to fulfill and demonstrate consumers' personal values. In this sense, a service can provide more to the customer than its concrete and abstract attributes at both the attribute and the quality levels, and more than its functional consequences at the value level. Both the values and services literatures agree that personal value is the highest-level concept, followed by instrumental values, attitudes and finally by product attributes. Purchasing behaviors are agreed to be the end result of these concepts' interaction, with personal values taking a major role in the final decision process. From both consumers' and practitioners' perspectives, values are extremely relevant, as they are desirable goals that serve as guiding principles in people's lives.
While building on previous research, we propose to assess service personal values through three broad groups of individual dimensions: at the self-oriented level, we use (1) service value to peaceful life (SVPL), and at the social-oriented level, we use (2) service value to social recognition (SVSR) and (3) service value to social integration (SVSI). Service value to peaceful life is our first dimension. This dimension emerged as a combination of values coming from the RVS scale, a scale built specifically to assess general individual values. If a service promotes a pleasurable life, brings or improves tranquility, safety and harmony, then its user recognizes the value of this service. Generally, this service can improve the user's pleasure of life, since it protects or defends the consumer from threats to life or pressures on it. Building upon both the LOV scale, a scale built specifically to assess consumer values, and the RVS scale for individual values, we develop the other two dimensions: SVSR and SVSI. The roles of social recognition and social integration in improving service personal value have been largely neglected. Social recognition derives its outcome utility from its predictive utility. When applying this underlying belief to our second dimension, SVSR, we assume that people use a service while taking into consideration the content of what is delivered. Individuals consider whether the service aids in gaining respect from others, social recognition and status, as well as whether it allows achieving a more fulfilled and stimulating life, which might then be revealed to others. People also tend to engage in behavior that receives social recognition and to avoid behavior that leads to social disapproval, and this contributes to an individual's social integration. This leads us to the third dimension, SVSI, which is based on the fact that if the consumer perceives that a service strengthens friendships, provides the possibility of becoming more integrated in the group, or promotes better relationships at the social, professional or family levels, then the service will contribute to social integration, and naturally the individual will recognize personal value in the service. Most of the research on business values deals with individual values. However, to our knowledge, no study has dealt with assessing overall personal values as well as their dimensions in a service context. Our final results show that the scales adapted from the Schwartz list were excluded. A possible explanation is that although Schwartz builds on Rokeach's work in order to explore individual values, his dimensions might be especially focused on analyzing societal values. As we are looking for individual dimensions, this might explain why the values inspired by the Schwartz list were excluded from the model. The hierarchical structure of the final scale presented in this paper also has theoretical implications. Although we cannot claim to definitively capture the dimensions of service personal values, we believe that we come close to capturing these overall evaluations because the second-order factor extracts the underlying commonality among dimensions. In addition to obtaining respondents' evaluations of the dimensions, the second-order factor model captures the common variance among these dimensions, reflecting the respondents' overall assessment of service personal values.
Towards this end, we expect that the service personal values conceptualization and measurement scale presented here contribute to both the business values literature and the service marketing field, allowing for the delineation of strategies for adding value to services. This new scale also presents managerial implications. The SERPVAL dimensions give some guidance on how to better pursue a highly service-oriented business strategy. Indeed, the SERPVAL scale can be used for benchmarking purposes, as this scale can be used to identify whether or not a firm's marketing strategies are consistent with consumers' expectations. Managerial assessment of the personal values of a service might be extremely important because it allows managers to better understand what customers want or value. Thus, this scale allows us to identify what services are really valuable to the final consumer, providing knowledge for making choices regarding which services to include. Traditional approaches have focused their attention on service attributes (as quality) and service consequences (as service value), but personal values may be an important set of variables to be considered in understanding what attracts consumers to a certain service. By using the SERPVAL scale to assess the personal values associated with a service's usage, managers may better understand the reasons behind services' usage, so that they may handle them more efficiently. While testing nomological validity, our empirical findings demonstrate that the three SERPVAL dimensions are positively and significantly associated with satisfaction. Additionally, while service value to social integration is related only to loyalty, service value to peaceful life is associated with both loyalty and repurchase intent. It is also interesting and surprising that service value to social recognition appears not to be significantly linked with loyalty and repurchase intent. A possible explanation is that no mobile service provider has yet emerged in the market as a luxury provider. All of the Portuguese providers are still trying to capture market share by means of low-end pricing. This research has implications for consumers as well. As more companies seek to build relationships with their customers, consumers are easily able to examine whether these relationships provide real value or not to their own lives. The selection of a strategy for a particular service depends on its customers' personal values. Being highly customer-oriented means having a strong commitment to customers, trying to create customer value and understanding customer needs. Enhancing service distinctiveness in order to provide a peaceful life, increase social recognition and gain better social integration are all possible strategies that companies may pursue, but the one to pursue depends on the outstanding personal values held by the service customers. Data were gathered from 284 respondents in the Korean discount store and online shopping mall market. This research proposed three hypotheses on six latent variables and tested them through structural equation modeling. Six alternative measurements were compared through statistical significance tests of the six paths of the research model and the overall fit of the structural equation model, and the results were satisfactory. Perceived quality more positively influences service personal value when NFC is high than when NFC is low in the off-line market.
The results of the study indicate that service quality is properly modeled as an antecedent of service personal value. We consider the research and managerial implications of the study and its limitations. In sum, by knowing the dimensions a consumer takes into account when choosing a service, a better understanding of purchasing behaviors may be realized, guiding managers toward customers' expectations. By defining strategies and actions that address potential problems with the service personal values, managers might ultimately influence their firm's performance. We expect to contribute to both the business values and service marketing literatures through the development of the service personal value scale. At a time when marketing researchers are challenged to provide research with practical implications, it is also believed that this framework may be used by managers to pursue service-oriented business strategies while taking into consideration what customers value.
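For readers unfamiliar with how a moderating effect such as the NFC effect reported above is typically tested, the following is a minimal sketch in Python, not the authors' AMOS 7.0 structural equation analysis: it estimates an interaction between perceived quality and need for cognition (NFC) on service personal value with ordinary least squares, and the file and column names (quality, nfc, spv, channel) are hypothetical.

```python
# Minimal sketch of a moderation test (perceived quality x need for cognition),
# illustrating the idea behind the NFC moderating effect reported above.
# NOT the authors' AMOS 7.0 model; file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_responses.csv")  # hypothetical survey file

# Interaction term: a significant quality:nfc coefficient indicates that the
# effect of perceived quality on service personal value depends on NFC.
model = smf.ols("spv ~ quality * nfc", data=df).fit()
print(model.summary())

# Separate estimates per channel (on-line vs. off-line), mirroring the
# multi-group comparison described in the abstract.
for channel, group in df.groupby("channel"):
    m = smf.ols("spv ~ quality * nfc", data=group).fit()
    print(channel, m.params["quality:nfc"], m.pvalues["quality:nfc"])
```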


A Study on Intelligent Value Chain Network System based on Firms' Information (기업정보 기반 지능형 밸류체인 네트워크 시스템에 관한 연구)

  • Sung, Tae-Eung;Kim, Kang-Hoe;Moon, Young-Su;Lee, Ho-Shin
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.3
    • /
    • pp.67-88
    • /
    • 2018
  • Until recently, as we recognize the significance of sustainable growth and competitiveness of small-and-medium sized enterprises (SMEs), governmental support for tangible resources such as R&D, manpower, funds, etc. has mainly been provided. However, it is also true that concerns about the inefficiency of support systems, such as underestimated or redundant support, have been raised because there exist conflicting policies in terms of the appropriateness, effectiveness and efficiency of business support. From the perspective of the government or a company, we believe that, due to the limited resources of SMEs, technology development and capacity enhancement through collaboration with external sources is the basis for creating competitive advantage for companies, and we also emphasize value creation activities for it. This is why value chain network analysis is necessary in order to analyze inter-company deal relationships along a series of value chains and to visualize the results by establishing knowledge ecosystems at the corporate level. There exists the Technology Opportunity Discovery (TOD) system, which provides information on relevant products or the technology status of companies with patents through retrievals over patent, product, or company name, as well as CRETOP and KISLINE, which both allow viewing company (financial) information and credit information; however, there exists no online system that provides a list of similar (competitive) companies based on value chain network analysis, or information on potential clients or demanders that could have business deals in the future. Therefore, we focus on the "Value Chain Network System (VCNS)", a support partner for planning corporate business strategy developed and managed by KISTI, and investigate the types of embedded network-based analysis modules, the databases (D/Bs) that support them, and how to utilize the system efficiently. Further, we explore the function of network visualization in the intelligent value chain analysis system, which becomes the core information for understanding the industrial structure and for supporting a company's new product development. In order for a company to have competitive superiority over other companies, it is necessary to identify who the competitors are in terms of patents or products currently being produced, and searching for similar companies or competitors by each type of industry is the key to securing competitiveness in the commercialization of the target company. In addition, transaction information, which reflects business activity between companies, plays an important role in providing information regarding potential customers when both parties enter similar fields together. Identifying a competitor at the enterprise or industry level by using a network map based on such inter-company sales information can be implemented as a core module of value chain analysis. The Value Chain Network System (VCNS) combines the concepts of value chain and industrial structure analysis with corporate information collected to date, so that it can grasp not only the market competition situation of individual companies but also the value chain relationships of a specific industry.
Especially, it can be useful as an information analysis tool at the corporate level for purposes such as identification of industry structure, identification of competitor trends, analysis of competitors, locating suppliers (sellers) and demanders (buyers), identifying industry trends by item, finding promising items, finding new entrants, finding core companies and items by value chain, and recognizing the patents of corresponding companies, etc. In addition, based on the objectivity and reliability of the analysis results from transaction deal information and financial data, it is expected that the value chain network system will be utilized for various purposes such as information support for business evaluation, R&D decision support, and mid-term or short-term demand forecasting, in particular for more than 15,000 member companies in Korea and for employees in R&D service sectors, government-funded research institutes, and public organizations. In order to strengthen the business competitiveness of companies, technology, patent and market information have been provided so far mainly by government agencies and private research-and-development service companies. This service has been presented in the form of patent analysis (mainly rating and quantitative analysis) or market analysis (market prediction and demand forecasting based on market reports). However, there was a limitation in solving the lack of information, which is one of the difficulties that firms in Korea often face in the stage of commercialization. In particular, it is much more difficult to obtain information about competitors and potential candidates. In this study, the real-time value chain analysis and visualization service module based on the proposed network map and the data at hand is presented together with the expected market share, estimated sales volume, and contact information (which implies potential suppliers for raw materials / parts, and potential demanders for complete products / modules). In future research, we intend to carry out in-depth research to further investigate the indices of competitive factors through the participation of research subjects, to newly develop competitive indices for competitors or substitute items, and additionally to apply data mining techniques and algorithms for improving the performance of VCNS.
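As a rough illustration of the kind of network-based module described above, and not the actual VCNS implementation, the sketch below builds a directed buyer-seller graph from transaction records and flags two companies as potential competitors when they sell to largely the same buyers; the record format, firm names, and similarity threshold are assumptions.

```python
# Illustrative sketch of a value chain network built from inter-company
# transaction records; NOT the KISTI VCNS implementation. The input format
# (seller, buyer, amount) and the similarity threshold are assumptions.
import networkx as nx
from itertools import combinations

transactions = [
    ("SupplierA", "MakerX", 120),
    ("SupplierA", "MakerY", 80),
    ("SupplierB", "MakerX", 60),
    ("MakerX", "RetailerZ", 200),
    ("MakerY", "RetailerZ", 150),
]

G = nx.DiGraph()
for seller, buyer, amount in transactions:
    G.add_edge(seller, buyer, weight=amount)

# Two firms that sell to largely the same set of buyers occupy a similar
# position in the value chain and can be treated as potential competitors.
def customer_overlap(g, a, b):
    ca, cb = set(g.successors(a)), set(g.successors(b))
    return len(ca & cb) / len(ca | cb) if ca | cb else 0.0

sellers = [n for n in G if G.out_degree(n) > 0]
for a, b in combinations(sellers, 2):
    sim = customer_overlap(G, a, b)
    if sim >= 0.5:  # assumed threshold
        print(f"potential competitors: {a} vs {b} (overlap {sim:.2f})")
```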

Methods for Integration of Documents using Hierarchical Structure based on the Formal Concept Analysis (FCA 기반 계층적 구조를 이용한 문서 통합 기법)

  • Kim, Tae-Hwan;Jeon, Ho-Cheol;Choi, Joong-Min
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.3
    • /
    • pp.63-77
    • /
    • 2011
  • The World Wide Web is a very large distributed digital information space. From its origins in 1991, the web has grown to encompass diverse information resources such as personal home pages, online digital libraries and virtual museums. Some estimates suggest that the web currently includes over 500 billion pages in the deep web. The ability to search and retrieve information from the web efficiently and effectively is an enabling technology for realizing its full potential. With powerful workstations and parallel processing technology, efficiency is not a bottleneck. In fact, some existing search tools sift through gigabyte-size precompiled web indexes in a fraction of a second. But retrieval effectiveness is a different matter. Current search tools retrieve too many documents, of which only a small fraction are relevant to the user query. Furthermore, the most relevant documents do not necessarily appear at the top of the query output order. Also, current search tools cannot retrieve the documents related to a retrieved document from the gigantic number of documents. The most important problem for many current search systems is to increase the quality of search: that is, to provide related documents and to keep the number of unrelated documents in the search results as low as possible. To address this problem, CiteSeer proposed ACI (Autonomous Citation Indexing) of the articles on the World Wide Web. A "citation index" indexes the links between articles that researchers make when they cite other articles. Citation indexes are very useful for a number of purposes, including literature search and analysis of the academic literature. In this work, references contained in academic articles are used to give credit to previous work in the literature and provide a link between the "citing" and "cited" articles. A citation index indexes the citations that an article makes, linking the articles with the cited works. Citation indexes were originally designed mainly for information retrieval. The citation links allow navigating the literature in unique ways. Papers can be located independent of language and of the words in the title, keywords or document. A citation index allows navigation backward in time (the list of cited articles) and forward in time (which subsequent articles cite the current article?). But CiteSeer cannot index links between articles that researchers do not make, because it only indexes the links created when researchers cite other articles; for the same reason, CiteSeer also does not scale easily. All these problems motivate us to design a more effective search system. This paper presents a method that extracts a subject and predicate from each sentence in a document. A document is changed into a tabular form in which each extracted predicate is checked against the values of possible subjects and objects. We build a hierarchical graph of a document using the table and then integrate the graphs of the documents. From the graph of the entire document set, the area of each document is calculated and compared with the integrated documents, and relations among the documents are marked by comparing their areas. The paper also proposes a method for structural integration of documents that retrieves documents from the graph, which makes it easier for users to find information. We compared the performance of the proposed approaches with the Lucene search engine using ranking formulas.
As a result, the F-measure is about 60%, which is about 15% better than the Lucene baseline.
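For reference, the F-measure used in the comparison above is the harmonic mean of precision and recall; a minimal sketch follows, with made-up retrieval results rather than the paper's data.

```python
# Minimal sketch of the precision/recall/F-measure computation used to compare
# retrieval results; the retrieved/relevant sets below are made up, not the
# paper's experimental data.
def f_measure(retrieved, relevant):
    retrieved, relevant = set(retrieved), set(relevant)
    if not retrieved or not relevant:
        return 0.0
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved)
    recall = hits / len(relevant)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

retrieved = ["d1", "d2", "d3", "d5"]   # documents returned by the system
relevant  = ["d1", "d3", "d4"]         # documents judged relevant to the query
print(f"F-measure: {f_measure(retrieved, relevant):.2f}")
```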

Surrogate Internet Shopping Malls: The Effects of Consumers' Perceived Risk and Product Evaluations on Country-of-Buying-Origin Image (망상대구점(网上代购店): 소비자감지풍험화산품평개대원산국형상적영향(消费者感知风险和产品评价对原产国形象的影响))

  • Lee, Hyun-Joung;Shin, So-Hyoun;Kim, Sang-Uk
    • Journal of Global Scholars of Marketing Science
    • /
    • v.20 no.2
    • /
    • pp.208-218
    • /
    • 2010
  • The Internet has grown fast and has become one of the most important retail channels. Various types of Internet retailers, hereafter etailers, have been introduced so far, and as one type of Internet shopping mall, the 'surrogate Internet shopping mall' has prospered and attracted consumers in the domestic market. A surrogate Internet shopping mall is a unique type of etailer that globally purchases well-known brand goods that are not imported into the market, completes delivery on behalf of individual buyers, and collects fees for these specific services. The consumers, who are usually interested in purchasing high-end and unique brands that are not otherwise available, have difficulty purchasing these items directly from the retailers or brands overseas due to worries about payment failure and the lack of an eligible address, since delivery is usually domestic only. In Korea, both the number of surrogate Internet shopping malls and the magnitude of sales have been growing rapidly, up to more than 430 active malls and 500 billion Korean won in 2008, since the population of consumers who want this agent shopping service is also expanding. This etail business concept originated from 'surrogate-mediated purchase', and this type of shopping agent has existed in many different forms and in a wide range of contexts for quite a long time. As marketers face their individual buyers' representatives instead of having direct contact with them on many occasions, the impact of surrogate shoppers on consumers' decision making has been enormously important, and many scholars have explored various aspects of agents' impact on consumers' purchase decisions in the marketing and psychology fields. However, not much rigorous research in Internet commerce has been conducted yet. Moreover, since surrogate Internet shopping malls, as one type of shopping agent, specifically connect overseas brands or retailers to domestic consumers, one specific characteristic of these malls, the image of the surrogate buying country in which the surrogate purchases are conducted, may play an important role in forming consumers' attitude and purchase intention toward products. Furthermore, it possibly affects various dimensions of perceived risk in consumers' information processing. However, though tremendous research has been carried out exploring the effects of diverse dimensions of country of origin, related studies in the Internet context have rarely been executed. There have been some studies that prove the positive impact of country of origin on consumers' evaluations as one of the information cues in product manufacture descriptions, yet studies detecting the relationship between the country image of the surrogate buying origin and product evaluations have rarely been undertaken for this specific mall type. Thus, the authors found it well worth investigating this specific retail channel, explored systematic relationships among the focal constructs, and elaborated their different paths. The authors have proven that the country image of the surrogate buying origin in the mall, where surrogate malls purchase products and bring them from for buyers, not only has a positive effect on consumers' product evaluations, including attitude and purchase intention, but also has a negative effect on all three dimensions of perceived risk: product-related risk, shipping-related risk, and post-purchase risk.
Specifically, among all the perceived risk dimensions, product-related risk, which arises from high uncertainty about product performance, is most affected (β = -.30) by a negative country image of the surrogate buying origin, followed by shipping-related risk (β = -.18) and post-purchase risk (β = -.15). Its direct effects on product attitude (β = .10) and purchase intention (β = .14) are also confirmed. Each perceived risk dimension is shown to have a negative effect on purchase intention through product attitude as a mediator as well (β = -.57: product-related risk → product attitude; β = -.24: shipping-related risk → product attitude; β = -.44: post-purchase risk → product attitude). From the additional analysis, the paths of consumers' information processing are shown to differ based on their levels of product knowledge. While novice consumers with a low level of knowledge consider only perceived risk important, expert consumers with a high level of knowledge take both the country image of the country where the surrogate services are conducted and perceived risk seriously to build their attitudes and formulate decisions toward products more delicately and systematically, which is in line with previous studies. This study offers several pieces of academic and practical advice. Precisely, the country image of the surrogate buying origin does affect consumers' risk perceptions and behavioral consequences. Therefore, a careful selection of the surrogate buying origin is recommended. Furthermore, reducing consumers' risk level is required for this new type of retail business to blossom, whether its consumers are novices or experts. Additionally, since consumers take different paths in elaborating information based on their knowledge levels, sophisticated marketing approaches to each group of consumers are required. For novice buyers, strong devices for risk mitigation are needed to induce them to form better attitudes; for experts, the selection of better and more advanced countries as surrogate buying origins is advised, while an endorsement strategy for the site might work as a reliable information cue for all consumers to mitigate the barriers to purchasing goods online. The authors also explain that the study suffers from some limitations, including generalizability. In future studies, tests of and comparisons among different types of etailers with relevant constructs are recommended to broaden the findings.

The Mediating Effect of Experiential Value on Customers' Perceived Value of Digital Content: China's Anti-virus Program Market (경험개치대소비자대전자내용적인지개치적중개영향(经验价值对消费者对电子内容的认知价值的中介影响): 중국살독연건시장(中国杀毒软件市场))

  • Jia, Weiwei;Kim, Sae-Bum
    • Journal of Global Scholars of Marketing Science
    • /
    • v.20 no.2
    • /
    • pp.219-230
    • /
    • 2010
  • Digital content makes big changes to our daily lives while bringing opportunities and challenges for companies. Creative firms integrate pictures, texts, videos, audio, and data through digitalization to develop new products or services and create digital experiences to promote their brands. Most articles on digital content in the literature contribute to its basic concept or to the development of marketing for it. Actually, compared with traditional value chains for common products or services, the digital content industry seems to have more potential value. Because quite a bit of digital content is free to the consumer, price is not necessarily perceived as an indicator of the quality or value of information (Rowley 2008). It becomes evident that a current theme in digital content is the issue of "value," and research on customers' perceived value of digital content is a necessity. This article argues that experiential value has an advantage in customers' evaluations of digital content. Two different but related contributions to the understanding of the "value" of digital content are made here. First, based on a comparison of digital content with products and services, the article proposes two key characteristics that make an experiential strategy available for digital content: intangibility and near-zero reproduction cost. On top of that, based on a discussion of the gap between a company's idealized value and the customer's perceived value, this article emphasizes that the pricing of digital content is different from that of products and services. As a result of intangibility, prices may not reflect customer value. Moreover, the cost of digital content in the development stage may be very high while reproduction costs shrink dramatically. Moreover, because of the value gap mentioned before, pricing policies vary for different kinds of digital content. For example, a flat price policy is generally used for movies and music (Magiera 2001; Netherby 2002), while for continuous demand, digital content such as online games and anti-virus programs involves a more complicated matter of utility and competitive price levels. Digital content companies have to explore various kinds of strategies to overcome this gap. Rethinking marketing solutions such as advertisements, images, and word-of-mouth and their effect on customers' perceived value becomes essential. China's digital content industry is becoming more and more globalized and is drawing special attention from different countries and regions that have respective competitive advantages. The 2008-2009 Annual Report on the Development of China's Digital Content Industry (CCIDConsulting 2009) indicates that, with the driving power of domestic demand and governmental policy support, the country's digital content industry maintained a fast growth of some 30 percent in 2008, obviously indicating the initial stage of industry expansion. In China, anti-virus programs and other software programs which need to be updated use a quarter-based pricing policy. Customers can download a trial version for free and use it for six months or a year. If they want to use it longer, continuous payment is needed. They examine the excellence of the digital content during this trial period and decide whether to pay for continued usage.
For China's music and movie industries, as a result of their early stage of development, experiential strategy has not been much applied, even though firms in other countries find the trial experience important and explore related strategies (such as letting customers listen to music for several seconds for free before downloading it). For the above reasons, anti-virus programs may be representative of the digital content industry in China, and an exploratory study of the advantage of experiential value in customers' perceived value of digital content is done in the anti-virus market of China. In order to enhance the reliability of the survey data, this study focused on people who were experienced users of anti-virus programs. The empirical results revealed that experiential value has a positive effect on customers' perceived value of digital content. In other words, because digital content is intangible and the reproduction costs are nearly zero, customers' evaluations are based heavily on their experience. Moreover, image and word-of-mouth do not have a positive effect on perceived value, only on experiential value. That is to say, a digital content value chain is different from that of a general product or service. Experiential value has a notable advantage and mediates the effect of image and word-of-mouth on perceived value. The results of this study help provide an understanding of why free digital content downloads exist in developing countries. Customers can perceive the value of digital content only by using and experiencing it. This is also why such governments support the development of digital content. Other developing countries whose digital content businesses are also in the beginning stage can make use of the suggestions here. Moreover, based on the advantage of experiential strategy, companies should make more of an effort to invest in customers' experience. As a result of the characteristics and value gap of digital content, customers perceive more value in intangible digital content only by experiencing what they really want. Moreover, because of the near-zero reproduction costs, companies can perhaps use experiential strategy to enhance customer understanding of digital content.

A Study on the Revitalization of Tourism Industry through Big Data Analysis (한국관광 실태조사 빅 데이터 분석을 통한 관광산업 활성화 방안 연구)

  • Lee, Jungmi;Liu, Meina;Lim, Gyoo Gun
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.149-169
    • /
    • 2018
  • Korea is currently accumulating a large amount of data in public institutions based on the open public data policy and "Government 3.0". In particular, a lot of data is accumulated in the tourism field. However, academic discussions utilizing tourism data are still limited. Moreover, the openness of data on restaurants, hotels, and online tourism information, and the use of SNS big data in tourism, are still limited. Therefore, utilization through tourism big data analysis is still low. In this paper, we tried to analyze the factors influencing foreign tourists' satisfaction in Korea through numerical data using data mining techniques and R programming. In this study, we tried to find ways to revitalize the tourism industry by analyzing about 36,000 records of big data from the "Survey on the actual situation of foreign tourists" from 2013 to 2015, conducted by the Korea Culture & Tourism Research Institute. To do this, we analyzed the factors that have a high influence on the 'Satisfaction', 'Revisit intention', and 'Recommendation' variables of foreign tourists. Furthermore, we analyzed the practical influence of the variables mentioned above. As a procedure of this study, we first integrated the survey data of foreign tourists conducted by the Korea Culture & Tourism Research Institute, which is stored in the tourist information system from 2013 to 2015, and eliminated unnecessary variables that were inconsistent with the research purpose from the integrated data. Some variables were modified to improve the accuracy of the analysis. We then analyzed the factors affecting the dependent variables by using data-mining methods: decision trees (C5.0, CART, CHAID, QUEST), artificial neural networks, and logistic regression analysis in IBM SPSS Modeler 16.0. The seven variables that have the greatest effect on each dependent variable were derived. As a result of the data analysis, it was found that the seven major variables influencing 'overall satisfaction' were sightseeing spot attraction, food satisfaction, accommodation satisfaction, traffic satisfaction, guide service satisfaction, number of visited places, and country. The most influential variables were food satisfaction and sightseeing spot attraction. The seven variables that had the greatest influence on 'revisit intention' were country, travel motivation, activity, food satisfaction, best activity, guide service satisfaction and sightseeing spot attraction. The most influential variables were food satisfaction and travel motivation related to the Korean Wave. Lastly, the seven variables that have the greatest influence on 'recommendation intention' were country, sightseeing spot attraction, number of visited places, food satisfaction, activity, tour guide service satisfaction and cost; among them, the variables with the greatest influence were country, sightseeing spot attraction, and food satisfaction. In addition, in order to grasp the influence of each independent variable more deeply, we used R programming to identify the influence of the independent variables. As a result, it was found that food satisfaction and sightseeing spot attraction ranked higher than other variables for overall satisfaction and had a greater effect than other influential variables. For revisit intention, the travel motive of the Korean Wave had a higher β value than other variables.
It will be necessary to have a policy that leads to substantial revisits by tourists by enhancing tourist attractions related to the Korean Wave. Lastly, recommendation showed the same result as satisfaction: sightseeing spot attraction and food satisfaction had higher β values than other variables. From this analysis, we found that the 'food satisfaction' and 'sightseeing spot attraction' variables were the common factors influencing the three dependent variables mentioned above ('Overall satisfaction', 'Revisit intention' and 'Recommendation'), and that those factors affected the satisfaction of travel in Korea significantly. The purpose of this study is to examine how to revitalize foreign tourism in Korea through big data analysis. It is expected to be used as basic data for analyzing tourism data and establishing effective tourism policy, and as material for establishing an activation plan that can contribute to tourism development in Korea in the future.
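As an illustration of the kind of variable-importance analysis described above: the authors used IBM SPSS Modeler 16.0 and R, whereas the sketch below uses scikit-learn in Python with a hypothetical file and hypothetical column names, so it is a rough analogue rather than their exact procedure.

```python
# Illustrative sketch of ranking the variables that influence overall
# satisfaction with a decision tree; the authors used IBM SPSS Modeler and R,
# and the file and column names below are hypothetical placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("foreign_tourist_survey_2013_2015.csv")  # hypothetical file

features = ["sightseeing_attraction", "food_satisfaction",
            "accommodation_satisfaction", "traffic_satisfaction",
            "guide_service_satisfaction", "num_visited_places", "country"]
X = pd.get_dummies(df[features], columns=["country"])
y = df["overall_satisfaction"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_train, y_train)

print("test accuracy:", tree.score(X_test, y_test))
# Higher importance values indicate variables that contribute more to the splits.
for name, imp in sorted(zip(X.columns, tree.feature_importances_),
                        key=lambda t: t[1], reverse=True)[:7]:
    print(f"{name}: {imp:.3f}")
```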

Evaluation of the Positional Uncertainty of a Liver Tumor using 4-Dimensional Computed Tomography and Gated Orthogonal Kilovolt Setup Images (사차원전산화단층촬영과 호흡연동 직각 Kilovolt 준비 영상을 이용한 간 종양의 움직임 분석)

  • Ju, Sang-Gyu;Hong, Chae-Seon;Park, Hee-Chul;Ahn, Jong-Ho;Shin, Eun-Hyuk;Shin, Jung-Suk;Kim, Jin-Sung;Han, Young-Yih;Lim, Do-Hoon;Choi, Doo-Ho
    • Radiation Oncology Journal
    • /
    • v.28 no.3
    • /
    • pp.155-165
    • /
    • 2010
  • Purpose: In order to evaluate the positional uncertainty of internal organs during radiation therapy for the treatment of liver cancer, we measured differences in inter- and intra-fractional variation of the tumor position and tidal amplitude using 4-dimensional computed tomography (4DCT) images and gated orthogonal setup kilovoltage (kV) images taken at every treatment using the on-board imaging (OBI) and real-time position management (RPM) systems. Materials and Methods: Twenty consecutive patients who underwent 3-dimensional (3D) conformal radiation therapy for the treatment of liver cancer participated in this study. All patients received a 4DCT simulation with an RT16 scanner and an RPM system. Lipiodol, which was deposited near the target volume after transarterial chemoembolization, or the diaphragm was chosen as a surrogate for the evaluation of the positional differences of internal organs. Two reference orthogonal (anterior and lateral) digitally reconstructed radiograph (DRR) images were generated using the CT image sets at the 0% and 50% respiratory phases. The maximum tidal amplitude of the surrogate was measured from 3D conformal treatment planning. After setting the patient up with laser markings on the skin, orthogonal gated setup images at the 50% respiratory phase were acquired at each treatment session with OBI and registered on the reference DRR images by setting each beam center. Online inter-fractional variation was determined with the surrogate. After adjusting for the patient setup error, orthogonal setup images at the 0% and 50% respiratory phases were obtained and the tidal amplitude of the surrogate was measured. The measured tidal amplitude was compared with the data from 4DCT. For evaluation of intra-fractional variation, an orthogonal gated setup image at the 50% respiratory phase was promptly acquired after treatment and compared with the same image taken just before treatment. In addition, a statistical analysis for the quantitative evaluation was performed. Results: Medians of inter-fractional variation for the twenty patients were 0.00 cm (range, -0.50 to 0.90 cm), 0.00 cm (range, -2.40 to 1.60 cm), and 0.00 cm (range, -1.10 to 0.50 cm) in the X (transaxial), Y (superior-inferior), and Z (anterior-posterior) directions, respectively. Significant inter-fractional variations over 0.5 cm were observed in four patients. In addition, the median tidal amplitude differences between 4DCT and the gated orthogonal setup images were -0.05 cm (range, -0.83 to 0.60 cm), -0.15 cm (range, -2.58 to 1.18 cm), and -0.02 cm (range, -1.37 to 0.59 cm) in the X, Y, and Z directions, respectively. Large differences of over 1 cm were detected in 3 patients in the Y direction, while differences of more than 0.5 but less than 1 cm were observed in 5 patients in the Y and Z directions. Median intra-fractional variation was 0.00 cm (range, -0.30 to 0.40 cm), -0.03 cm (range, -1.14 to 0.50 cm), and 0.05 cm (range, -0.30 to 0.50 cm) in the X, Y, and Z directions, respectively. Significant intra-fractional variation of over 1 cm was observed in 2 patients in the Y direction. Conclusion: Gated setup images provided clear image quality for the detection of organ motion without motion artifacts. Significant intra- and inter-fractional variation and tidal amplitude differences between 4DCT and the gated setup images were detected in some patients during the radiation treatment period and should therefore be considered when setting the target margin.
Monitoring of positional uncertainty and an adaptive feedback system based on it can enhance the accuracy of treatments.
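For reference, the per-axis summary statistics reported above (median and range of inter-fractional variation, and the count of fractions exceeding a threshold) can be computed as in the minimal sketch below; the measurement values are made up and are not the patients' data.

```python
# Minimal sketch of the per-axis summary statistics reported above (median and
# range of inter-fractional variation); the values are made up, not patient data.
import numpy as np

# Differences (cm) between gated setup images and reference DRRs per fraction,
# for one axis (e.g., Y, superior-inferior); hypothetical values.
y_differences = np.array([0.0, -0.3, 0.4, -2.4, 0.1, 1.6, 0.0, -0.2])

median = np.median(y_differences)
low, high = y_differences.min(), y_differences.max()
over_threshold = np.sum(np.abs(y_differences) > 0.5)

print(f"median {median:.2f} cm (range, {low:.2f} to {high:.2f} cm)")
print(f"fractions with variation over 0.5 cm: {over_threshold}")
```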

Sentiment Analysis of Movie Review Using Integrated CNN-LSTM Model (CNN-LSTM 조합모델을 이용한 영화리뷰 감성분석)

  • Park, Ho-yeon;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.141-154
    • /
    • 2019
  • Internet technology and social media are growing rapidly. Data mining technology has evolved to enable unstructured document representations in a variety of applications. Sentiment analysis is an important technology that can distinguish poor or high-quality content through the text data of products, and it has proliferated with text mining. Sentiment analysis mainly analyzes people's opinions in text data by assigning predefined categories such as positive and negative. This has been studied in various directions in terms of accuracy, from simple rule-based approaches to dictionary-based approaches using predefined labels. In fact, sentiment analysis is one of the most active research areas in natural language processing and is widely studied in text mining. When real online reviews are openly available to others, information is not only easy to collect, but it also affects business. In marketing, real-world information from customers is gathered from websites rather than surveys. Whether a website's posts are positive or negative is reflected in customer response and sales, so firms try to identify this information. However, many reviews on a website are not always of good quality and are difficult to classify. Earlier studies in this research area used the review data of the Amazon.com shopping mall, but the data used in recent studies cover stock market trends, blogs, news articles, weather forecasts, IMDb, Facebook, etc. However, a lack of accuracy is recognized because sentiment calculations change according to the subject, paragraph, direction of the sentiment lexicon, and sentence strength. This study aims to classify the polarity of sentiment analysis into positive and negative categories and to increase the prediction accuracy of the polarity analysis using the IMDB review data set. First, for the text classification algorithms related to sentiment analysis, popular machine learning algorithms such as naive Bayes (NB), support vector machines (SVM), XGBoost, random forests (RF), and gradient boosting are adopted as comparative models. Second, deep learning has demonstrated the ability to extract complex, discriminative features from data. Representative algorithms are convolutional neural networks (CNN), recurrent neural networks (RNN), and long short-term memory (LSTM). A CNN can be used similarly to BoW when processing a sentence in vector format, but it does not consider sequential data attributes. An RNN handles ordered data well because it takes the time information of the data into account, but it suffers from the long-term dependency problem. To solve the problem of long-term dependence, LSTM is used. For the comparison, CNN and LSTM were chosen as simple deep learning models. In addition to classical machine learning algorithms, CNN, LSTM, and the integrated models were analyzed. Although there are many parameters for the algorithms, we examined the relationship between parameter values and accuracy to find the optimal combination. We also tried to figure out how well the models work for sentiment analysis and how these models work. This study proposes integrated CNN and LSTM algorithms to extract the positive and negative features for text analysis. The reasons for combining these two algorithms are as follows. A CNN can extract features for the classification automatically by applying convolution layers and massively parallel processing, whereas an LSTM is not capable of highly parallel processing.
Like faucets, the LSTM has input, output, and forget gates that can be opened and closed at the desired time. These gates have the advantage of placing memory blocks on hidden nodes. The memory block of the LSTM may not store all the data, but it can solve the long-term dependency problem that the CNN alone cannot handle. Furthermore, when the LSTM is applied after the CNN's pooling layer, the model has an end-to-end structure, so that spatial and temporal features can be learned simultaneously. With the combined CNN-LSTM, 90.33% accuracy was measured. This is slower than CNN, but faster than LSTM. The presented model was more accurate than the other models. In addition, each word embedding layer can be improved when training the kernel step by step. CNN-LSTM can compensate for the weaknesses of each model, and has the advantage of improving learning layer by layer using the end-to-end structure. Based on these reasons, this study tries to enhance the classification accuracy of movie reviews using the integrated CNN-LSTM model.
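A minimal sketch of a combined CNN-LSTM sentiment classifier of the kind described above is given below, written with Keras and its built-in IMDB dataset; the layer sizes and hyperparameters are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of a CNN-LSTM sentiment classifier (convolution for local
# feature extraction, LSTM on the pooled feature sequence); hyperparameters
# are illustrative and not the authors' exact configuration.
from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Conv1D, MaxPooling1D, LSTM, Dense

vocab_size, max_len = 10000, 200
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=vocab_size)
x_train = pad_sequences(x_train, maxlen=max_len)
x_test = pad_sequences(x_test, maxlen=max_len)

model = Sequential([
    Embedding(vocab_size, 128),         # word embeddings
    Conv1D(64, 5, activation="relu"),   # local n-gram features
    MaxPooling1D(4),                    # shorten the feature sequence
    LSTM(64),                           # model long-range order in the features
    Dense(1, activation="sigmoid"),     # positive / negative polarity
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=2, batch_size=128,
          validation_data=(x_test, y_test))
```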

A Study on the Development Trend of Artificial Intelligence Using Text Mining Technique: Focused on Open Source Software Projects on Github (텍스트 마이닝 기법을 활용한 인공지능 기술개발 동향 분석 연구: 깃허브 상의 오픈 소스 소프트웨어 프로젝트를 대상으로)

  • Chong, JiSeon;Kim, Dongsung;Lee, Hong Joo;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.1-19
    • /
    • 2019
  • Artificial intelligence (AI) is one of the main driving forces leading the Fourth Industrial Revolution. The technologies associated with AI have already shown abilities equal to or better than those of people in many fields, including image and speech recognition. In particular, many efforts have been actively made to identify current technology trends and analyze their development directions, because AI technologies can be utilized in a wide range of fields including the medical, financial, manufacturing, service, and education fields. Major platforms that can develop complex AI algorithms for learning, reasoning, and recognition have been opened to the public as open source projects. As a result, technologies and services that utilize them have increased rapidly. This has been confirmed as one of the major reasons for the fast development of AI technologies. Additionally, the spread of the technology owes a great deal to open source software, developed by major global companies, supporting natural language recognition, speech recognition, and image recognition. Therefore, this study aimed to identify the practical trend of AI technology development by analyzing OSS projects associated with AI, which have been developed through the online collaboration of many parties. This study searched and collected a list of major projects related to AI that were created from 2000 to July 2018 on GitHub. It confirmed the development trends of major technologies in detail by applying text mining techniques to topic information, which indicates the characteristics of the collected projects and technical fields. The results of the analysis showed that the number of software development projects per year was less than 100 until 2013. However, it increased to 229 projects in 2014 and 597 projects in 2015. In particular, the number of open source projects related to AI increased rapidly in 2016 (2,559 OSS projects). It was confirmed that the number of projects initiated in 2017 was 14,213, almost four times the total number of projects created from 2009 to 2016 (3,555 projects). The number of projects initiated from January to July 2018 was 8,737. The development trend of AI-related technologies was evaluated by dividing the study period into three phases. The appearance frequency of topics indicates the technology trends of AI-related OSS projects. The results showed that natural language processing technology has continued to be at the top in all years, implying that such OSS has been developed continuously. Until 2015, the programming languages Python, C++, and Java were listed among the top ten most frequently appearing topics. However, after 2016, programming languages other than Python disappeared from the top ten topics. Instead, platforms supporting the development of AI algorithms, such as TensorFlow and Keras, show a high appearance frequency. Additionally, reinforcement learning algorithms and convolutional neural networks, which have been used in various fields, were frequently appearing topics. The results of the topic network analysis showed that the most important topics by degree centrality were similar to those by appearance frequency. The main difference was that visualization and medical imaging topics were found at the top of the list, although they were not at the top from 2009 to 2012. The results indicate that OSS was developed in the medical field in order to utilize AI technology.
Moreover, although computer vision was in the top 10 of the appearance frequency list from 2013 to 2015, it was not in the top 10 for degree centrality. The topics at the top of the degree centrality list were similar to those at the top of the appearance frequency list; it was found that the ranks of the convolutional neural network and reinforcement learning topics changed slightly. The trend of technology development was examined using the appearance frequency of topics and degree centrality. The results showed that machine learning revealed the highest frequency and the highest degree centrality in all years. Moreover, it is noteworthy that, although the deep learning topic showed a low frequency and a low degree centrality between 2009 and 2012, its rank abruptly increased between 2013 and 2015. It was confirmed that in recent years both technologies had high appearance frequency and degree centrality. TensorFlow first appeared during the 2013-2015 phase, and its appearance frequency and degree centrality soared between 2016 and 2018 to be at the top of the lists after deep learning and Python. Computer vision and reinforcement learning did not show an abrupt increase or decrease, and they had relatively low appearance frequency and degree centrality compared with the above-mentioned topics. Based on these analysis results, it is possible to identify the fields in which AI technologies are actively developed. The results of this study can be used as a baseline dataset for more empirical analysis of future technology trends and their convergence.
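A rough sketch of the two measures used above, appearance frequency and degree centrality over a topic co-occurrence network, is shown below in Python; the sample project topic lists are invented, not the collected GitHub data.

```python
# Rough sketch of the two measures discussed above: topic appearance frequency
# and degree centrality over a topic co-occurrence network. The sample project
# topic lists are invented, not the collected GitHub data.
from collections import Counter
from itertools import combinations
import networkx as nx

project_topics = [
    ["python", "machine-learning", "deep-learning"],
    ["tensorflow", "deep-learning", "computer-vision"],
    ["python", "natural-language-processing", "machine-learning"],
    ["keras", "deep-learning", "reinforcement-learning"],
]

# Appearance frequency: how many projects mention each topic.
frequency = Counter(t for topics in project_topics for t in topics)
print(frequency.most_common(5))

# Co-occurrence network: topics are nodes, an edge links topics that appear
# together in the same project; degree centrality ranks the best-connected topics.
G = nx.Graph()
for topics in project_topics:
    G.add_edges_from(combinations(sorted(set(topics)), 2))

centrality = nx.degree_centrality(G)
for topic, score in sorted(centrality.items(), key=lambda kv: kv[1], reverse=True)[:5]:
    print(f"{topic}: {score:.2f}")
```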

A Study on the Effect of Network Centralities on Recommendation Performance (네트워크 중심성 척도가 추천 성능에 미치는 영향에 대한 연구)

  • Lee, Dongwon
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.23-46
    • /
    • 2021
  • Collaborative filtering, which is often used in personalized recommendation, is recognized as a very useful technique to find similar customers and recommend products to them based on their purchase history. However, the traditional collaborative filtering technique has raised the question of difficulty in calculating the similarity for new customers or products, because similarities are calculated based on direct connections and common features among customers. For this reason, a hybrid technique was designed to use content-based filtering techniques together. In addition, efforts have been made to solve these problems by applying the structural characteristics of social networks. This applies a method of indirectly calculating similarities through the similar customers placed between them. This means creating a customer network based on purchasing data and calculating the similarity between two customers based on the features of the network that indirectly connects them within this network. Such similarity can be used as a measure to predict whether the target customer accepts recommendations. The centrality metrics of networks can be utilized for the calculation of these similarities. Different centrality metrics have important implications in that they may have different effects on recommendation performance. Furthermore, in this study, the effect of these centrality metrics on recommendation performance may vary depending on the recommender algorithm. In addition, recommendation techniques using network analysis can be expected to contribute to increasing recommendation performance when they are applied not only to new customers or products but also to all customers or products. By considering a customer's purchase of an item as a link generated between the customer and the item on the network, the prediction of user acceptance of a recommendation is solved as a prediction of whether a new link will be created between them. As classification models fit the purpose of solving the binary problem of whether a link is created or not, decision tree, k-nearest neighbors (KNN), logistic regression, artificial neural network, and support vector machine (SVM) models were selected for the research. The data for performance evaluation were order data collected from an online shopping mall over four years and two months. Among them, the first three years and eight months were used to construct the social network, and the experiment was conducted by organizing the collected data into this social network. The records of the next four months were used to train and evaluate the recommender models. Experiments with the centrality metrics applied to each model show that the recommendation acceptance rates of the centrality metrics differ for each algorithm at a meaningful level. In this work, we analyzed only four commonly used centrality metrics: degree centrality, betweenness centrality, closeness centrality, and eigenvector centrality. Eigenvector centrality records the lowest performance in all models except support vector machines. Closeness centrality and betweenness centrality show similar performance across all models. Degree centrality ranks moderately across all models, while betweenness centrality always ranks higher than degree centrality. Finally, closeness centrality is characterized by distinct differences in performance according to the model.
It ranks first in the logistic regression, artificial neural network, and decision tree models with numerically high performance. However, it records very low rankings in the support vector machine and k-nearest neighbors models, with low performance levels. As the experimental results reveal, in a classification model, network centrality metrics over a subnetwork that connects two nodes can effectively predict the connectivity between those two nodes in a social network. Furthermore, each metric has a different performance depending on the classification model type. This result implies that choosing appropriate metrics for each algorithm can lead to higher recommendation performance. In general, betweenness centrality can guarantee a high level of performance in any model. It would also be possible to consider the introduction of closeness centrality to obtain higher performance for certain models.
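As a minimal sketch of the approach described above, centrality metrics used as features for predicting whether a customer-item link will be created, the snippet below uses networkx and scikit-learn with invented purchase data and a single logistic regression classifier; it is not the paper's experimental pipeline.

```python
# Minimal sketch of using network centrality metrics as features for link
# prediction (will a customer-item link be created?). The toy purchase data
# and the choice of classifier are assumptions, not the paper's pipeline.
import networkx as nx
from sklearn.linear_model import LogisticRegression

purchases = [("c1", "i1"), ("c1", "i2"), ("c2", "i2"), ("c3", "i1"), ("c3", "i3")]
G = nx.Graph(purchases)  # customer-item network

degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)
closeness = nx.closeness_centrality(G)
eigenvector = nx.eigenvector_centrality(G, max_iter=1000)

def pair_features(c, i):
    # Combine the centralities of the two endpoints into a feature vector.
    return [degree[c] + degree[i], betweenness[c] + betweenness[i],
            closeness[c] + closeness[i], eigenvector[c] + eigenvector[i]]

# Training pairs: existing links are positives, sampled non-links are negatives.
positive = purchases
negative = [("c2", "i1"), ("c2", "i3"), ("c1", "i3")]
X = [pair_features(c, i) for c, i in positive + negative]
y = [1] * len(positive) + [0] * len(negative)

clf = LogisticRegression().fit(X, y)
print("P(link) for (c2, i1):", clf.predict_proba([pair_features("c2", "i1")])[0, 1])
```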