• Title/Summary/Keyword: complex system


Sentiment Analysis of Movie Review Using Integrated CNN-LSTM Model (CNN-LSTM 조합모델을 이용한 영화리뷰 감성분석)

  • Park, Ho-yeon;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.141-154
    • /
    • 2019
  • Internet technology and social media are growing rapidly. Data mining technology has evolved to enable unstructured document representations in a variety of applications. Sentiment analysis is an important technology that can distinguish low- from high-quality content through the text data of products, and it has proliferated within text mining. Sentiment analysis mainly analyzes people's opinions in text data by assigning predefined categories such as positive and negative. It has been studied in various directions in terms of accuracy, from simple rule-based to dictionary-based approaches using predefined labels. In fact, sentiment analysis is one of the most active research areas in natural language processing and is widely studied in text mining. Real online reviews are openly available, easy to collect, and directly affect business. In marketing, real-world information from customers is gathered on websites rather than through surveys. Depending on whether a website's posts are positive or negative, the customer response is reflected in sales, and firms try to identify that information. However, many reviews on a website are not always good and are difficult to identify. Earlier studies in this research area used review data from the Amazon.com shopping mall, but recent studies use data on stock market trends, blogs, news articles, weather forecasts, IMDB, Facebook, etc. However, a lack of accuracy is recognized because sentiment calculations change according to the subject, paragraph, sentiment lexicon direction, and sentence strength. This study aims to classify the polarity of sentiment analysis into positive and negative categories and to increase the prediction accuracy of the polarity analysis using the IMDB review data set. First, for the text classification algorithms related to sentiment analysis, popular machine learning algorithms such as NB (Naive Bayes), SVM (support vector machines), XGBoost, RF (random forests), and Gradient Boosting are adopted as comparative models. Second, deep learning has demonstrated the ability to extract complex, discriminative features from data. Representative algorithms are CNN (convolutional neural networks), RNN (recurrent neural networks), and LSTM (long short-term memory). CNN can be used similarly to BoW when processing a sentence in vector format, but it does not consider the sequential attributes of the data. RNN handles ordered data well because it takes the temporal information of the data into account, but it suffers from the long-term dependency problem. To solve the problem of long-term dependence, LSTM is used. For the comparison, CNN and LSTM were chosen as simple deep learning models. In addition to classical machine learning algorithms, CNN, LSTM, and the integrated models were analyzed. Although the algorithms have many parameters, we examined the relationship between parameter values and precision to find the optimal combination, and we tried to figure out how well the models work for sentiment analysis and how they work. This study proposes an integrated CNN-LSTM algorithm to extract the positive and negative features in text analysis. The reasons for combining these two algorithms are as follows. CNN can extract features for the classification automatically by applying a convolution layer and massively parallel processing. LSTM is not capable of highly parallel processing. 
Like faucets, the LSTM has input, output, and forget gates that can be opened and closed at a desired time. These gates have the advantage of placing memory blocks on hidden nodes. The memory block of the LSTM may not store all the data, but it can solve the RNN's long-term dependency problem. Furthermore, when LSTM is used in CNN's pooling layer, the model has an end-to-end structure, so that spatial and temporal features can be learned simultaneously. With the combined CNN-LSTM, 90.33% accuracy was measured. It is slower than CNN but faster than LSTM, and the presented model was more accurate than the other models. In addition, each word embedding layer can be improved when training the kernel step by step. CNN-LSTM can compensate for the weaknesses of each model, and the end-to-end structure has the advantage of improving learning layer by layer. Based on these reasons, this study tries to enhance the classification accuracy of movie reviews using the integrated CNN-LSTM model.
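
Below is a minimal sketch of the integrated CNN-LSTM architecture described above, written with TensorFlow/Keras on the IMDB data set; the layer sizes, epochs, and vocabulary cutoff are illustrative assumptions, not the paper's tuned configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

vocab_size, max_len = 20000, 400

# IMDB reviews, already tokenized to integer word indices
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.imdb.load_data(num_words=vocab_size)
x_train = tf.keras.preprocessing.sequence.pad_sequences(x_train, maxlen=max_len)
x_test = tf.keras.preprocessing.sequence.pad_sequences(x_test, maxlen=max_len)

model = models.Sequential([
    layers.Embedding(vocab_size, 128),          # word embedding layer
    layers.Conv1D(64, 5, activation="relu"),    # CNN: parallel local n-gram features
    layers.MaxPooling1D(4),                     # pooled features feed the LSTM
    layers.LSTM(64),                            # LSTM: temporal order of the features
    layers.Dense(1, activation="sigmoid"),      # positive/negative polarity
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=2, batch_size=64,
          validation_data=(x_test, y_test))
```

The Conv1D/MaxPooling1D front end plays the role the abstract assigns to CNN (automatic, parallel feature extraction), while the LSTM consumes the pooled sequence to capture temporal order in an end-to-end fashion.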

Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.105-129
    • /
    • 2020
  • This study uses corporate data from 2012 to 2018, when K-IFRS was applied in earnest, to predict default risks. The data used in the analysis totaled 10,545 rows and 160 columns, including 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial ratio indices. Unlike most prior studies, which used the default event as the basis for learning about default risk, this study calculated default risk using the market capitalization and stock price volatility of each company based on the Merton model. Through this, it was able to solve the problem of data imbalance due to the scarcity of default events, which had been pointed out as a limitation of the existing methodology, and to reflect the differences in default risk that exist among ordinary companies. Because learning was conducted using only corporate information available for unlisted companies, the default risks of unlisted companies without stock price information can be appropriately derived. This makes it possible to provide stable default risk assessment services to unlisted companies whose default risk is difficult to determine with traditional credit rating models, such as small and medium-sized companies and startups. Although predicting corporate default risks using machine learning has been actively studied recently, model bias issues exist because most studies make predictions based on a single model. A stable and reliable valuation methodology is required for the calculation of default risk, given that an entity's default risk information is very widely utilized in the market and the sensitivity to differences in default risk is high. Strict standards are also required for the calculation method. The credit rating method stipulated by the Financial Services Commission in the Financial Investment Regulations calls for the preparation of evaluation methods, including verification of their adequacy, in consideration of past statistical data and experience on credit ratings and changes in future market conditions. This study reduced individual models' bias by utilizing stacking ensemble techniques that synthesize various machine learning models. This allows us to capture complex nonlinear relationships between default risk and various corporate information and to maximize the advantages of machine learning-based default risk prediction models, which take less time to calculate. To produce the forecasts used as input data for the stacking ensemble model, the training data were divided into seven pieces, and the sub-models were trained on the divided sets to produce forecasts. To compare the predictive power of the stacking ensemble model, Random Forest, MLP, and CNN models were trained with the full training data, and then the predictive power of each model was verified on the test set. The analysis showed that the stacking ensemble model exceeded the predictive power of the Random Forest model, which had the best performance among the single models. Next, to check for statistically significant differences between the stacking ensemble model and each individual model's forecasts, pairs between the stacking ensemble model and each individual model were constructed. 
Because the results of the Shapiro-Wilk normality test showed that none of the pairs followed normality, the nonparametric Wilcoxon rank-sum test was used to check whether the two model forecasts composing each pair showed statistically significant differences. The analysis showed that the forecasts of the stacking ensemble model differed statistically significantly from those of the MLP model and the CNN model. In addition, this study can provide a methodology that allows existing credit rating agencies to apply machine learning-based default risk prediction methodologies, given that traditional credit rating models can also be included as sub-models in calculating the final default probability. The stacking ensemble techniques proposed in this study can also help design models that meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope that this research will be used as a resource to increase practical use by overcoming and improving the limitations of existing machine learning-based models.
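
A minimal sketch of the stacking idea and the paired significance check described above, using scikit-learn and SciPy; the synthetic features, the choice of sub-models, and the logistic meta-learner are assumptions for illustration. (The abstract names a rank-sum test; SciPy's `wilcoxon`, shown here, is the signed-rank variant commonly applied to paired forecasts.)

```python
from scipy import stats
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# placeholder for the 10,545-row, 160-column corporate data set
X, y = make_classification(n_samples=2000, n_features=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("mlp", MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)),
    ],
    final_estimator=LogisticRegression(),
    cv=7,  # sub-model forecasts come from 7-fold out-of-fold predictions
)
stack.fit(X_tr, y_tr)
print("stacking accuracy:", stack.score(X_te, y_te))

# paired comparison of forecasts: normality check, then a nonparametric test
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
diff = stack.predict_proba(X_te)[:, 1] - rf.predict_proba(X_te)[:, 1]
print(stats.shapiro(diff))   # Shapiro-Wilk normality test on the paired differences
print(stats.wilcoxon(diff))  # Wilcoxon signed-rank test if normality is rejected
```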

Mid-term results of Intracardiac Lateral Tunnel Fontan Procedure in the Treatment of Patients with a Functional Single Ventricle (기능적 단심실 환자에 대한 심장내 외측통로 폰탄술식의 중기 수술성적)

  • 이정렬;김용진;노준량
    • Journal of Chest Surgery
    • /
    • v.31 no.5
    • /
    • pp.472-480
    • /
    • 1998
  • We reviewed the surgical results of the intracardiac lateral tunnel Fontan procedure for the repair of functional single ventricles. Between 1990 and 1996, 104 patients underwent total cavopulmonary anastomosis. Patients' age and body weight averaged 35.9 (range 10 to 173) months and 12.8 (range 6.5 to 37.8) kg. Preoperative diagnoses included 18 tricuspid atresias, 53 double inlet ventricles with univentricular atrioventricular connection, and 33 other complex lesions. Previous palliative operations had been performed in 50 of these patients, including 37 systemic to pulmonary artery shunts, 13 pulmonary artery bandings, 15 surgical atrial septectomies, 2 arterial switch procedures, 2 resections of subaortic conus, 2 repairs of total anomalous pulmonary venous connection, and 1 Damus-Stansel-Kaye procedure. In 19 patients a bidirectional cavopulmonary shunt operation was performed before the Fontan procedure, and in 1 patient a Kawashima procedure was required. Preoperative hemodynamics revealed a mean pulmonary artery pressure of 14.6 (range 5 to 28) mmHg, a mean pulmonary vascular resistance of 2.2 (range 0.4 to 6.9) Wood units, a mean pulmonary to systemic flow ratio of 0.9 (range 0.3 to 3.0), a mean ventricular end-diastolic pressure of 9.0 (range 3.0 to 21.0) mmHg, and a mean arterial oxygen saturation of 76.0 (range 45.6 to 88.0)%. The operative procedure consisted of a longitudinal right atriotomy 2 cm lateral to the terminal crest up to the right atrial auricle, followed by the creation of a lateral tunnel connecting the orifices of either the superior caval vein or the right atrial auricle to the inferior caval vein, using a Gore-Tex vascular graft with or without a fenestration. Concomitant procedures at the time of the Fontan procedure included 22 pulmonary artery angioplasties, 21 atrial septectomies, 4 atrioventricular valve replacements or repairs, 4 corrections of anomalous pulmonary venous connection, and 3 permanent pacemaker implantations. In 31 patients a fenestration was created, and in 1 an adjustable communication was made in the lateral tunnel pathway. One lateral tunnel conversion was performed in a patient with recurrent intractable tachyarrhythmia 4 years after the initial atriopulmonary connection. Post-extubation hemodynamic data revealed a mean pulmonary artery pressure of 12.7 (range 8 to 21) mmHg, a mean ventricular end-diastolic pressure of 7.6 (range 4 to 12) mmHg, and a mean room-air arterial oxygen saturation of 89.9 (range 68 to 100)%. The follow-up duration was, on average, 27 (range 1 to 85) months. Post-Fontan complications included 11 prolonged pleural effusions, 8 arrhythmias, 9 chylothoraces, 5 central nervous system injuries, 5 infectious complications, and 4 cases of acute renal failure. Seven early (6.7%) and 5 late (4.8%) deaths occurred. These results show that the lateral tunnel Fontan procedure provided excellent hemodynamic improvement with acceptable mortality and morbidity for hearts with various types of functional single ventricle.


A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.25-38
    • /
    • 2019
  • Selecting high-quality information that meets users' interests and needs from the overflowing contents is becoming more important as content generation continues. In the flood of information, efforts are being made to better reflect the user's intention in search results, rather than treating an information request as a simple string. Large IT companies such as Google and Microsoft also focus on developing knowledge-based technologies, including search engines, which provide users with satisfaction and convenience. Finance in particular is one of the fields expected to benefit from text data analysis, because it constantly generates new information, and the earlier the information, the more valuable it is. Automatic knowledge extraction can be effective in areas such as the financial sector, where the information flow is vast and new information continues to emerge. However, automatic knowledge extraction faces several practical difficulties. First, it is difficult to build corpora from different fields with the same algorithm and to extract good-quality triples. Second, it becomes harder for people to produce labeled text data as the extent and scope of knowledge increase and patterns are constantly updated. Third, performance evaluation is difficult due to the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. To overcome the limits described above and improve the semantic performance of stock-related information searching, this study attempts to extract knowledge entities using a neural tensor network and to evaluate their performance. Unlike other studies, the purpose of this study is to extract knowledge entities related to individual stock items. Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and to enhance the effectiveness of the model. From these processes, this study has the following three significances. First, a practical and simple automatic knowledge extraction method is presented. Second, the possibility of performance evaluation is demonstrated through a simple problem definition. Finally, the expressiveness of the knowledge is increased by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study confirming the usefulness of the presented model, experts' reports about 30 individual stocks, the top 30 items by frequency of publication from May 30, 2017 to May 21, 2018, are used. The total number of reports is 5,600; 3,074 reports, about 55% of the total, are designated as the training set, and the other 45% as the testing set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using the KKMA named entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. After that, using a neural tensor network, the same number of score functions as stocks are trained. 
Thus, when a new entity from the testing set appears, we can calculate its score with every score function, and the stock whose function gives the highest score is predicted as the item related to the entity. To evaluate the presented model, we confirm its predictive power and determine whether the score functions are well constructed by calculating the hit ratio for all reports in the testing set. As a result of the empirical study, the presented model shows 69.3% hit accuracy on the testing set, which consists of 2,526 reports. This hit ratio is meaningfully high despite some constraints on the research. Looking at the prediction performance of the model for each stock, only 3 stocks, LG ELECTRONICS, KiaMtr, and Mando, show performance far below average. This result may be due to interference effects with other similar items and the generation of new knowledge. In this paper, we propose a methodology to find the key entities, or combinations of them, that are necessary to search for related information in accordance with the user's investment intention. Graph data are generated using only the named entity recognition tool and applied to the neural tensor network without a learning corpus or word vectors for the field. The empirical test confirms the effectiveness of the presented model as described above. However, some limits and things to complement also exist. Representatively, the phenomenon that the model performance is especially bad for only some stocks shows the need for further research. Finally, through the empirical study, we confirmed that the learning method presented in this study can be used to semantically match new text information with the related stocks.
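
A minimal sketch of a Neural Tensor Network score function in the spirit of Socher et al. (2013), which this line of work builds on; the dimensions, random initialization, and entity indices are illustrative assumptions, and training (one score function per stock) is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 100, 4  # entity vector dimension and number of tensor slices (assumptions)

# one parameter set = one score function (here: the function for one stock item)
W = rng.normal(scale=0.1, size=(k, d, d))   # bilinear tensor slices
V = rng.normal(scale=0.1, size=(k, 2 * d))  # standard linear layer weights
b = np.zeros(k)
u = rng.normal(scale=0.1, size=k)           # output combination vector

def ntn_score(e1: np.ndarray, e2: np.ndarray) -> float:
    """Plausibility score for the relation between entity vectors e1 and e2."""
    bilinear = np.einsum("i,kij,j->k", e1, W, e2)  # e1^T W[slice] e2 per slice
    linear = V @ np.concatenate([e1, e2])
    return float(u @ np.tanh(bilinear + linear + b))

# usage: a new one-hot encoded entity is scored against every stock's function,
# and the stock whose function yields the highest score is the predicted match
e_new, e_known = np.zeros(d), np.zeros(d)
e_new[7], e_known[42] = 1.0, 1.0
print(ntn_score(e_new, e_known))
```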

A Study on the Development Trend of Artificial Intelligence Using Text Mining Technique: Focused on Open Source Software Projects on Github (텍스트 마이닝 기법을 활용한 인공지능 기술개발 동향 분석 연구: 깃허브 상의 오픈 소스 소프트웨어 프로젝트를 대상으로)

  • Chong, JiSeon;Kim, Dongsung;Lee, Hong Joo;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.1-19
    • /
    • 2019
  • Artificial intelligence (AI) is one of the main driving forces leading the Fourth Industrial Revolution. The technologies associated with AI have already shown abilities equal to or better than people's in many fields, including image and speech recognition. In particular, many efforts have been actively made to identify current technology trends and analyze their development directions, because AI technologies can be utilized in a wide range of fields including medicine, finance, manufacturing, services, and education. Major platforms that can develop complex AI algorithms for learning, reasoning, and recognition have been opened to the public as open source projects. As a result, technologies and services that utilize them have increased rapidly, which has been confirmed as one of the major reasons for the fast development of AI technologies. Additionally, the spread of the technology owes much to open source software, developed by major global companies, supporting natural language recognition, speech recognition, and image recognition. Therefore, this study aimed to identify the practical trend of AI technology development by analyzing OSS projects associated with AI, which have been developed through the online collaboration of many parties. This study searched and collected a list of major projects related to AI created from 2000 to July 2018 on Github. It examined the development trends of major technologies in detail by applying text mining techniques to the topic information, which indicates the characteristics and technical fields of the collected projects. The results of the analysis showed that the number of software development projects per year was less than 100 until 2013. However, it increased to 229 projects in 2014 and 597 projects in 2015. The number of open source projects related to AI increased especially rapidly in 2016 (2,559 OSS projects). The number of projects initiated in 2017 was 14,213, almost four times the total number of projects generated from 2009 to 2016 (3,555 projects). The number of projects initiated from January to July 2018 was 8,737. The development trend of AI-related technologies was evaluated by dividing the study period into three phases. The appearance frequency of topics indicates the technology trends of AI-related OSS projects. The results showed that natural language processing technology remained at the top in all years, implying that its OSS has been developed continuously. Until 2015, the programming languages Python, C++, and Java were listed among the top ten most frequent topics. After 2016, however, programming languages other than Python disappeared from the top ten. In their place, platforms supporting the development of AI algorithms, such as TensorFlow and Keras, show high appearance frequency. Additionally, reinforcement learning algorithms and convolutional neural networks, which have been used in various fields, were frequent topics. The results of the topic network analysis showed that the most important topics by degree centrality were similar to those by appearance frequency. The main difference was that visualization and medical imaging topics were found at the top of the list, although they were not at the top from 2009 to 2012. This indicates that OSS was developed in the medical field in order to utilize AI technology. 
Moreover, although computer vision was in the top 10 of the appearance frequency list from 2013 to 2015, it was not in the top 10 by degree centrality. The topics at the top of the degree centrality list were similar to those at the top of the appearance frequency list; only the ranks of convolutional neural networks and reinforcement learning changed slightly. The trend of technology development was examined using the appearance frequency of topics and degree centrality. The results showed that machine learning had the highest frequency and the highest degree centrality in all years. Moreover, it is noteworthy that, although the deep learning topic showed low frequency and low degree centrality between 2009 and 2012, its rank increased abruptly between 2013 and 2015. In recent years both technologies have had high appearance frequency and degree centrality. TensorFlow first appeared during the 2013-2015 phase, and its appearance frequency and degree centrality soared between 2016 and 2018, placing it at the top of the lists after deep learning and Python. Computer vision and reinforcement learning did not show an abrupt increase or decrease, and they had relatively low appearance frequency and degree centrality compared with the above-mentioned topics. Based on these analysis results, it is possible to identify the fields in which AI technologies are actively developed. The results of this study can be used as a baseline dataset for more empirical analysis of future technology trends.
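
A minimal sketch of the appearance-frequency and degree-centrality computation described above, using networkx; the three toy projects and their topic lists are placeholders, not the collected Github data.

```python
from collections import Counter
from itertools import combinations
import networkx as nx

# toy stand-ins for the collected projects' topic lists
projects = [
    ["machine-learning", "deep-learning", "tensorflow", "python"],
    ["machine-learning", "computer-vision", "python"],
    ["deep-learning", "reinforcement-learning", "tensorflow"],
]

G = nx.Graph()
for topics in projects:
    for a, b in combinations(sorted(set(topics)), 2):
        # edge weight = number of projects in which the two topics co-occur
        w = G.get_edge_data(a, b, default={"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

freq = Counter(t for topics in projects for t in topics)  # appearance frequency
centrality = nx.degree_centrality(G)                      # topic importance

for topic, c in sorted(centrality.items(), key=lambda x: -x[1]):
    print(f"{topic:25s} freq={freq[topic]}  degree_centrality={c:.2f}")
```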

Different Look, Different Feel: Social Robot Design Evaluation Model Based on ABOT Attributes and Consumer Emotions (각인각색, 각봇각색: ABOT 속성과 소비자 감성 기반 소셜로봇 디자인평가 모형 개발)

  • Ha, Sangjip;Lee, Junsik;Yoo, In-Jin;Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.2
    • /
    • pp.55-78
    • /
    • 2021
  • To solve complex and diverse social problems and ensure individuals' quality of life, social robots that can interact with humans are attracting attention. In the past, robots were recognized as beings that provide labor, put into industrial sites on behalf of humans. However, today's concept of the robot has been extended to social robots that coexist with humans and enable social interaction, with the advent of smart technology, which is considered an important driver in most industries. Specifically, there are service robots that respond to customers, robots for edutainment, and emotional robots that can interact with humans intimately. However, the popularization of robots is not yet felt, despite the modern ICT service environment and the 4th Industrial Revolution. Considering social interaction with users, which is an important function of social robots, not only the technology of the robots but also other factors should be considered. The design elements of a robot are essentially more important than other factors in making consumers purchase a social robot. In fact, existing studies on social robots either propose a "robot development methodology" or test the effects social robots provide to users piecemeal. On the other hand, the consumer emotions evoked by a robot's appearance have an important influence on the formation of users' perception, reasoning, evaluation, and expectation, and can affect attitudes toward robots, goodwill, performance reasoning, and so on. Therefore, this study aims to verify the effect of the appearance of social robots and consumer emotions on consumers' attitudes toward social robots. To this end, a social robot design evaluation model is constructed by combining heterogeneous data from different sources. Specifically, three quantitative indicators for the appearance of social robots from the ABOT Database are included in the model. The consumer emotions for social robot design were collected through (1) the existing design evaluation literature, (2) online buzz such as product reviews and blogs, and (3) qualitative interviews on social robot design. We then collected scores of consumer emotions and attitudes toward various social robots through a large-scale consumer survey. First, we derived six major dimensions of consumer emotions from 23 detailed emotions through a dimension reduction methodology. Then, statistical analysis was performed to verify the effect of the derived consumer emotions on attitudes toward social robots. Finally, moderated regression analysis was performed to verify the effect of the quantitatively collected indicators of social robot appearance on the relationship between consumer emotions and attitudes toward social robots. Interestingly, several significant moderation effects were identified; these effects are visualized with two-way interaction plots to interpret them from multidisciplinary perspectives. This study makes a theoretical contribution by empirically verifying all stages from technical properties to consumers' emotions and attitudes toward social robots by linking data from heterogeneous sources. It has practical significance in that the results help to develop design guidelines based on consumer emotions in the design stage of social robot development.
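
A minimal sketch of the moderated regression step described above, using statsmodels; the data frame and column names (one emotion-dimension score, one ABOT appearance indicator, an attitude score) are hypothetical stand-ins for the study's variables.

```python
import pandas as pd
import statsmodels.formula.api as smf

# hypothetical scores per robot/respondent; columns are illustrative stand-ins
df = pd.DataFrame({
    "emotion":    [3.2, 4.1, 2.8, 3.9, 4.5, 2.5, 3.7, 4.0],  # one emotion dimension
    "appearance": [0.4, 0.7, 0.3, 0.8, 0.9, 0.2, 0.6, 0.5],  # one ABOT indicator
    "attitude":   [3.0, 4.2, 2.6, 4.1, 4.8, 2.3, 3.5, 3.9],  # attitude toward robot
})

# 'emotion * appearance' expands to both main effects plus their interaction;
# a significant interaction coefficient is the moderation effect being tested
model = smf.ols("attitude ~ emotion * appearance", data=df).fit()
print(model.summary())
```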

Creativity of the Unconscious and Religion : Focusing on Christianity (무의식의 창조성과 종교 : 그리스도교를 중심으로)

  • Jung-Taek Kim
    • Sim-seong Yeon-gu
    • /
    • v.26 no.1
    • /
    • pp.36-66
    • /
    • 2011
  • The goal of this article is to examine the connection between the creativity of the unconscious and religion. Jung criticized how Freud's approach to studying the unconscious as a scientific inquiry focused on the unconscious as reflecting only that which is repressed by the ego. Jung conceived of the unconscious as encompassing not only the repressed but also a variety of other psychic materials that have not reached the threshold of consciousness. Moreover, since the human psyche is a collective as well as an individual phenomenon, the collective psyche is thought to pervade the bottom of psychic functioning, with the conscious and the personal unconscious comprising its upper levels. Through clinical and personal experience, Jung came to the realization that the unconscious has a self-regulatory function. The unconscious can make "demands" and can also retract them; Jung saw this as the autonomous function of the unconscious. This autonomous unconscious creates, through dreams and fantasies, images that include an abundance of ideas and feelings. These creative images the unconscious produces assist and lead the "individuation process," which leads to the discovery of the Self. Because this unconscious process compensates the conscious ego, it has the ingredients necessary for self-regulation and can function in a creative and autonomous fashion. Jung saw religion as a special attitude of the human psyche, which can be explained by careful and diligent observation of a dynamic being or action, which Rudolph Otto called the Numinosum. This kind of being or action is not elicited by artificial or willful action; on the contrary, it takes hold of and dominates the human subject. Jung distinguished between religion and religious sects or denominations. He explained religious sects as reflecting the contents of sanctified and indoctrinated religious experiences, fixated in the complex organization of ritualized thoughts, and this ritualization gives rise to a fixed system. There is a clear goal in the religious sect to replace intellectual experiences with firmly established dogma and rituals. Religion, as Jung experienced it, is the attitude of contemplation of the Numinosum, which is formed by the images of the collective unconscious propelled by the creativity and autonomy of the unconscious. A religious sect is a religious community formed by these images once they are ritualized. Jung saw religion as the relationship with the highest or uttermost value, and this relationship has the duality of being involuntary and reflecting free will. Therefore people can be influenced by one value, be overcome by the unconscious charged with psychic energy, or accept it on a conscious level. Jung saw God as the dominating psychic element among humans, or as that psychic reality itself. Although Jung grew up in the atmosphere of the traditional Swiss reformed church, he does not seem to have considered himself a devoted Christian. To Jung, Christianity was a habitual, ritualized institution, which lacked vitality because it did not have intellectual honesty or spiritual energy. However, Jung's encounter with a dramatic religious experience at age 12, through a hallucination, led him to perceive the existence of a living god in his unconscious. This is why theological questions and religious problems in everyday life became Jung's life-long interest. 
To this author, the reason Jung delved into the problems of religion has to do with his personal interest in, and love for, the revival of the Christian church, which had lost its spiritual vitality and depth and become heavily ritualized.

Critical Success Factor of Noble Payment System: Multiple Case Studies (새로운 결제서비스의 성공요인: 다중사례연구)

  • Park, Arum;Lee, Kyoung Jun
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.4
    • /
    • pp.59-87
    • /
    • 2014
  • In the MIS field, research on payment services has focused on adoption factors, using behavior theories such as TRA (Theory of Reasoned Action), TAM (Technology Acceptance Model), and TPB (Theory of Planned Behavior). Previous research presented various adoption factors according to the type of payment service, nation, culture, and so on, and even adoption factors of the same payment service were presented differently by different researchers. The payment service industry has relatively strong path dependency on existing payment methods, so research results on the same payment service differ with a nation's payment culture. This paper aims to suggest, and prove, a success factor for the adoption of novel payment services regardless of a nation's culture and payment characteristics. In previous research, common adoption factors of payment services are convenience, ease of use, security, speed, etc. But real cases prove that the adoption factors presented in previous research are not always critical to successfully penetrating a market. For example, PayByPhone, an NFC-based parking payment service, successfully penetrated the early market and grew. In contrast, the Google Wallet service failed to be adopted by users despite an NFC-based payment method providing convenience, security, and ease of use. As these cases show, there remains an unexplained aspect. Therefore, the present research emerged from the question: "What is the more essential and fundamental factor that should take precedence over factors such as convenience, security, and ease of use for successful market penetration?" With these cases, this paper analyzes four cases predicated on the following hypothesis and demonstrates it: "To successfully penetrate a market and grow sustainably, a new payment service should find non-customers of the existing payment services and provide a novel payment method so that they can make payments." We give plausible explanations for the hypothesis using multiple case studies. Diners Club, Danal, PayPal, and Square were selected as typical and successful cases in each category of payment service. The discussion of the cases is primarily a non-customer analysis of whom the novel payment service targets, to find the most crucial factor in the early market; we do not attempt to consider factors for business growth. We clarified the three tiers of non-customers of the payment methods that new payment services target and elaborated how the new payment services satisfy them. In the case of the credit card, this payment service targets the first tier of non-customers, who temporarily cannot pay because they have no cash on hand but have a regular income. The credit card provides them an opportunity to engage in economic activities by delaying the date of payment. The case study of wireless phone payment shows that this service targets the second tier of non-customers, who cannot use online payment because they are concerned about security or have to go through a complex process and learn how to use online payment methods. Therefore, wireless phone payment provides a very convenient payment method; in particular, it enabled young people to pay small amounts without a credit card. The case study of PayPal, an online payment service, shows that it targets the second tier of non-customers, who refuse to use online payment services because of concerns about leaks of sensitive information such as passwords and credit card details. 
Accordingly, the PayPal service allows users to pay online without providing sensitive information. Finally, the Square case, a mobile POS-based payment service, shows that it targets the second tier of non-customers, who cannot individually transact offline because of a shortage of cash. Square provides a dongle that functions as a POS terminal when plugged into the earphone jack. As a result, the four services made non-customers their customers, penetrated the early market, and extended their market share. Consequently, all cases supported the hypothesis, and it is highly probable according to the 'analytic generalization' that case study methodology suggests. We present the following criteria for judging the quality of research designs. Construct validity, internal validity, external validity, and reliability are common to all social science methods and have been summarized in numerous textbooks (Yin, 2014). In case study methodology, they have also served as a framework for assessing a large group of case studies (Gibbert, Ruigrok & Wicki, 2008). Construct validity is identifying correct operational measures for the concepts being studied. To satisfy construct validity, we use multiple sources of evidence, such as academic journals, magazines, and articles. Internal validity seeks to establish a causal relationship, whereby certain conditions are believed to lead to other conditions, as distinguished from spurious relationships. To satisfy internal validity, we build explanations through the four case analyses. External validity defines the domain to which a study's findings can be generalized. To satisfy this, replication logic in multiple case studies is used. Reliability demonstrates that the operations of a study, such as the data collection procedures, can be repeated with the same results. To satisfy this, we use a case study protocol. In Korea, the competition among stakeholders in the mobile payment industry is intensifying. Not only the three main telecom companies but also smartphone companies and service providers like KakaoTalk have announced that they will enter the mobile payment industry, and it is getting competitive. But it still lacks a momentum effect, notwithstanding positive predictions that it will grow very fast. Mobile payment services are categorized into various technology-based payment services such as IC mobile cards and application payment services based on the cloud, NFC, sound waves, BLE (Bluetooth Low Energy), biometric recognition technology, etc. In particular, mobile payment service is a discontinuous innovation: users must change their behavior, and new infrastructure must be installed. This requires users to learn how to use it and imposes infrastructure installation costs on shopkeepers. Additionally, the payment industry has strong path dependency. In spite of these obstacles, mobile payment services, which as discontinuous innovations should provide dramatically improved value as products and services, are focusing on convenience, security, and so on. We suggest the following for mobile payment services to succeed. First, non-customers of the existing payment services need to be identified. Second, their needs should be understood. Then, the novel payment service should provide those non-customers, who cannot pay with previous payment methods, with a way to pay. In conclusion, mobile payment services can create a new market and extend the payment market.

The Effects of Online Service Quality on Consumer Satisfaction and Loyalty Intention -About Booking and Issuing Air Tickets on Website- (온라인 서비스 품질이 고객만족 및 충성의도에 미치는 영향 -항공권 예약.발권 웹사이트를 중심으로-)

  • Park, Jong-Gee;Ko, Do-Eun;Lee, Seung-Chang
    • Journal of Distribution Research
    • /
    • v.15 no.3
    • /
    • pp.71-110
    • /
    • 2010
  • 1. Introduction Today the Internet is recognized as an important channel for transacting products and services. According to data surveyed by the National Statistical Office, on-line transactions in 2007 totaled 15.7656 trillion won, a 17.1% (2.3060 trillion won) increase over the previous year; of this, the B2C amount increased 12.0% (10.2258 trillion won). Because the entry barrier to Korea's on-line market is low, many retailers could easily enter the market, so the bigger its scale grows, the tougher its competition becomes. Particularly due to the Internet and IT innovation, the existing market has been changed into a perfectly competitive market (Srinivasan, Rolph & Kishore, 2002). In the early years of on-line business, a moderate price was thought to be the main reason for success, but with tough competition, firms have awakened to the importance of on-line service quality. If customers are not sure whether they will be provided with what they want when they use the Web sites, or whether they can trust the products they have already bought, they doubt their viability (Parasuraman, Zeithaml & Malhotra, 2005). Customers can directly reserve and issue their air tickets, irrespective of place and time, at the Web sites of travel agencies or airlines, but empirical studies of these Web sites for reserving and issuing air tickets are insufficient. Therefore this study pursues the following specific objectives. The first is to measure the service quality and service recovery of Web sites for reserving and issuing air tickets. The second is to examine whether this on-line service quality and on-line service recovery have an impact on overall service quality. The third is to explore the relation between overall service quality and customer satisfaction, and between customer satisfaction and loyalty intention. 2. Theoretical Background 2.1 On-line Service Quality Barnes & Vidgen (2000; 2001a; 2001b; 2002) developed a tool to measure Web site quality (called WebQual) over four iterations. WebQual 1.0, the first step, created measurement items for information quality based on QFD, verified by students of a UK business school. WebQual 2.0, the second step, was created for interaction quality and judged by customers of an on-line bookshop. WebQual 3.0, the third step, was created by consolidating WebQual 1.0 for information quality and WebQual 2.0 for interaction quality. It includes 3 quality dimensions, information quality, interaction quality, and site design, and was assessed and confirmed on auction sites (eBay, Amazon, QXL). Subsequently, through the earlier empirical studies, the authors changed site design quality into usability, judging that usability is a concept of how customers interact with or perceive Web sites and is widely used in accessing them. By this process, WebQual 4.0 was created, consisting of 3 quality dimensions (information quality, interaction quality, usability) and 22 items. However, because WebQual 4.0 focuses on the technical part, it is usable for the Web site's design but not for the Web site's pleasant-experience aspect. Parasuraman, Zeithaml & Malhotra (2002; 2005) developed measures for on-line service quality in 2002 and 2005. The 2002 study divided on-line service quality into 5 dimensions, but these were not well organized, so the study needed to be wholly reworked. 
So Parasuraman, Zeithaml & Malhotra (2005) reworked the on-line service quality measure based on the 2002 study and developed E-S-QUAL. After devising a preliminary measure for on-line service quality, they surveyed customers who had purchased at amazon.com and walmart.com and reassessed the measure. They completed E-S-QUAL, which consists of 4 dimensions and 22 items: efficiency, system availability, fulfillment, and privacy. Efficiency measures access to sites and usability; system availability measures the accurate technical functioning of sites; fulfillment measures the promptness of delivering products and the sufficiency of goods; and privacy measures the degree of protection of customer data. 2.2 Service Recovery Service industries tend to minimize losses by coping with service failure promptly. These responses of service providers to service failure constitute service recovery (Kelly & Davis, 1994). Bitner (1990) studied, from the customers' view, service providers' behavior that leads customers to recognize their satisfaction/dissatisfaction at the service encounter. Accordingly, to manage service failure successfully, exact recognition of the service problem, an apology, a sufficient explanation of the service failure, and some tangible compensation are important. Parasuraman, Zeithaml & Malhotra (2005) approached service recovery from how to measure rather than how to manage, moved to the on-line market rather than off-line, and developed E-RecS-QUAL, a measuring tool for on-line service recovery. 2.3 Customer Satisfaction The definition of customer satisfaction can be divided into two points of view. First, some approached customer satisfaction as an outcome of consumption. Howard & Sheth (1969) defined satisfaction as 'a cognitive condition of feeling rewarded properly or improperly for one's sacrifice,' and Westbrook & Reilly (1983) also defined customer satisfaction/dissatisfaction as 'a psychological reaction to the behavior pattern of shopping and purchasing, the display condition of the retail store, and the outcome of purchased goods and services as well as the whole market.' Second, others approached customer satisfaction as a process. Engel & Blackwell (1982) defined satisfaction as 'an assessment of the consistency between the chosen alternative and the beliefs one had about it.' Tse & Wilton (1988) defined customer satisfaction as 'a customer's reaction to the discordance between advance expectation and ex post facto outcome.' That is, this point of view regards customer satisfaction as a process of comparing and assessing what consumers expect against the outcome. Unlike the outcome-oriented approach, the process-oriented approach has many advantages. As the process-oriented approach deals with customers' whole consumption experience, it checks the main process by measuring, one by one, each factor that plays an essential role at each step. This approach also enables us to examine the perceptual/psychological process by which customer satisfaction is formed. Because of these advantages, many studies now adopt this process-oriented approach (Yi, 1995). 2.4 Loyalty Intention Loyalty has been studied by dividing it into behavioral approaches, attitudinal approaches, and complex approaches (Dekimpe et al., 1997). In the early years of study, loyalty was defined with a focus on the behavioral concept; behavioral approaches regard customer loyalty as "a tendency to purchase periodically within a certain period of time at a specific retail store." 
But the loyalty of behavioral approaches focuses only on the outcome of customer behavior, so some point out the limitation that customers' decision-making situations and processes are neglected (Enis & Paul, 1970; Raj, 1982; Lee, 2002). So the attitudinal approaches were suggested. The attitudinal approaches consider that loyalty contains all the cognitive, emotional, and voluntary factors (Oliver, 1997) and define customer loyalty as "friendly behaviors toward specific retail stores." However, while these attitudinal approaches can explain how customer loyalty forms and changes, they cannot say positively whether it translates into real purchasing in the future. This is a shortcoming (Oh, 1995). 3. Research Design 3.1 Research Model Based on the objectives of this study, a research model was derived (figure omitted).

    3.2 Hypotheses 3.2.1 The Hypothesis of On-line Service Quality and Overall Service Quality The relation between on-line service quality and overall service quality: I-1. Efficiency of on-line service quality may have a significant effect on overall service quality. I-2. System availability of on-line service quality may have a significant effect on overall service quality. I-3. Fulfillment of on-line service quality may have a significant effect on overall service quality. I-4. Privacy of on-line service quality may have a significant effect on overall service quality. 3.2.2 The Hypothesis of On-line Service Recovery and Overall Service Quality The relation between on-line service recovery and overall service quality: II-1. Responsiveness of on-line service recovery may have a significant effect on overall service quality. II-2. Compensation of on-line service recovery may have a significant effect on overall service quality. II-3. Contact of on-line service recovery may have a significant effect on overall service quality. 3.2.3 The Hypothesis of Overall Service Quality and Customer Satisfaction The relation between overall service quality and customer satisfaction: III-1. Overall service quality may have a significant effect on customer satisfaction. 3.2.4 The Hypothesis of Customer Satisfaction and Loyalty Intention The relation between customer satisfaction and loyalty intention: IV-1. Customer satisfaction may have a significant effect on loyalty intention. 3.2.5 The Hypothesis of a Mediation Variable Wolfinbarger & Gilly (2003) and Parasuraman, Zeithaml & Malhotra (2005) made clear that each dimension of service quality has a significant effect on overall service quality. In addition, the authors analyzed empirically that each dimension of on-line service quality has a positive effect on customer satisfaction. From that viewpoint, this study examines whether overall service quality mediates between on-line service quality and customer satisfaction, while continuing to look into the relation between on-line service quality and overall service quality, and between overall service quality and customer satisfaction. And as this study understands that each dimension of on-line service recovery also has an effect on overall service quality, it examines whether overall service quality also mediates between on-line service recovery and customer satisfaction. Therefore the following hypotheses are set up to examine whether overall service quality plays the role of a mediation variable. The relation between on-line service quality and customer satisfaction: V-1. Overall service quality may mediate the effects of efficiency of on-line service quality on customer satisfaction. V-2. Overall service quality may mediate the effects of system availability of on-line service quality on customer satisfaction. V-3. Overall service quality may mediate the effects of fulfillment of on-line service quality on customer satisfaction. V-4. Overall service quality may mediate the effects of privacy of on-line service quality on customer satisfaction. The relation between on-line service recovery and customer satisfaction: VI-1. Overall service quality may mediate the effects of responsiveness of on-line service recovery on customer satisfaction. VI-2. Overall service quality may mediate the effects of compensation of on-line service recovery on customer satisfaction. VI-3. Overall service quality may mediate the effects of contact of on-line service recovery on customer satisfaction. 4. 
Empirical Analysis 4.1 Research design and data characteristics This empirical study targeted customers who had purchased air tickets at the Web sites for reservation and issuance. A total of 430 questionnaires were distributed, and 400 were collected. After the survey with the final questionnaire, a frequency test was performed on sex and age, the demographic factors, to analyze the general characteristics of the sample. The sample consists of 146 males (42.7%) and 196 females (57.3%), so the portion of females is a little higher. By age, 11 respondents were in their 10s (3.2%), 199 in their 20s (58.2%), 105 in their 30s (30.7%), 22 in their 40s (6.4%), and 5 in their 50s (1.5%). The higher portions of those in their 20s and 30s can be attributed to their frequent use of the Internet and direct purchase of air tickets. 4.2 Assessment of measuring scales This study used internal consistency analysis to measure reliability, assessed with Cronbach's α. As a result of the reliability test, the Cronbach's α value of every component is above 0.6, so the reliability of the measured variables is ensured. After the reliability test, exploratory factor analysis was performed: factor extraction by Principal Component Analysis (PCA) and factor rotation by Varimax, which is good for verifying mutual independence between factors. Based on the initial factor analysis, items impairing construct validity were removed, and a final factor analysis was performed to verify construct validity. 4.3 Hypothesis Testing 4.3.1 Hypothesis Testing by Regression Analysis (SPSS) 4.3.2 Analysis of Mediation Effect To verify the mediation effect of overall service quality, this study used the phased analysis method proposed by Baron & Kenny (1986), which is generally used. The results show that Step 1 and Step 2 are significant, and at Step 3 the mediation variable has a significant effect on the dependent variable, as do the independent variables. To establish partial mediation, the independent variables' explanatory power at Step 3 (standardized coefficient β: efficiency = .164, system availability = .074, fulfillment = .108, privacy = .107) must be smaller than at Step 2 (standardized coefficient β: efficiency = .409, system availability = .227, fulfillment = .386, privacy = .237); thus it was proved that overall service quality plays a partial mediation role between on-line service quality and satisfaction. Likewise, Step 1 and Step 2 are significant, and at Step 3 the mediation variable has a significant effect on the dependent variable, as do the independent variables. The independent variables' explanatory power at Step 3 (standardized coefficient β: responsiveness = .164, compensation = .117, contact = .113) is smaller than at Step 2 (standardized coefficient β: responsiveness = .409, compensation = .386, contact = .237), so it was proved that overall service quality plays a partial mediation role between on-line service recovery and satisfaction. The results verified on the basis of the empirical analysis are as follows. First, all hypotheses in group I were supported, so on-line service quality has a positive effect on overall service quality; fulfillment has the largest effect, followed by efficiency, system availability, and privacy. Second, all hypotheses in group II were supported, so on-line service recovery has a positive effect on overall service quality; responsiveness has the largest effect, followed by contact and compensation. Third, hypotheses III-1 and IV-1 were supported, so overall service quality has a positive effect on customer satisfaction, and customer satisfaction has a positive effect on loyalty intention. Fourth, the group V and VI hypotheses were supported, so overall service quality plays a partial mediation role between on-line service quality and customer satisfaction, and between on-line service recovery and customer satisfaction. 5. Conclusion This study measured and analyzed the service quality and service recovery of the Web sites where customers reserve and issue their air tickets, with the final goal of exploring how to retain loyal customers by improving customer satisfaction. On the basis of the empirical analysis, the implications of this study are as follows. First, this study used E-S-QUAL, which measures on-line service quality, and E-RecS-QUAL, which measures on-line service recovery, as variables, thereby overcoming the limitation of existing studies that used a modified SERVQUAL to measure Web site service quality. Second, fulfillment and efficiency of on-line service quality have the most significant effects on overall service quality; therefore, Web sites for reserving and issuing air tickets should try harder to improve efficiency and fulfillment. Third, privacy of on-line service quality has the least significant effect on overall service quality, but this may be caused by customers' uncertainty about whether the Web sites safely protect their confidential information, so the sites need to assure customers of this clearly. Fourth, in many cases customers do not recognize the importance of on-line service recovery, but since on-line service recovery has a significant effect on customer satisfaction and loyalty intention, sites should prepare for it. Fifth, because overall service quality has a positive effect on customer satisfaction and loyalty intention, the Web sites for reserving and issuing air tickets should try harder to elevate service quality and service recovery to maximize customer satisfaction and secure loyal customers. 
Sixth, overall service quality was found to play a partial mediation role, but existing studies on this are still rare, so more research on it is needed.
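
A minimal sketch of the Baron & Kenny (1986) three-step procedure used in section 4.3.2, with statsmodels; the toy data and column names (x = one E-S-QUAL dimension, m = overall service quality, y = customer satisfaction) are illustrative assumptions.

```python
import pandas as pd
import statsmodels.formula.api as smf

# toy data: x = one on-line service quality dimension, m = overall service
# quality (mediator), y = customer satisfaction
df = pd.DataFrame({
    "x": [3.1, 4.0, 2.7, 3.8, 4.4, 2.6, 3.5, 4.2],
    "m": [3.3, 4.1, 2.9, 3.9, 4.6, 2.4, 3.4, 4.3],
    "y": [3.0, 4.2, 2.8, 4.0, 4.7, 2.5, 3.6, 4.1],
})

step1 = smf.ols("m ~ x", data=df).fit()      # Step 1: x must predict the mediator
step2 = smf.ols("y ~ x", data=df).fit()      # Step 2: x must predict the outcome
step3 = smf.ols("y ~ x + m", data=df).fit()  # Step 3: m predicts y controlling for x

# partial mediation: x's coefficient shrinks but stays significant at Step 3
print("x at Step 2:", step2.params["x"], " x at Step 3:", step3.params["x"])
```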

Experimental investigation of the photoneutron production out of the high-energy photon fields at linear accelerator (고에너지 방사선치료 시 치료변수에 따른 광중성자 선량 변화 연구)

  • Kim, Yeon Su;Yoon, In Ha;Bae, Sun Myeong;Kang, Tae Young;Baek, Geum Mun;Kim, Sung Hwan;Nam, Uk Won;Lee, Jae Jin;Park, Yeong Sik
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.26 no.2
    • /
    • pp.257-264
    • /
    • 2014
  • Purpose: Photoneutron dose in high-energy photon radiotherapy at a linear accelerator increases the risk of secondary cancer. The purpose of this investigation is to evaluate the variation of photoneutron dose with different treatment methods, flattening filter conditions, dose rates, and gantry angles in radiation therapy with high-energy photon beams (E ≥ 8 MeV). Materials and Methods: A TrueBeam STx™ (Ver. 1.5, Varian, USA) and a Korea Tissue Equivalent Proportional Counter (KTEPC) were used to detect the photoneutron dose outside the high-energy photon field. Complex patient plans created with the Eclipse planning system (Version 10.0, Varian, USA) were used to test different treatment techniques (IMRT, VMAT), flattening filter conditions, and three different dose rates. The scattered photoneutron dose was measured at eight gantry angles with an open field (field size: 5 × 5 cm). Results: The mean detected photoneutron doses for IMRT and VMAT were 449.7 μSv and 2940.7 μSv. The mean detected photoneutron doses with the flattening filter (FF) and flattening-filter-free (FFF) modes were 2940.7 μSv and 232.0 μSv. The mean photoneutron doses for each test plan (case 1, case 2, and case 3) with FFF at the three dose rates (400, 1200, 2400 MU/min) were 3242.5, 3189.4, and 3191.2 μSv for case 1; 3493.2, 3482.6, and 3477.2 μSv for case 2; and 4592.2, 4580.0, and 4542.3 μSv for case 3, respectively. The mean photoneutron doses at the eight gantry angles (0°, 45°, 90°, 135°, 180°, 225°, 270°, 315°) were measured as 3.2, 4.3, 5.3, 11.3, 14.7, 11.2, 3.7, and 3.0 μSv at 10 MV and as 373.7, 369.6, 384.4, 423.6, 447.1, 448.0, 384.5, and 377.3 μSv at 15 MV. Conclusion: The results indicate that the photoneutron dose can be reduced by using the FFF mode with the TrueBeam STx™; among the measured conditions, the mean dose was also lower for IMRT than for VMAT. Patients' risk of secondary cancer will be decreased with continuous evaluation of the photoneutron dose.