• Title/Summary/Keyword: Business management performance

A Machine Learning-based Total Production Time Prediction Method for Customized-Manufacturing Companies (주문생산 기업을 위한 기계학습 기반 총생산시간 예측 기법)

  • Park, Do-Myung;Choi, HyungRim;Park, Byung-Kwon
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.177-190
    • /
    • 2021
  • With the development of fourth industrial revolution technologies, efforts are being made to improve areas that humans cannot handle alone by utilizing artificial intelligence techniques such as machine learning. Make-to-order companies want to reduce corporate risks such as delivery delays by predicting the total production time of orders, but they have difficulty doing so because total production time differs for every order. The Theory of Constraints (TOC) was developed to find the least efficient areas in order to increase order throughput and reduce total order cost, but it does not provide a forecast of total production time. Because order production varies with diverse customer needs, the total production time of an individual order can be measured after the fact but is difficult to predict in advance. Measured total production times of past orders also differ from one another, so they cannot serve as standard times. As a result, experienced managers rely on intuition rather than on the system, while inexperienced managers use simple rules of thumb (e.g., 60 days total production time for raw materials, 90 days for steel plates). Work instructions issued too early, based on such intuition or crude indicators, cause congestion and degrade productivity; instructions issued too late increase production costs or force emergency processing and missed delivery dates. Missing a deadline results in late-delivery penalties and adversely affects future business and receivables collection. To address these problems, this study seeks a machine learning model that estimates the total production time of new orders for a company operating a make-to-order production system. Order, production, and process performance records are used as training data. We compare and analyze the OLS, Gamma GLM, Extra Trees, and Random Forest algorithms for estimating total production time and present the results.
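As a rough illustration of the comparison described above, the sketch below fits the four model families named in the abstract to synthetic order data; the features, target, and error metric are assumptions for illustration, not the paper's dataset or code.

```python
# Hypothetical sketch: comparing OLS, Gamma GLM, Extra Trees, and Random
# Forest on make-to-order production-time data. All data here is synthetic.
import numpy as np
import statsmodels.api as sm
from sklearn.ensemble import ExtraTreesRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(500, 6))  # stand-in order/process features
# Total production time in days: positive and right-skewed, like durations
y = 30 + 60 * X[:, 0] + 20 * X[:, 1] ** 2 + rng.gamma(2.0, 5.0, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# OLS and the two tree ensembles via scikit-learn
for name, model in [("OLS", LinearRegression()),
                    ("Extra Trees", ExtraTreesRegressor(random_state=0)),
                    ("Random Forest", RandomForestRegressor(random_state=0))]:
    model.fit(X_tr, y_tr)
    print(f"{name:13s} MAE: {mean_absolute_error(y_te, model.predict(X_te)):.2f}")

# Gamma GLM with a log link via statsmodels: a natural fit for positive,
# right-skewed duration targets
glm = sm.GLM(y_tr, sm.add_constant(X_tr),
             family=sm.families.Gamma(link=sm.families.links.Log())).fit()
pred = glm.predict(sm.add_constant(X_te))
print(f"{'Gamma GLM':13s} MAE: {mean_absolute_error(y_te, pred):.2f}")
```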

Characteristics of Environmental Factors and Vegetation Community of Zabelia tyaihyonii (Nakai) Hisauti & H.Hara among the Target Plant Species for Conservation in Baekdudaegan (백두대간 중점보전종인 댕강나무의 식생 군집 및 환경인자 특성)

  • Kim, Ji-Dong;Lee, Hye-Jeong;Lee, Dong-Hyuk;Byeon, Jun Gi;Park, Byeong Joo;Heo, Tae-Im
    • Journal of Korean Society of Forest Science
    • /
    • v.111 no.2
    • /
    • pp.201-223
    • /
    • 2022
  • Species extinctions are currently increasing due to climate change and continued anthropogenic impact. We selected 300 species for priority conservation among plants occurring in the Baekdudaegan area, a major ecological axis of Korea. We aimed to investigate the vegetation community and environmental characteristics of Zabelia tyaihyonii, one of these target species, in its limestone habitat in the Baekdudaegan region, in order to derive effective conservation strategies. We selected 36 investigation sites in Danyang-gun, Yeongwol-gun, and Jecheon-si where Z. tyaihyonii was present, and investigated the vegetation, flora, soil, and physical environment. We found notable plants such as Thalictrum petaloideum, Sillaphyton podagraria, and Neillia uekii at the investigation sites. We classified the forest vegetation into 4 vegetation units and 7 species-group types. Canonical correspondence analysis (CCA) of the vegetation community and habitat factors yielded an overall explanatory power of 75.2%, and we classified the environmental characteristics of the habitat of Z. tyaihyonii into three groups, which were related to the environmental factors elevation, slope, organic matter, rock ratio, pH, potassium, and sodium. Because we identified numerous rare and endemic plants, including Thalictrum petaloideum, at the investigation sites, we determined that these communities need to be preserved at the habitat level. The classification of vegetation units based on the occurring plants and the CCA reaffirmed the uniqueness and specificity of the vegetation community in the habitat of Z. tyaihyonii. We anticipate that our results will serve as scientific evidence for the empirical conservation of the native habitats of Z. tyaihyonii.

Cooperative Sales Promotion in Manufacturer-Retailer Channel under Unplanned Buying Potential (비계획구매를 고려한 제조업체와 유통업체의 판매촉진 비용 분담)

  • Kim, Hyun Sik
    • Journal of Distribution Research
    • /
    • v.17 no.4
    • /
    • pp.29-53
    • /
    • 2012
  • As sales promotion methods diversify, manufacturers and retailers in a channel increasingly use them as well, and diverse issues of sales promotion management arise. One of them is unplanned buying. Consumers' unplanned buying clearly benefits the retailer but not the manufacturer. This asymmetric influence should be dealt with prudently because it can provoke channel conflict. However, few studies have examined sales promotion management strategy considering unplanned buying and its asymmetric effect on retailer and manufacturer. In this paper, we look for a better way for a manufacturer in a channel to promote performance through the retailer's sales promotion efforts when there is potential for an unplanned buying effect. Using game-theoretic modeling, we investigate the optimal cost sharing level between the manufacturer and retailer in the presence of unplanned buying, addressing the following issues: (1) What cost sharing mechanism should the manufacturer and retailer in a channel choose when the unplanned buying effect is strong (or weak)? (2) How much payoff do the manufacturer and retailer obtain when the unplanned buying effect is strong (or weak)? We focus on the impact of the unplanned buying effect on the optimal cost sharing mechanism for sales promotions between a manufacturer and a retailer in the same channel, so we consider two players interacting in one distribution channel. The model is a complete-information game in which the manufacturer is the Stackelberg leader and the retailer is the follower. The manufacturer's objective function in the basic game is $\Pi=\Pi_1+\Pi_2$, where $\Pi_1=w_1(1+L-p_1)-\psi^2$ and $\Pi_2=w_2(1-\epsilon L-p_2)$. The retailer's is $\pi=\pi_1+\pi_2$, where $\pi_1=(p_1-w_1)(1+L-p_1)-L(L-\psi)+p_u(b+L-p_u)$ and $\pi_2=(p_2-w_2)(1-\epsilon L-p_2)$. The game has four stages over two periods. (Stage 1) The manufacturer sets the first-period wholesale price ($w_1$) and the cost sharing level of channel sales promotion ($\psi$). (Stage 2) The retailer sets the retail price of the focal brand ($p_1$), the price of the unplanned buying item ($p_u$), and the sales promotion level ($L$). (Stage 3) The manufacturer sets the second-period wholesale price ($w_2$). (Stage 4) The retailer sets the second-period retail price ($p_2$). Since the model is a dynamic game, we derive a subgame perfect equilibrium, using backward induction to solve the problems from stage 4 back to stage 1. By fully anticipating the follower's optimal reaction to the leader's potential actions, we can fold the game tree backward and obtain the equilibrium value of each variable in the basic game. We also analyze an additional game that incorporates the retailer's procurement cost for the unplanned buying item. The manufacturer's objective function in the additional game is the same as in the basic game: $\Pi=\Pi_1+\Pi_2$, where $\Pi_1=w_1(1+L-p_1)-\psi^2$ and $\Pi_2=w_2(1-\epsilon L-p_2)$.
But the retailer's objective function differs from that of the basic game: $\pi=\pi_1+\pi_2$, where $\pi_1=(p_1-w_1)(1+L-p_1)-L(L-\psi)+(p_u-c)(b+L-p_u)$ and $\pi_2=(p_2-w_2)(1-\epsilon L-p_2)$. Solving this game likewise yields the equilibrium value of each variable. The major findings of the current study are as follows: (1) As the unplanned buying effect gets stronger, the manufacturer and retailer should increase spending on sales promotion. (2) As the unplanned buying effect gets stronger, the manufacturer should decrease its share of the total sales promotion cost. (3) The manufacturer's profit is an increasing function of the unplanned buying effect. (4) All of results (1)-(3) are attenuated as the retailer's procurement cost $c$ for unplanned buying items increases. We discuss the implications of these results for marketers at manufacturers and retailers. This study is the first to suggest managerial guidance on how a manufacturer should share sales promotion costs with its retailer in a channel depending on whether consumers' unplanned buying potential is high or low.
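To make the backward-induction procedure concrete, here is a minimal symbolic sketch of the basic game, derived only from the payoff functions and stage structure quoted above; it is an illustration, not the authors' code.

```python
# A minimal sketch that solves the basic game by backward induction,
# using only the payoffs stated in the abstract.
import sympy as sp

w1, w2, p1, p2, pu, L, psi, eps, b = sp.symbols(
    'w_1 w_2 p_1 p_2 p_u L psi epsilon b')

# Second-period payoffs
Pi2 = w2 * (1 - eps * L - p2)          # manufacturer
pi2 = (p2 - w2) * (1 - eps * L - p2)   # retailer

# Stage 4: retailer sets p2, given w2
p2_star = sp.solve(sp.diff(pi2, p2), p2)[0]
# Stage 3: manufacturer sets w2, anticipating p2*
w2_star = sp.solve(sp.diff(Pi2.subs(p2, p2_star), w2), w2)[0]
p2_star = sp.simplify(p2_star.subs(w2, w2_star))

# Stage 2: retailer sets p1, p_u, L, anticipating the second period
pi1 = (p1 - w1) * (1 + L - p1) - L * (L - psi) + pu * (b + L - pu)
pi_total = pi1 + pi2.subs({w2: w2_star, p2: p2_star})
stage2 = sp.solve([sp.diff(pi_total, v) for v in (p1, pu, L)],
                  [p1, pu, L], dict=True)[0]

# Stage 1: manufacturer sets w1 and psi, anticipating all later stages
Pi1 = w1 * (1 + L - p1) - psi ** 2
Pi_total = (Pi1 + Pi2.subs({w2: w2_star, p2: p2_star})).subs(stage2)
stage1 = sp.solve([sp.diff(Pi_total, v) for v in (w1, psi)],
                  [w1, psi], dict=True)[0]

print("w1* =", sp.simplify(stage1[w1]))
print("psi* =", sp.simplify(stage1[psi]))
```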

A Folksonomy Ranking Framework: A Semantic Graph-based Approach (폭소노미 사이트를 위한 랭킹 프레임워크 설계: 시맨틱 그래프기반 접근)

  • Park, Hyun-Jung;Rho, Sang-Kyu
    • Asia pacific journal of information systems
    • /
    • v.21 no.2
    • /
    • pp.89-116
    • /
    • 2011
  • In collaborative tagging systems such as Delicious.com and Flickr.com, users assign keywords or tags to their uploaded resources, such as bookmarks and pictures, for future use or sharing. The collection of resources and tags generated by a user is called a personomy, and the collection of all personomies constitutes the folksonomy. The most significant need of folksonomy users is to efficiently find useful resources or experts on specific topics. An excellent ranking algorithm would assign higher rank to more useful resources or experts. What resources are considered useful in a folksonomic system? Does a standard superior to frequency or freshness exist? A resource recommended by more users with more expertise should be worthier of attention. This ranking paradigm can be implemented through a graph-based ranking algorithm. Two well-known representatives of such a paradigm are PageRank by Google and HITS (Hypertext Induced Topic Selection) by Kleinberg. Both PageRank and HITS assign a higher evaluation score to pages linked to by more high-scored pages; HITS differs from PageRank in that it utilizes two kinds of scores, authority and hub scores. The ranking objects of these algorithms are limited to Web pages, whereas the ranking objects of a folksonomic system are heterogeneous (i.e., users, resources, and tags). Therefore, uniformly applying the voting notion of PageRank and HITS to the links of a folksonomy would be unreasonable. In a folksonomic system, each link corresponding to a property can have an opposite direction, depending on whether the property is in the active or the passive voice. The current research stems from the idea that a graph-based ranking algorithm could be applied to the folksonomic system using the concept of mutual interactions between entities, rather than the voting notion of PageRank or HITS. The concept of mutual interactions, originally proposed for ranking Semantic Web resources, enables the calculation of importance scores of various resources unaffected by link directions. The weight of a property representing the mutual interaction between classes is assigned depending on the relative significance of the property to the resource importance of each class. This class-oriented approach is based on the fact that the Semantic Web contains many heterogeneous classes, so applying a different appraisal standard to each class is more reasonable. This is similar to how humans evaluate: different items are assigned specific weights, which are then summed into a weighted average. We can also check for missing properties more easily with this approach than with predicate-oriented approaches. A user of a tagging system usually assigns more than one tag to the same resource, and there can be more than one tag with the same subjectivity and objectivity. When many users assign similar tags to the same resource, it becomes necessary to grade the users differently depending on the assignment order. This idea comes from studies in psychology wherein expertise involves the ability to select the most relevant information for achieving a goal. An expert should be someone who not only has a large collection of documents annotated with a particular tag, but also tends to add documents of high quality to his/her collection. Such documents are identified by the number, as well as the expertise, of users who have the same documents in their collections.
In other words, there is a relationship of mutual reinforcement between the expertise of a user and the quality of a document. In addition, there is a need to rank entities related more closely to a certain entity. Considering the property of social media that the popularity of a topic is temporary, recent data should carry more weight than old data. We propose a comprehensive folksonomy ranking framework in which all these considerations are addressed and which can easily be customized to each folksonomy site for ranking purposes. To examine the validity of our ranking algorithm and show the mechanism of adjusting property, time, and expertise weights, we first use a dataset designed for analyzing the effect of each ranking factor independently. We then show the ranking results of a real folksonomy site, with the ranking factors combined. Because the ground truth of a given dataset is not known when it comes to ranking, we inject simulated data whose ranking results can be predicted into the real dataset and compare the ranking results of our algorithm with those of a previous HITS-based algorithm. Our semantic ranking algorithm based on the concept of mutual interaction appears preferable to the HITS-based algorithm as a flexible folksonomy ranking framework. Some concrete points of difference are as follows. First, with the time concept applied to the property weights, our algorithm shows superior performance in lowering the scores of older data and raising the scores of newer data. Second, applying the time concept to the expertise weights as well as to the property weights, our algorithm controls the conflicting influence of expertise weights and enhances the overall consistency of time-valued ranking. The expertise weights of the previous study can act as an obstacle to time-valued ranking because the number of followers increases as time goes on. Third, many new properties and classes can be included in our framework. The previous HITS-based algorithm, based on the voting notion, loses ground when the domain consists of more than two classes, or when other important properties, such as "sent through twitter" or "registered as a friend," are added to the domain. Fourth, there is a big difference in calculation time and memory use between the two kinds of algorithms: while the previous HITS-based algorithm has to execute the multiplication of two matrices twice, this is unnecessary with our algorithm. In our ranking framework, various folksonomy ranking policies can be expressed by combining the ranking factors, and our approach works even if the folksonomy site is not implemented with Semantic Web languages. Above all, the time weight proposed in this paper should be applicable to various domains, including social media, where time value is considered important.
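The abstract does not give the algorithm's equations, but a score-propagation step in the spirit of mutual interaction might look like the following sketch; the entities, property weights, and normalization are illustrative assumptions, not the paper's specification.

```python
# Hypothetical sketch of mutual-interaction ranking over a heterogeneous
# folksonomy graph (users, tags, resources). Entities and property weights
# below are made up for illustration.
import numpy as np

nodes = ["user1", "user2", "tag:ml", "tag:web", "doc1", "doc2"]
n = len(nodes)
W = np.zeros((n, n))

def link(a, b, w):
    """Mutual interaction: influence flows both ways, so W is symmetric
    and the result is unaffected by link direction."""
    i, j = nodes.index(a), nodes.index(b)
    W[i, j] = W[j, i] = w

link("user1", "doc1", 1.0)    # 'posted' property weight (assumed)
link("user2", "doc1", 1.0)
link("user2", "doc2", 1.0)
link("tag:ml", "doc1", 0.5)   # 'annotated-with' property weight (assumed)
link("tag:web", "doc2", 0.5)
link("user1", "tag:ml", 0.3)  # 'used-tag' property weight (assumed)

scores = np.full(n, 1.0 / n)
for _ in range(100):                  # power iteration until convergence
    new = W @ scores
    new /= np.linalg.norm(new, 1)     # keep total score mass fixed
    if np.allclose(new, scores, atol=1e-10):
        break
    scores = new

for name, s in sorted(zip(nodes, scores), key=lambda t: -t[1]):
    print(f"{name:8s} {s:.3f}")
```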

A Two-Stage Learning Method of CNN and K-means RGB Cluster for Sentiment Classification of Images (이미지 감성분류를 위한 CNN과 K-means RGB Cluster 이-단계 학습 방안)

  • Kim, Jeongtae;Park, Eunbi;Han, Kiwoong;Lee, Junghyun;Lee, Hong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.139-156
    • /
    • 2021
  • The biggest reason for using a deep learning model in image classification is that it can consider the relationship between regions by extracting each region's features from the overall information of the image. However, a CNN model may not be suitable for emotional image data that lack distinctive regional features. To address the difficulty of classifying emotion images, researchers propose CNN-based architectures suited to emotion images every year. Studies on the relationship between color and human emotion have also been conducted, showing that different emotions are induced by different colors, and some deep learning studies have applied color information to image sentiment classification. Using an image's color information in addition to the image itself improves emotion classification accuracy over training the model on the image alone. This study proposes two ways to increase accuracy by adjusting the output value after the model classifies an image's emotion; both modify the output based on statistics over the colors of the pictures. The first method finds the most common two-color combinations over all training data, then finds the dominant two-color combination of each test image and corrects the output value according to the class-wise color-combination distribution. The second method weights the model's output value through expressions based on the log and exponential functions. For image data we used Emotion6, classified into six emotions, and Artphoto, classified into eight categories. Densenet169, Mnasnet, Resnet101, Resnet152, and Vgg19 architectures were used for the CNN model, and performance was compared before and after applying the two-stage learning to each. Inspired by color psychology, which deals with the relationship between colors and emotions, we studied how to improve accuracy by modifying output values based on color when building a model that classifies an image's sentiment. Sixteen reference colors were used: red, orange, yellow, green, blue, indigo, purple, turquoise, pink, magenta, brown, gray, silver, gold, white, and black. Using scikit-learn's K-means clustering, the seven colors primarily distributed in each image are extracted; the RGB coordinates of these colors are then compared with the RGB coordinates of the 16 reference colors, i.e., each is converted to the closest reference color. If combinations of three or more colors were selected, too many distinct combinations would occur, the distribution would become scattered, and each combination would have too little influence on the output value. To avoid this problem, two-color combinations were found and used to weight the model. Before training, the most common color combinations were found for all training images, and the distribution of color combinations for each class was stored in a Python dictionary for use during testing. During the test, the two-color combination most prevalent in each test image is found; we then check how that combination is distributed over the training data and correct the result. We devised several equations to weight the model's output value based on the extracted colors as described above.
The dataset was randomly split 80:20, and the model was verified using the 20% test set. The remaining 80% was split into five folds for 5-fold cross-validation, so the model was trained five times using different validation sets. Finally, performance was checked using the previously separated test set. Adam was used as the optimizer, with the learning rate set to 0.01. Training ran for up to 20 epochs, and if the validation loss did not decrease over five epochs, training was stopped; early stopping was configured to restore the model with the best validation loss. Classification accuracy was better when the extracted color information was used together with the CNN architecture than when the CNN architecture was used alone.
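A minimal sketch of the color stage as described, assuming illustrative palette values and a hypothetical class-wise color-combination dictionary; the log-based weighting stands in for the paper's log/exponential expressions.

```python
# Hypothetical sketch of the color-based correction stage: extract dominant
# colors with K-means, snap them to a fixed reference palette, and reweight
# the CNN's class scores by the training-set color-combination distribution.
import numpy as np
from sklearn.cluster import KMeans

PALETTE = {  # a subset of the 16 reference colors; RGB values are approximate
    "red": (255, 0, 0), "orange": (255, 165, 0), "yellow": (255, 255, 0),
    "green": (0, 128, 0), "blue": (0, 0, 255), "black": (0, 0, 0),
    "white": (255, 255, 255), "gray": (128, 128, 128),
}

def dominant_pair(image_rgb: np.ndarray, k: int = 7) -> tuple:
    """Cluster pixels into k colors, map each centroid to the nearest
    palette color, and return the two most frequent palette colors."""
    pixels = image_rgb.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
    names = np.array(list(PALETTE))
    ref = np.array(list(PALETTE.values()), dtype=float)
    counts = {}
    for center, size in zip(km.cluster_centers_,
                            np.bincount(km.labels_, minlength=k)):
        name = names[np.argmin(np.linalg.norm(ref - center, axis=1))]
        counts[name] = counts.get(name, 0) + size
    top2 = sorted(counts, key=counts.get, reverse=True)[:2]
    return tuple(sorted(top2))

def correct_scores(cnn_probs: np.ndarray, pair: tuple,
                   pair_dist: dict) -> np.ndarray:
    """Reweight CNN class probabilities by how often this color pair occurred
    per class in training (log-damped so color never dominates the CNN)."""
    freq = np.array(pair_dist.get(pair, np.ones_like(cnn_probs)), dtype=float)
    weighted = cnn_probs * np.log1p(freq)
    return weighted / weighted.sum()
```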

Information Privacy Concern in Context-Aware Personalized Services: Results of a Delphi Study

  • Lee, Yon-Nim;Kwon, Oh-Byung
    • Asia pacific journal of information systems
    • /
    • v.20 no.2
    • /
    • pp.63-86
    • /
    • 2010
  • Personalized services directly and indirectly acquire personal data, in part to provide customers with higher-value services that are context-relevant (such as place and time). Information technologies continue to mature, providing greatly improved performance; sensor networks and intelligent software can now obtain context data, which is the cornerstone for providing personalized, context-specific services. Yet the danger of personal information leakage is increasing because the data retrieved by sensors usually contain private information, and various technical characteristics of context-aware applications have troubling implications for information privacy. In parallel with the increasing use of context for service personalization, information privacy concerns have also increased, such as concern over the unrestricted availability of context information. These privacy concerns are consistently regarded as a critical issue facing the success of context-aware personalized services. The entire field of information privacy is growing as an important area of research, with many new definitions and terminologies, because of the need for a better understanding of information privacy concepts. In particular, the factors of information privacy need to be revised according to the characteristics of new technologies. However, previous information privacy factors for context-aware applications have at least two shortcomings. First, there has been little overview of the technological characteristics of context-aware computing: existing studies have focused on only a small subset of them, so no mutually exclusive set of factors uniquely and completely describes information privacy in context-aware applications. Second, most studies have relied on user surveys to identify information privacy factors despite the limits of users' knowledge and experience with context-aware computing technology. Since context-aware services have not yet been widely deployed on a commercial scale, very few people have prior experience with context-aware personalized services, and it is difficult to build users' knowledge about the technology even with scenarios, pictures, flash animations, etc. A survey that assumes participants have sufficient experience or understanding of the technologies shown may therefore not be valid. Moreover, some surveys rest on simplifying and hence unrealistic assumptions (e.g., considering only location information as context data). A better understanding of information privacy concern in context-aware personalized services is thus highly needed. Hence, the purpose of this paper is to identify a generic set of factors for elemental information privacy concern in context-aware personalized services and to develop a rank-ordered list of those factors. We consider overall technology characteristics to establish a mutually exclusive set of factors. A Delphi survey, a rigorous data collection method, was deployed to obtain reliable opinions from experts and to produce a rank-ordered list; it therefore lends itself well to obtaining a set of universal factors of information privacy concern and their priority.
An international panel of researchers and practitioners with expertise in privacy and context-aware systems was involved in our research. The Delphi rounds faithfully followed the procedure proposed by Okoli and Pawlowski, involving three general rounds: (1) brainstorming for important factors; (2) narrowing the original list down to the most important ones; and (3) ranking the list of important factors. In the brainstorming round only, experts were treated as individuals, not as panels. Adapting Okoli and Pawlowski, we outlined the process of administering the study and performed three rounds. In the first and second rounds of the Delphi questionnaire, we gathered a mutually exclusive set of factors for information privacy concern in context-aware personalized services. In the first round, respondents were asked to provide at least five main factors for the most appropriate understanding of information privacy concern; to assist them, some of the main factors found in the literature were presented. The second round of the questionnaire discussed the main factors provided in the first round, fleshed out with relevant sub-factors drawn from the literature survey; respondents were requested to evaluate each sub-factor's suitability against the corresponding main factor to determine the final sub-factors from the candidates. Factors selected by over 50% of the experts were retained. In the third round, a list of factors with corresponding questions was provided, and respondents were requested to assess the importance of each main factor and its corresponding sub-factors. Finally, we calculated the mean rank of each item to produce the final result. While analyzing the data, we focused on group consensus rather than individual insistence; to do so, a concordance analysis, which measures the consistency of the experts' responses over successive rounds of the Delphi, was adopted during the survey process. As a result, experts rated context data collection and a highly identifiable level of identity data as the most important main factor and sub-factor, respectively. Additional important sub-factors included the diverse types of context data collected, tracking and recording functionalities, and embedded and disappearing sensor devices. The average score of each factor is very useful for future context-aware personalized service development from the viewpoint of information privacy. The final factors differ from those proposed in other studies in the following ways. First, existing studies are based on privacy issues that may occur during the lifecycle of acquired user information, whereas our study helped clarify these sometimes vague issues by determining which privacy concerns are viable given the specific technical characteristics of context-aware personalized services. Since a context-aware service differs in its technical characteristics from other services, we selected the characteristics with the highest potential to increase users' privacy concerns. Second, this study considered privacy issues in terms of service delivery and display, which were almost overlooked in existing studies, by introducing IPOS as the factor division. Lastly, for each factor, it correlated the level of importance with professionals' opinions on the extent of users' privacy concerns.
A traditional user questionnaire was not chosen because, for context-aware personalized services, users lack understanding of and experience with the new technology. Regarding users' privacy concerns, the professionals in the Delphi process selected context data collection, tracking and recording, and the sensory network as the most important technological characteristics of context-aware personalized services. For the creation of context-aware personalized services, this study demonstrates the importance of determining an optimal methodology, including which technologies are needed and in what sequence, to acquire which types of users' context information. Most studies focus on which services and systems should be provided and developed by utilizing context information, alongside the development of context-aware technology; however, our results show that, in terms of users' privacy, greater attention must be paid to the activities that acquire context information. In view of the sub-factor evaluation results, additional studies will be necessary on approaches to reducing users' privacy concerns about technological characteristics such as a highly identifiable level of identity data, diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The factor ranked next in importance after input is context-aware service delivery, which relates to output. The results show that delivery and display, which present services to users under the anywhere-anytime-any-device concept, are regarded as even more important in context-aware personalized services than in previous computing environments. Considering these concern factors when developing context-aware personalized services will help increase service success rates and, hopefully, user acceptance. Our future work will be to adopt these factors for qualifying context-aware service development projects, such as u-city development projects, in terms of service quality and hence user acceptance.
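The concordance analysis mentioned above is typically computed as Kendall's coefficient of concordance W, which measures agreement among k experts ranking n items; here is a minimal sketch with made-up expert rankings, not the study's data.

```python
# Kendall's W for a Delphi round: 0 means no agreement among experts,
# 1 means perfect agreement. Rankings below are invented for illustration.
import numpy as np

def kendalls_w(rankings: np.ndarray) -> float:
    """rankings: shape (k, n); each row is one expert's ranking 1..n
    of the same n factors (no ties assumed)."""
    k, n = rankings.shape
    rank_sums = rankings.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()  # spread of rank sums
    return 12 * s / (k ** 2 * (n ** 3 - n))

experts = np.array([  # 4 experts ranking 5 privacy-concern factors
    [1, 2, 3, 4, 5],
    [1, 3, 2, 4, 5],
    [2, 1, 3, 5, 4],
    [1, 2, 4, 3, 5],
])
print(f"Kendall's W = {kendalls_w(experts):.3f}")  # recompute each round
```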

Development of Predictive Models for Rights Issues Using Financial Analysis Indices and Decision Tree Technique (경영분석지표와 의사결정나무기법을 이용한 유상증자 예측모형 개발)

  • Kim, Myeong-Kyun;Cho, Yoonho
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.4
    • /
    • pp.59-77
    • /
    • 2012
  • This study focuses on predicting which firms will increase capital by issuing new stocks in the near future. Many stakeholders, including banks, credit rating agencies, and investors, perform a variety of analyses of firms' growth, profitability, stability, activity, productivity, etc., and regularly report firms' financial analysis indices. In this paper, we develop predictive models for rights issues using these financial analysis indices and data mining techniques. We approach the model building from two perspectives. The first is the analysis period: we divide it into before and after the IMF financial crisis and examine whether there is a difference between the two periods. The second is the prediction horizon: to predict when firms will increase capital by issuing new stocks, the prediction time is categorized as one, two, or three years later. In total, therefore, six prediction models are developed and analyzed. We employ the decision tree technique to build the prediction models for rights issues. The decision tree is the most widely used prediction method, building trees that label or categorize cases into a set of known classes; in contrast to neural networks, logistic regression, and SVM, decision tree techniques are well suited to high-dimensional applications and have strong explanatory capabilities. Among the well-known decision tree induction algorithms such as CHAID, CART, QUEST, and C5.0, we use the C5.0 algorithm, the most recently developed, which yields better performance than the others. We obtained rights issue and financial analysis data from TS2000 of the Korea Listed Companies Association. A record of financial analysis data consists of 89 variables: 9 growth indices, 30 profitability indices, 23 stability indices, 6 activity indices, and 8 productivity indices. For model building and testing, we used 10,925 financial analysis records of 658 listed firms in total. PASW Modeler 13 was used to build C5.0 decision trees for the six prediction models. A total of 84 variables among the financial analysis data were selected as the input variables of each model, and the rights issue status (issued or not issued) was defined as the output variable. To develop the prediction models using the C5.0 node (Node Options: Output type = Rule set, Use boosting = false, Cross-validate = false, Mode = Simple, Favor = Generality), we used 60% of the data for model building and 40% for testing. The experimental results show that the prediction accuracies for data after the IMF financial crisis (68.78% to 71.41%) are about 10 percentage points higher than those for data before the crisis (59.04% to 60.43%). These results indicate that since the IMF financial crisis, the reliability of financial analysis indices has increased and firms' intentions regarding rights issues have become more evident. The experiments also show that stability-related indices have a major impact on conducting a rights issue in short-term prediction, whereas long-term prediction of a rights issue is affected by financial analysis indices on profitability, stability, activity, and productivity. All the prediction models include the industry code as a significant variable, meaning that companies in different types of industries show different patterns of rights issues.
We conclude that stakeholders should take into account stability-related indices for short-term prediction and a wider range of financial analysis indices for long-term prediction. The current study has several limitations. First, the differences in accuracy should be compared using other data mining techniques such as neural networks, logistic regression, and SVM. Second, new prediction models should be developed and evaluated that include variables which research on capital structure theory has identified as relevant to rights issues.
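Since C5.0 is proprietary to tools like PASW Modeler, a rough open-source approximation of the setup described above (60/40 split, entropy-based tree, rule inspection) might look like the following sketch; the data is synthetic, not the TS2000 dataset.

```python
# Hypothetical sketch of the modeling setup: 60/40 train/test split and an
# entropy-criterion decision tree as a stand-in for the C5.0 node.
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 84))  # stand-ins for the 84 financial indices
# Rights issue status: 1 = issued, 0 = not issued (synthetic relation)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.6,
                                          random_state=0)

tree = DecisionTreeClassifier(criterion="entropy", max_depth=5,
                              random_state=0)
tree.fit(X_tr, y_tr)
print(f"test accuracy: {accuracy_score(y_te, tree.predict(X_te)):.4f}")
print(export_text(tree, max_depth=2))  # inspect the top split rules
```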

The Roles of Service Failure and Recovery Satisfaction in Customer-Firm Relationship Restoration : Focusing on Carry-over effect and Dynamics among Customer Affection, Customer Trust and Loyalty Intention Before and After the Events (서비스실패의 심각성과 복구만족이 고객-기업 관계회복에 미치는 영향 : 실패이전과 복구이후 고객애정, 고객신뢰, 충성의도의 이월효과 및 역학관계 비교를 중심으로)

  • La, Sun-A
    • Journal of Distribution Research
    • /
    • v.17 no.1
    • /
    • pp.1-36
    • /
    • 2012
  • Service failure is one of the major reasons for customer defection. As the business environment gets tougher and more competitive, a single service failure can bring fatal consequences to a service provider or a firm. Sometimes a failure ends not with an unsatisfied customer's simple complaint but with widespread animosity against the service provider or the firm, threatening the firm's very survival. Therefore, we need a comprehensive understanding of complainants' attitudes and behaviors toward service failures and firms' recovery efforts. Even when a failure itself cannot be fixed completely, marketers should repair the minds and hearts of unsatisfied customers, which can be regarded as a successful recovery strategy in the end. Among the outcomes of recovery efforts exerted by service providers or firms, recovery of the relationship between customer and service provider should be placed at the top of the recovery goal list. With these motivations, this study investigates how service failure and recovery change the dynamics of the fundamental elements of the customer-firm relationship, such as customer affection, customer trust, and loyalty intention, by comparing two time points, before the service failure and after the recovery, focusing on the effects of recovery satisfaction and failure severity. We adopted La & Choi (2012)'s framework to develop the research model, which builds on the previous research stream of Yim et al. (2008) and Thomson et al. (2005). The pivotal background theories of the model come mainly from relationship marketing and the social psychology of social relationships; for example, love, emotional attachment, intimacy, and equity theories regarding human relationships were reviewed. The results show that when recovery satisfaction is high, the customer affection and customer trust established before the service failure are carried over into the future after the recovery. When recovery satisfaction is low, however, the customer-firm relationship established in the past is not carried over but broken up. Regardless of the degree of recovery satisfaction, once a failure occurs, loyalty intention is not carried over to the future, and the impact of customer trust on loyalty intention becomes stronger. Such changes imply that customers become more prudent and more risk-averse than before the service failure. The impact of failure severity on customer affection and customer trust matters only when recovery satisfaction is low; when recovery satisfaction is high, customer affection and customer trust become severity-proof. Interestingly, regardless of the degree of recovery satisfaction, failure severity has a significant negative influence on loyalty intention: loyalty intention is the most fragile target once a service failure occurs, no matter how severe the failure is. Consequently, the ultimate goal of service recovery should be the restoration of the customer-firm relationship, and recovery of customer trust should be the primary objective for successful recovery performance. Especially when failure severity is high, the recovery should be perceived as highly satisfactory by the complainants, because failure severity matters more when recovery satisfaction is low. Marketers can implement recovery strategies that enhance emotional appeals as well as fair treatment, since the impacts of both affection and trust on loyalty intention are significant.
In cases of high failure severity, recovery efforts should be designed to exceed customer expectations, to repair customer trust directly, and to focus carefully on customer-firm communications during the interactional recovery process so as to rebuild customer trust indirectly. Because rebuilding the customer-firm relationship after severe failures takes longer and is harder, low recovery satisfaction cannot guarantee customer retention. To prevent customer defection due to severe service failures, unexpected rewards as part of recovery are likely to be useful, since they lead to customer delight or gratitude toward the service firm. Based on the results of the analyses, theoretical and managerial implications are presented; limitations and future research ideas are also discussed.

Intelligent Brand Positioning Visualization System Based on Web Search Traffic Information : Focusing on Tablet PC (웹검색 트래픽 정보를 활용한 지능형 브랜드 포지셔닝 시스템 : 태블릿 PC 사례를 중심으로)

  • Jun, Seung-Pyo;Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.3
    • /
    • pp.93-111
    • /
    • 2013
  • As the Internet and information technology (IT) continue to develop and evolve, the issue of big data has come to the foreground of scholarly and industrial attention. Big data is generally defined as data that exceed the range that can be collected, stored, managed, and analyzed by conventional information systems; the term also refers to the new technologies designed to extract value from such data effectively. With the widespread dissemination of IT systems, continual efforts have been made in various fields of industry, such as R&D, manufacturing, and finance, to collect and analyze immense quantities of data in order to extract meaningful information and use it to solve various problems. Since IT has converged with various industries in many aspects, digital data are now being generated at a remarkably accelerating rate, while developments in state-of-the-art technology have led to continual enhancements in system performance. The types of big data currently receiving the most attention include information available within companies, such as information on consumer characteristics, purchase records, logistics, and logs indicating consumers' usage of products and services, as well as information accumulated outside companies, such as the web search traffic of online users, social network information, and patent information. Among these various types of big data, web searches performed by online users constitute one of the most effective and important sources of information for marketing purposes, because consumers search for information on the internet in order to make efficient and rational choices. Recently, Google has provided public access to its information on the web search traffic of online users through a service named Google Trends. Research that uses this web search traffic information to analyze the information search behavior of online users is now receiving much attention in academia and industry. Studies using web search traffic information can be broadly classified into two fields. The first consists of empirical demonstrations that web search information can be used to forecast social phenomena, the purchasing power of consumers, the outcomes of political elections, etc. The other focuses on using web search traffic information to observe consumer behavior, for example identifying the product attributes that consumers regard as important or tracking changes in consumers' expectations; relatively little research has been completed in this field. In particular, to the best of our knowledge, hardly any brand-related studies have yet attempted to use web search traffic information to analyze the factors that influence consumers' purchasing activities. This study aims to demonstrate that consumers' web search traffic information can be used to derive the relations among brands and the relations between an individual brand and product attributes. When consumers input their search words on the web, they may use a single keyword, but they also often input multiple keywords to seek related information (this is referred to as simultaneous searching). A consumer performs a simultaneous search either to compare two product brands and obtain information on their similarities and differences, or to acquire more in-depth information about a specific attribute of a specific brand.
Web search traffic information shows that the quantity of simultaneous searches on certain keywords increases as the relation between them grows closer in consumers' minds, so the relations among keywords can be derived by collecting this relational data and subjecting it to network analysis. Accordingly, this study proposes a method of analyzing how brands are positioned by consumers and what relationships exist between product attributes and an individual brand, using simultaneous search traffic information. It also presents case studies demonstrating the actual application of this method, with a focus on tablet PCs, an innovative product group.
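As a sketch of the proposed network step, one might build a weighted graph from simultaneous-search volumes and read brand positions off a layout; the keywords and volumes below are invented for illustration, not Google Trends data.

```python
# Hypothetical sketch: simultaneous-search volume between keywords
# (brands/attributes) as edge weight, then a positioning map via layout.
import networkx as nx

co_search = {  # (keyword A, keyword B): relative simultaneous-search volume
    ("iPad", "Galaxy Tab"): 80,
    ("iPad", "display"): 55,
    ("Galaxy Tab", "display"): 40,
    ("iPad", "price"): 30,
    ("Galaxy Tab", "price"): 60,
}

G = nx.Graph()
for (a, b), vol in co_search.items():
    G.add_edge(a, b, weight=vol)

# Weighted node strength: how strongly each brand/attribute anchors the map
strength = {n: sum(d["weight"] for _, _, d in G.edges(n, data=True))
            for n in G}

# A 2-D spring layout usable as a positioning map (closer = more co-searched)
pos = nx.spring_layout(G, weight="weight", seed=0)
for node in G:
    x, y = pos[node]
    print(f"{node:11s} strength={strength[node]:3d} pos=({x:+.2f}, {y:+.2f})")
```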