
A Study on the Success Factors of Co-Founding Start-up by Step: Focusing on the Case of Opportunity-type Start-up (공동창업의 단계별 성공요인에 관한 연구: 기회형 창업기업 사례를 중심으로)

  • Yun, Seong Man;Sung, Chang Soo
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.18 no.1
    • /
    • pp.141-158
    • /
    • 2023
  • From an entrepreneur's perspective, one of the most important factors for understanding the inherent limitations of a startup, reducing the risk of failure, and succeeding is the composition of the founding team. A common concern for entrepreneurs in the pre-startup or early startup stage is therefore the choice between founding alone and co-founding. Nonetheless, in Korea the share of independent entrepreneurship is significantly higher than that of co-founding. Noting that many successful global innovative companies were co-founded, this study examined the success factors of co-founding. Most related prior studies identify the capabilities and characteristics of individual entrepreneurs as factors influencing the survival and success of a venture, and there is a lack of research on partnerships, that is, co-founded startups, which are common in the entrepreneurial ecosystem. This study therefore conducted a multi-case study of co-founders of domestic startups that succeeded as opportunity-type startups, using in-depth interviews, collection of relevant data, analysis of contextual information, and review of previous studies. Through this, a model for deriving the stage-by-stage characteristics and key success factors of co-founding was proposed. The study found that the key element of the pre-startup stage was 'opportunity', and the success factors were 'opportunity recognition through the entrepreneur's experience' and 'idea development'. The key element of the early startup stage is the 'founding team', and the success factor is 'trust and complementarity within the founding team'; synergy appears when the 'diversity and homogeneity of the founding team' are in harmony. 
In addition, conflicts between co-founders may occur in the early stage, and these have a large impact on a startup's survival. Such conflicts could be overcome through constant 'mutual understanding and respect through communication' and a 'clear division of work and role sharing'. The core element of the growth stage was confirmed to be 'resources', with 'securing excellent talent' and 'raising external funds' as the key success factors. These results are expected to help startups overcome limitations such as limited resources, lack of experience, and risk of failure. They offer guidance to prospective entrepreneurs preparing a startup at a time when co-founding is attracting attention as an alternative for increasing the success rate, and they have implications for various stakeholders in the entrepreneurial ecosystem.


Lung Injury Indices Depending on Tumor Necrosis Factor-α Level and Novel 35 kDa Protein Synthesis in Lipopolysaccharide-Treated Rat (내독소처치 흰쥐에서 Tumor Necrosis Factor-α치 상승에 따른 폐손상 악화 및 35 kDa 단백질 합성)

  • Choi, Young-Mee;Kim, Young-Kyoon;Kwon, Soon-Seog;Kim, Kwan-Hyoung;Moon, Hwa-Sik;Song, Jeong-Sup;Park, Sung-Hak
    • Tuberculosis and Respiratory Diseases
    • /
    • v.45 no.6
    • /
    • pp.1236-1251
    • /
    • 1998
  • Background: TNF-α appears to be a central mediator of the host response to sepsis. While TNF-α is mainly considered a proinflammatory cytokine, it can also act as a direct cytotoxic cytokine. However, there are few studies on the relationship between TNF-α level and lung injury severity in ALI, particularly in ALI caused by direct lung injury such as diffuse pulmonary infection. Recently, a natural defense mechanism, known as the stress response or heat shock response, has been reported in cellular and tissue injury. A number of reports have examined the protective role of pre-induced heat stress proteins against subsequent LPS-induced TNF-α release from monocytes or macrophages and against subsequent LPS-induced ALI in animals. However, it is not well established whether stress proteins such as HSP can be induced in rat alveolar macrophages by in vitro or in vivo LPS stimulation. Methods: We measured the TNF-α level, the percentage of inflammatory cells in bronchoalveolar lavage fluid, and protein synthesis in alveolar macrophages isolated from rats at 1, 2, 3, 4, 6, 12, and 24 hours after intratracheal LPS instillation. We performed histologic examination and obtained histologic lung injury index scores in lungs from other rats at 1, 2, 3, 4, 6, 12, and 24 h after intratracheal LPS instillation. Isolated non-stimulated macrophages were incubated for 2 h with different concentrations of LPS (0, 1, 10, 100 ng/ml, 1, or 10 µg/ml). Other non-stimulated macrophages were exposed to 43°C for 15 min, returned to 37°C in 5% CO2-95% for 1 hour, and then incubated for 2 h with LPS (0, 1, 10, 100 ng/ml, 1, or 10 µg/ml). Results: TNF-α levels began to increase significantly at 1 h, reached a peak at 3 h (p<0.0001), began to decrease at 6 h, and returned to the control level at 12 h after LPS instillation. 
The percentage of inflammatory cells (neutrophils and alveolar macrophages) began to change significantly at 2 h, reached a peak at 6 h, began to recover but still showed a significant change at 12 h, and showed no significant change at 24 h after LPS instillation compared with the normal control. After LPS instillation, the histologic lung injury index score reached a maximum at 6 h and remained steady for 24 hours. A 35 kDa protein band was newly synthesized in alveolar macrophages from 1 hour through 24 hours after LPS instillation. Inducible heat stress protein 72 was not found in any alveolar macrophages obtained from rats after LPS instillation. TNF-α levels in supernatants of LPS-stimulated macrophages were significantly higher than those of non-stimulated macrophages (p<0.05). Following LPS stimulation, TNF-α levels in supernatants were significantly lower after heat treatment than without heat treatment (p<0.05). Inducible heat stress protein 72 was not found at any concentration of LPS stimulation, whereas the 35 kDa protein band was found exclusively at an LPS dose of 10 µg/ml. Conclusion: TNF-α has a direct or indirect close relationship with lung injury severity in acute lung injury or acute respiratory distress syndrome. In vivo and in vitro LPS stimulation does not induce heat stress protein 72 in alveolar macrophages. It is likely that the 35 kDa protein, synthesized by alveolar macrophages after LPS instillation, does not have a defensive role in acute lung injury.


Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1041-1043
    • /
    • 1993
  • Fuzzy-logic-based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve, in a very natural way, many problems that are very difficult to quantify at an analytical level. This paper shows a solution for treating membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify the implementation, it is common [1,2,8,9,10,11] to limit the membership functions either to triangular or trapezoidal shapes, or to pre-defined shapes. These kinds of functions can cover a large spectrum of applications with limited memory usage, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power due to computation on the medium points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the value of the membership functions on such points [3,10,14,15]. Such a solution provides a satisfying computational speed, a very high precision of definition, and gives users the opportunity to choose membership functions of any shape. However, significant memory waste can also occur: it is possible that, for each of the given fuzzy sets, many elements of the universe of discourse have a membership value equal to zero. It has also been noticed that in almost all cases the common points among fuzzy sets, i.e. points with non-null membership values, are very few. 
More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on these hypotheses. Moreover, we use a technique that, even though it does not restrict the shapes of membership functions, strongly reduces the computation time for the membership values and optimizes the function memorization. Figure 1 represents a term set whose characteristics are common for fuzzy controllers and to which we will refer in the following. The term set has a universe of discourse with 128 elements (so as to have a good resolution), 8 fuzzy sets that describe the term set, and 32 levels of discretization for the membership values. Clearly, the numbers of bits necessary for the given specifications are 5 for the 32 truth levels, 3 for the 8 membership functions, and 7 for the 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and is represented by the memory rows. The length of a word of memory is defined by Length = nfm × (dm(m) + dm(fm)), where nfm is the maximum number of non-null values on every element of the universe of discourse, dm(m) is the dimension of the values of the membership function m, and dm(fm) is the dimension of the word representing the index of the corresponding membership function. In our case, then, Length = 3 × (5 + 3) = 24, and the memory dimension is therefore 128 × 24 bits. If we had chosen to memorize all values of the membership functions, we would have needed to memorize on each memory row the membership value of each element: the fuzzy-set word dimension is 8 × 5 bits, so the dimension of the memory would have been 128 × 40 bits. Coherently with our hypothesis, in fig. 1 each element of the universe of discourse has a non-null membership value on at most three fuzzy sets. 
Focusing on elements 32, 64, and 96 of the universe of discourse, they will be memorized as follows. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the Program Memory (μCOD) is given as input to a comparator (combinatory net). If the index is equal to the bus value, then one of the non-null weights derived from the rule is produced as output; otherwise the output is zero (fig. 2). It is clear that the memory dimension of the antecedent is in this way reduced, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to that of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value on each element of the universe of discourse. From our study in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions. At any rate, such a value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of such parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have a very small influence on the memory space; weight computations are done by a combinatorial network, and therefore the time performance of the system is equivalent to that of the vectorial method; and the number of non-null membership values on any element of the universe of discourse is limited. Such a constraint is usually not very restrictive, since many controllers obtain a good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
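The memory-sizing arithmetic above can be checked with a small sketch. All names (NFM, dm_m, dm_fm, the contents of the example row) are illustrative stand-ins for the parameters in the text, not taken from any actual hardware design:

```python
# Minimal sketch of the sparse membership memorization: per universe element,
# store at most NFM (set-index, value) pairs instead of one value per fuzzy set.
# Parameter values follow the example in the text; names are illustrative.

U_SIZE = 128   # elements in the universe of discourse
N_SETS = 8     # fuzzy sets in the term set
LEVELS = 32    # discretization levels for membership values
NFM = 3        # max non-null memberships per universe element

dm_m = LEVELS.bit_length() - 1    # 5 bits per membership value
dm_fm = N_SETS.bit_length() - 1   # 3 bits per fuzzy-set index

# Word length per memory row: Length = nfm * (dm(m) + dm(fm))
length = NFM * (dm_m + dm_fm)     # 3 * (5 + 3) = 24 bits
sparse_bits = U_SIZE * length     # 128 * 24 bits in total

# Dense (vectorial) alternative: one value per fuzzy set per element
dense_bits = U_SIZE * N_SETS * dm_m   # 128 * 40 bits

def membership(row, set_index):
    """Return the memorized membership value, or 0 when not stored (null)."""
    for idx, val in row:
        if idx == set_index:
            return val
    return 0

row_32 = [(2, 17), (3, 30), (4, 9)]   # hypothetical row for element 32
print(length, sparse_bits, dense_bits, membership(row_32, 3))
```

Looking up an absent index returns 0, which is exactly the "null membership" case the scheme avoids storing.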


Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn;Chung, Yeojin;Lee, Jaejoon;Yang, Jiheon
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.71-88
    • /
    • 2017
  • Language models were originally developed for speech recognition and language processing. Using a set of example sentences, a language model predicts the next word or character based on sequential input data. N-gram models have been widely used, but they cannot model the correlation between input units efficiently, since they are probabilistic models based on the frequency of each unit in the training set. Recently, as deep learning algorithms have developed, recurrent neural network (RNN) models and long short-term memory (LSTM) models have been widely used as neural language models (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can reflect dependency between objects that are entered sequentially into the model (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). In order to learn a neural language model, texts need to be decomposed into words or morphemes. However, since a training set of sentences generally includes a huge number of words or morphemes, the dictionary becomes very large, which increases model complexity. In addition, word-level or morpheme-level models can generate only vocabulary contained in the training set. Furthermore, with highly morphological languages such as Turkish, Hungarian, Russian, Finnish, or Korean, morpheme analyzers are more likely to cause errors in the decomposition process (Lankinen et al., 2016). Therefore, this paper proposes a phoneme-level language model for Korean based on LSTM models. A phoneme, such as a vowel or a consonant, is the smallest unit that comprises Korean texts. We construct the language model using three or four LSTM layers. Each model was trained using the stochastic gradient algorithm and more advanced optimization algorithms such as Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. A simulation study was done with Old Testament texts using the deep learning package Keras, based on Theano. 
After pre-processing the texts, the dataset included 74 unique characters, including vowels, consonants, and punctuation marks. We then constructed input vectors of 20 consecutive characters and outputs consisting of the following 21st character. In total, 1,023,411 input-output pairs were included in the dataset, which we divided into training, validation, and test sets in a 70:15:15 proportion. All simulations were conducted on a system equipped with an Intel Xeon CPU (16 cores) and an NVIDIA GeForce GTX 1080 GPU. We compared the loss function evaluated on the validation set, the perplexity evaluated on the test set, and the time taken to train each model. As a result, all optimization algorithms except the stochastic gradient algorithm showed similar validation loss and perplexity, clearly superior to those of the stochastic gradient algorithm. The stochastic gradient algorithm also took the longest time to train for both the 3- and 4-layer LSTM models. On average, the 4-LSTM-layer model took 69% longer to train than the 3-LSTM-layer model, yet its validation loss and perplexity were not improved significantly and even became worse under specific conditions. On the other hand, when comparing the automatically generated sentences, the 4-LSTM-layer model tended to generate sentences closer to natural language than the 3-LSTM model. Although there were slight differences in the completeness of the generated sentences between the models, the sentence generation performance was quite satisfactory under all simulation conditions: the models generated only legitimate Korean letters, and the use of postpositions and the conjugation of verbs were almost perfect grammatically. The results of this study are expected to be widely used for processing Korean in the fields of language processing and speech recognition, which are the basis of artificial intelligence systems.
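The sliding-window construction described above (20 input characters predicting the 21st) can be sketched as follows. The English placeholder text and variable names are our own illustration; the paper applies the same windowing to Korean phoneme sequences from the Old Testament:

```python
# Sketch of the dataset construction: sliding windows of 20 characters as
# input, the 21st character as target. The placeholder English text is ours;
# the paper decomposes Korean Old Testament text into phonemes the same way.

text = "In the beginning God created the heaven and the earth."
vocab = sorted(set(text))                        # paper: 74 unique characters
char_to_id = {c: i for i, c in enumerate(vocab)}

WINDOW = 20
pairs = []
for i in range(len(text) - WINDOW):
    x = [char_to_id[c] for c in text[i:i + WINDOW]]   # 20-character input
    y = char_to_id[text[i + WINDOW]]                  # following 21st character
    pairs.append((x, y))

# A 3- or 4-layer LSTM (as in the paper) would then be trained to map each
# x sequence to its y target, using any of the optimizers compared above.
print(len(vocab), len(pairs))
```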

Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.2
    • /
    • pp.29-45
    • /
    • 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining the investment returns of investors. As a result, predicting companies' credit ratings by applying statistical and machine learning techniques has been a popular research topic. The statistical techniques traditionally used in bond rating include multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis. However, one major drawback is that they rest on strict assumptions: linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables and the predictor variables. These strict assumptions have limited the application of traditional statistics to the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and the Support Vector Machine (SVM). In particular, SVM is recognized as a new and promising classification and regression method. SVM learns a separating hyperplane that maximizes the margin between two categories. It is simple enough to be analyzed mathematically, yet leads to high performance in practical applications. SVM implements the structural risk minimization principle and seeks to minimize an upper bound on the generalization error. In addition, the solution of SVM may be a global optimum, so overfitting is unlikely to occur. SVM also does not require many data samples for training, since it builds prediction models using only representative samples near the boundaries, called support vectors. A number of experimental studies have indicated that SVM has been successfully applied in a variety of pattern recognition fields. However, there are three major drawbacks that can potentially degrade SVM's performance. 
First, SVM was originally proposed for solving binary-class classification problems. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not improve performance in multi-class problems as much as SVM does in binary classification. Second, approximation algorithms (e.g. decomposition methods, the sequential minimal optimization algorithm) can be used to reduce the computation time of multi-class problems, but they may deteriorate classification performance. Third, a key difficulty in multi-class prediction is the data imbalance problem, which occurs when the number of instances in one class greatly outnumbers the number of instances in another class. Such data sets often produce a default classifier with a skewed boundary and thus reduced classification accuracy. SVM ensemble learning is one machine learning approach for coping with the above drawbacks. Ensemble learning is a method for improving the performance of classification and prediction algorithms. AdaBoost is one of the most widely used ensemble learning techniques. It constructs a composite classifier by sequentially training classifiers while increasing the weight on misclassified observations through iterations. Observations incorrectly predicted by previous classifiers are chosen more often than those correctly predicted; boosting thus attempts to produce new classifiers that better predict the examples for which the current ensemble performs poorly. In this way, it can reinforce the training of misclassified observations of the minority class. This paper proposes a multi-class Geometric Mean-based Boosting (MGM-Boost) algorithm to resolve the multi-class prediction problem. 
Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, it can perform the learning process considering geometric mean-based accuracy and errors across classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. 10-fold cross-validation was performed three times with different random seeds to ensure that the comparison among the three classifiers does not happen by chance. For each 10-fold cross-validation, the entire data set is first partitioned into ten equal-sized sets, and each set is then used in turn as the test set while the classifier trains on the other nine sets; the cross-validated folds are thus tested independently for each algorithm. Through these steps, we obtained results for the classifiers on each of the 30 experiments. In the comparison of arithmetic mean-based prediction accuracy, MGM-Boost (52.95%) shows higher accuracy than both AdaBoost (51.69%) and SVM (49.47%). MGM-Boost (28.12%) also shows higher accuracy than AdaBoost (24.65%) and SVM (15.42%) in terms of geometric mean-based prediction accuracy. A t-test is used to examine whether the performance of each classifier over the 30 folds is significantly different. The results indicate that the performance of MGM-Boost differs significantly from that of the AdaBoost and SVM classifiers at the 1% level. These results mean that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
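The gap between arithmetic and geometric mean-based accuracy that motivates MGM-Boost can be illustrated with a toy example. The functions below are a minimal sketch, not the paper's implementation, and the labels are invented:

```python
# Toy illustration of why MGM-Boost optimizes geometric mean-based accuracy:
# on imbalanced data, arithmetic accuracy can look good while a minority
# class is ignored entirely. Labels below are invented for the example.
from math import prod

def per_class_recall(y_true, y_pred):
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        idx = [i for i, y in enumerate(y_true) if y == c]
        recalls.append(sum(1 for i in idx if y_pred[i] == c) / len(idx))
    return recalls

def arithmetic_accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def geometric_mean_accuracy(y_true, y_pred):
    r = per_class_recall(y_true, y_pred)
    return prod(r) ** (1 / len(r))

# A degenerate classifier that always predicts the majority rating "A":
y_true = ["A"] * 8 + ["B"] * 2
y_pred = ["A"] * 10
print(arithmetic_accuracy(y_true, y_pred))      # 0.8 despite ignoring "B"
print(geometric_mean_accuracy(y_true, y_pred))  # 0.0: recall for "B" is zero
```

Because the geometric mean collapses to zero whenever any class recall is zero, a booster that optimizes it cannot settle on a default classifier that ignores the minority class.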

Aspect-Based Sentiment Analysis Using BERT: Developing Aspect Category Sentiment Classification Models (BERT를 활용한 속성기반 감성분석: 속성카테고리 감성분류 모델 개발)

  • Park, Hyun-jung;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.1-25
    • /
    • 2020
  • Sentiment Analysis (SA) is a Natural Language Processing (NLP) task that analyzes the sentiments consumers or the public feel about an arbitrary object from written texts. Aspect-Based Sentiment Analysis (ABSA) is a fine-grained analysis of the sentiments towards each aspect of an object. Since it has more practical value in business terms, ABSA is drawing attention from both academic and industrial organizations. For a review that says "The restaurant is expensive but the food is really fantastic", for example, general SA evaluates the overall sentiment towards the 'restaurant' as 'positive', while ABSA identifies the restaurant's 'price' aspect as 'negative' and its 'food' aspect as 'positive'. ABSA thus enables a more specific and effective marketing strategy. To perform ABSA, it is necessary to identify which aspect terms or aspect categories are included in the text and judge the sentiments towards them. Accordingly, there are four main areas in ABSA: aspect term extraction, aspect category detection, Aspect Term Sentiment Classification (ATSC), and Aspect Category Sentiment Classification (ACSC). ABSA is usually conducted by extracting aspect terms and then performing ATSC to analyze sentiments for the given aspect terms, or by extracting aspect categories and then performing ACSC to analyze sentiments for the given aspect categories. Here, an aspect category is expressed in one or more aspect terms, or indirectly inferred from other words. In the preceding example sentence, 'price' and 'food' are both aspect categories, and the aspect category 'food' is expressed by the aspect term 'food' included in the review. If the review sentence included 'pasta', 'steak', or 'grilled chicken special', these could all be aspect terms for the aspect category 'food'. An aspect category referred to by one or more specific aspect terms is called an explicit aspect. 
On the other hand, an aspect category like 'price', which has no specific aspect term but can be indirectly guessed from an emotional word such as 'expensive', is called an implicit aspect. So far, the term 'aspect category' has been used to avoid confusion with 'aspect term'; from now on, we consider 'aspect category' and 'aspect' to be the same concept and use the word 'aspect' for convenience. Note that ATSC analyzes the sentiment towards given aspect terms, so it deals only with explicit aspects, while ACSC treats not only explicit aspects but also implicit aspects. This study seeks answers to the following issues, ignored in previous studies, when applying the BERT pre-trained language model to ACSC, and derives superior ACSC models. First, is it more effective to reflect the output vectors of the tokens for aspect categories than to use only the final output vector of the [CLS] token as the classification vector? Second, is there any performance difference between the QA (Question Answering) and NLI (Natural Language Inference) types in the sentence-pair configuration of the input data? Third, is there any performance difference according to the order of the sentence containing the aspect category in the QA- or NLI-type sentence-pair configuration? To achieve these research objectives, we implemented 12 ACSC models and conducted experiments on 4 English benchmark datasets. As a result, ACSC models that outperform existing studies without expanding the training dataset were derived. In addition, it was found that it is more effective to reflect the output vector of the aspect category token than to use only the output vector of the [CLS] token as the classification vector. It was also found that QA-type input generally provides better performance than NLI, and that the order of the sentence containing the aspect category in the QA type is irrelevant to performance. 
There may be some differences depending on the characteristics of the dataset, but when using NLI-type sentence-pair input, placing the sentence containing the aspect category second seems to provide better performance. The new methodology for designing ACSC models used in this study could be similarly applied to other tasks such as ATSC.
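The QA- versus NLI-type sentence-pair configurations discussed above can be sketched as plain string templates. The exact question wording is our assumption for illustration; only the [CLS]/[SEP] pair layout follows the standard BERT input convention:

```python
# Sketch of the two sentence-pair input types compared above. The question
# template for the QA type is our assumption; only the [CLS]/[SEP] layout
# follows the standard BERT sentence-pair convention.

def qa_pair(review, aspect):
    # QA type: the second sentence is an explicit question about the aspect
    return (review, f"what do you think of the {aspect}?")

def nli_pair(review, aspect):
    # NLI type: the second sentence is a pseudo-hypothesis naming the aspect
    return (review, aspect)

def to_bert_input(pair):
    # BERT consumes a sentence pair as: [CLS] sent1 [SEP] sent2 [SEP]
    first, second = pair
    return f"[CLS] {first} [SEP] {second} [SEP]"

review = "The restaurant is expensive but the food is really fantastic"
qa_input = to_bert_input(qa_pair(review, "price"))
nli_input = to_bert_input(nli_pair(review, "price"))
print(qa_input)
print(nli_input)
```

Swapping the order of the two sentences in either template yields the sentence-order variants whose performance the study compares.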

Shopping Value, Shopping Goal and WOM - Focused on Electronic-goods Buyers (쇼핑 가치 추구 성향에 따른 쇼핑 목표와 공유 의도 차이에 관한 연구 - 전자제품 구매고객을 중심으로)

  • Park, Kyoung-Won;Park, Ju-Young
    • Journal of Global Scholars of Marketing Science
    • /
    • v.19 no.2
    • /
    • pp.68-79
    • /
    • 2009
  • The interplay between hedonic and utilitarian attributes has assumed special significance in recent years; it has been proposed that consumption offerings should be viewed as experiences that stimulate both cognitions and feelings rather than as mere products or services. This research builds on previous work on hedonic versus utilitarian benefits, regulatory focus theory, and customer satisfaction to address two questions: (1) Is the shopping goal at the point of purchase different from the shopping value? and (2) Is the customer loyalty after use different from the shopping value and shopping goal? We surveyed 345 people who had bought electronic goods within the previous six months. This research dealt with shopping value, which consists of two types, hedonic and utilitarian. Those who pursue hedonic shopping value may prefer the pleasure of the purchasing experience to the product itself; they tend to prefer the atmosphere and arousal of the shopping experience. Consistent with previous research, we use the term "hedonic" to refer to aesthetic, experiential, and enjoyment-related value. On the contrary, those who pursue utilitarian shopping value may prefer reasonable, more functional buying. Consistent with previous research, we use the term "utilitarian" to refer to the functional, instrumental, and practical value of consumption offerings. Holbrook (1999) notes that consumer value is an experience that results from the consumption of such benefits. In the context of cell phones, for example, a phone's battery life and sound volume are utilitarian benefits, whereas the aesthetic appeal of its shape and color is a hedonic benefit. Likewise, in the case of a car, fuel economy and safety are utilitarian benefits, whereas a sunroof and a luxurious interior are hedonic benefits. The shopping goals consist of the promotion-focus goal and the prevention-focus goal, based on regulatory focus theory. 
The promotion focus is characterized by a focus on the ideal self, oriented to wishes and vision. Promotion-focused individuals tend to be more risk-taking and are more sensitive to hope and achievement. On the contrary, the prevention focus is characterized by a focus on responsibilities, oriented to safety. Prevention-focused individuals tend to be more risk-avoiding. We wanted to test the relations among shopping value, shopping goal, and customer loyalty. Customers show positive or negative feelings by comparing the outcome with the expectation level they had at the point of purchase. If the outcome exceeds the expectation, customers may feel positive feelings such as delight or satisfaction, want to share those feelings with other people, and want to buy the product again in the future. There is converging evidence that the types of goals consumers expect to be fulfilled by the utilitarian dimension of a product differ from those they seek from the hedonic dimension (Chernev 2004). Specifically, whereas consumers expect the fulfillment of prevention goals on the utilitarian dimension, they expect the fulfillment of promotion goals on the hedonic dimension (Chernev 2004; Chitturi, Raghunathan, and Mahajan 2007; Higgins 1997, 2001). According to regulatory focus theory, prevention goals are those that ought to be met. Fulfillment of prevention goals in the context of product consumption eliminates or significantly reduces the probability of a painful experience, thus making consumers experience emotions such as confidence and security. On the contrary, promotion goals are those that a person aspires to meet, such as "looking cool" or "being sophisticated." 
Fulfillment of promotion goals in the context of product consumption significantly increases the probability of a pleasurable experience, enabling consumers to experience the corresponding emotions. The proposed conceptual framework captures the relationships between hedonic versus utilitarian shopping values and promotion versus prevention shopping goals, respectively. Analyzing the consequences of the fulfillment and frustration of utilitarian and hedonic value is theoretically worthwhile; it is also substantively relevant because it helps predict post-consumption behavior oriented toward promotion versus prevention shopping goals. Our primary goal is to understand how post-consumption feelings influence customer loyalty, measured as word of mouth (Jacoby and Chestnut 1978). The results show that utilitarian shopping value positively influences both the promotion and the prevention goal, with a stronger influence on the prevention goal. Hedonic shopping value, by contrast, influences only the promotion-focus goal. Additionally, both the promotion and the prevention goal show a positive relation with customer loyalty, but the relation between the promotion goal and customer loyalty is much stronger; the prevention-focus goal relates to customer loyalty at a lower level than the promotion goal does. This can be explained by framing: promotion-focused individuals tend to frame others' compliments as a 'gain versus non-gain' situation and are therefore motivated to share their feelings with other people eagerly, whereas prevention-focused individuals are more sensitive to the 'loss versus non-loss' situation. These results are consistent with previous research.
There is a conceptual parallel between necessities-needs-utilitarian benefits and luxuries-wants-hedonic benefits (Chernev 2004; Chitturi, Raghunathan, and Mahajan 2007; Higgins 1997; Kivetz and Simonson 2002b). In addition, Maslow's hierarchy of needs and the precedence principle rank luxuries-wants-hedonic benefits higher than necessities-needs-utilitarian benefits. Chitturi, Raghunathan, and Mahajan (2007) show that consumers focus more on the utilitarian benefits than on the hedonic benefits of a product until their minimum expectation of fulfilling prevention goals is met. Furthermore, a utilitarian benefit is a promise of a certain level of functionality by the manufacturer or the retailer; when the promise is not fulfilled, customers blame the retailer and/or the manufacturer, and when negative feelings are attributable to an entity, customers feel angry. In the case of a hedonic benefit, however, the customer, not the manufacturer, determines at the time of purchase whether the product is stylish and attractive. Under such circumstances, customers are more likely to blame themselves than the manufacturer if their friends do not find the product stylish and attractive. Therefore, not meeting minimum utilitarian expectations of functionality generates much more intense negative feelings, such as anger, rather than less intense feelings such as disappointment or dissatisfaction. An additional multi-group analysis in this research shows the same result: dissatisfied customers with a prevention-focus goal show a stronger relation with word of mouth (WOM) than satisfied customers do. The findings in this article could have significant implications for personal selling, increasing the effectiveness and efficiency of sales by informing the sales presentation strategy for different customers. Hedonic customers are apt to show more interest in the promotion goal.
Therefore, for hedonic customers it may work to emphasize the design, style, or new technology of the product; for utilitarian customers, it may work to emphasize price competitiveness. On the basis of our studies, we demonstrated a correspondence among hedonic versus utilitarian values, promotion versus prevention goals, and WOM. We also found evidence of a moderating effect of post-use satisfaction on the relation between the prevention goal and WOM: even though the prevention goal relates to WOM at a low level overall, dissatisfied customers show a stronger relation to WOM, and the relation differs significantly between satisfied and dissatisfied customers. In addition, improving the promotion emotions of cheerfulness and excitement and the prevention emotions of confidence and security will further improve customer loyalty. Related future research could examine whether hedonic versus utilitarian values and promotion versus prevention goals improve customer loyalty for services as well. Under budget and time constraints, designers and managers are often compelled to choose among various attributes. With no budget or time constraints, perhaps the best solution is to maximize both the hedonic and the utilitarian dimensions of benefits; in practice, however, they have to make trade-offs among attributes. Designers and managers should keep in mind that without satisfying the hedonic benefits of the product, it may be hard to lead customers to customer loyalty.
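The moderation result reported above (post-use satisfaction weakening the prevention-goal-to-WOM link) can be illustrated as a moderated regression with an interaction term. The sketch below uses simulated data only, not the authors' survey data; every variable name and coefficient is a hypothetical assumption chosen to mirror the pattern the abstract describes.

```python
import numpy as np

# Hypothetical illustration (simulated, not the authors' data): post-use
# satisfaction moderates the prevention-goal -> WOM relation. The negative
# interaction coefficient means the prevention-goal slope is steeper for
# dissatisfied customers, matching the pattern reported in the abstract.
rng = np.random.default_rng(0)
n = 345  # same sample size as the survey; the data themselves are simulated

prevention = rng.normal(size=n)           # prevention-focus goal strength
satisfied = rng.integers(0, 2, size=n)    # 1 = satisfied after use, 0 = dissatisfied
noise = rng.normal(scale=0.5, size=n)

# Data-generating model: prevention goal matters more when satisfaction is low
wom = 1.0 + 0.8 * prevention + 0.3 * satisfied \
      - 0.6 * prevention * satisfied + noise

# Ordinary least squares with an interaction term
X = np.column_stack([np.ones(n), prevention, satisfied, prevention * satisfied])
beta, *_ = np.linalg.lstsq(X, wom, rcond=None)

slope_dissatisfied = beta[1]             # prevention -> WOM when satisfied = 0
slope_satisfied = beta[1] + beta[3]      # prevention -> WOM when satisfied = 1
```

A simple-slopes comparison of `slope_dissatisfied` and `slope_satisfied` then recovers the multi-group finding: the dissatisfied group shows the stronger prevention-goal-to-WOM slope.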
