
The Effects of Self-regulatory Resources and Construal Levels on the Choices of Zero-cost Products (자아조절자원 및 해석수준이 공짜대안 선택에 미치는 영향)

  • Lee, Jinyong;Im, Seoung Ah
    • Asia Marketing Journal
    • /
    • v.13 no.4
    • /
    • pp.55-76
    • /
    • 2012
  • Most people prefer to choose zero-cost products they may get without paying any money. The 'zero-cost effect' can be explained with a 'zero-cost model' in which consumers attach special values to zero-cost products in a different way from general economic models (Shampanier, Mazar, and Ariely 2007). If 2 different products at the regular prices of ₩200 and ₩400 simultaneously offer ₩200 discounts, the prices will be changed to ₩0 and ₩200, respectively. In spite of the same price gap between the two products after the ₩200 discounts, people are much more likely to select the free alternative than the same product at the price of ₩200. Although prior studies have focused on the 'zero-cost effect' in isolation from other factors, this study investigates the moderating effects of self-regulatory resources and construal levels on the selection of free products. Self-regulatory resources induce people to control or regulate their behavior. However, since self-regulatory resources are limited, they are easily depleted when exerted (Muraven, Tice, and Baumeister 1998). Without these resources, consumers tend to become less sensitive to price changes and to spend money more extravagantly (Vohs and Faber 2007). Under this condition, they are also likely to invest less effort in information processing and to make more intuitive decisions (Pocheptsova, Amir, Dhar, and Baumeister 2009). Therefore, context effects such as price changes and zero-cost effects are less likely under resource depletion. In addition, construal levels have profound effects on the ways of information processing (Trope and Liberman 2003, 2010). At a high construal level, people tend to attune their minds to core features and desirability aspects, whereas, at a low construal level, they are more likely to process information based on secondary features and feasibility aspects (Khan, Zhu, and Kalra 2010). 
A perceived value of a product is more related to desirability, whereas a zero cost or a price level is more associated with feasibility. Thus, context effects or reliance on feasibility (for instance, the zero-cost effect) will be diminished at a high construal level while those effects may remain at a low construal level. When people make decisions, these 2 factors can influence the magnitude of the 'zero-cost effect'. This study ran two experiments to investigate the effects of self-regulatory resources and construal levels on the selection of a free product. Kisses and Ferrero-Rocher, which were adopted in the prior study (Shampanier et al. 2007), were also used as alternatives in Experiments 1 and 2. We designed Experiment 1 to test whether self-regulatory resource depletion would moderate the zero-cost effect. The level of self-regulatory resources was manipulated with two different tasks, a Sudoku task in the depletion condition and a task of drawing diagrams in the non-depletion condition. Upon completion of the manipulation task, subjects were randomly assigned to either a decision set with a zero-cost option (i.e., Kisses ₩0 and Ferrero-Rocher ₩200) or a set without a zero-cost option (i.e., Kisses ₩200 and Ferrero-Rocher ₩400). The pairs of alternatives in the two decision sets have the same price gap of ₩200 between a low-priced Kisses and a high-priced Ferrero-Rocher. Subjects in the no-depletion condition selected Kisses more often (71.88%) over Ferrero-Rocher when Kisses was free than when it was priced at ₩200 (34.88%). However, the zero-cost effect disappeared when people did not have self-regulatory resources. Experiment 2 was conducted to investigate whether construal levels influence the magnitude of the 'zero-cost effect'. To manipulate construal levels, 4 different 'why (in the high construal level condition)' or 'how (in the low construal level condition)' questions about health management were asked. 
They were presented with 4 boxes connected with downward arrows. In the box at the top, there was one question, 'Why do I maintain good physical health?' or 'How do I maintain good physical health?' Subjects inserted a response to the question of why or how they would maintain good physical health. Similar tasks were repeated for the 2nd, 3rd, and 4th responses. After the manipulation task, subjects were randomly assigned either to a decision set with a zero-cost option or to a set without it, as in Experiment 1. When a low construal level was primed with 'how', subjects chose free Kisses (60.66%) over Ferrero-Rocher more often than they chose ₩200 Kisses (42.19%) over ₩400 Ferrero-Rocher. In contrast, the zero-cost effect was no longer observed when a high construal level was primed with 'why'.
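Whether a choice share depends on the presence of a free option can be checked with a standard chi-square test of independence on the 2x2 choice table. The abstract does not report the group sizes or the authors' exact test, so the cell counts below are illustrative, chosen only to be consistent with the reported shares (71.88% ≈ 23/32 in the zero-cost set, 34.88% ≈ 15/43 in the paid set):

```python
# Pearson chi-square test of independence for a 2x2 choice table (pure stdlib).
# Cell counts are hypothetical, matching the reported percentages.
def chi_square_2x2(table):
    row = [sum(r) for r in table]
    col = [sum(c) for c in zip(*table)]
    n = sum(row)
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / n
            chi2 += (table[i][j] - expected) ** 2 / expected
    return chi2

table = [[23, 9],    # zero-cost set: chose Kisses (free) vs Ferrero-Rocher (W200)
         [15, 28]]   # paid set:      chose Kisses (W200) vs Ferrero-Rocher (W400)
print(f"chi2 = {chi_square_2x2(table):.2f}")  # -> chi2 = 10.04
```

A statistic above 3.84, the 5% critical value for 1 degree of freedom, indicates that the choice share differs between the zero-cost and paid sets, which is the pattern the zero-cost effect predicts.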

An Exploratory Study on Customers' Individual Factors on Waiting Experience (고객의 개인적 요소가 대기시간 경험에 미치는 영향에 대한 탐색적 연구)

  • Kim, Juyoung;Yoo, Bomi
    • Asia Marketing Journal
    • /
    • v.12 no.1
    • /
    • pp.1-30
    • /
    • 2010
  • Customers often experience waiting when buying services. Managing customers' waiting time is important for service providers, since customers who are dissatisfied with waiting eventually leave the service place. Many studies have been done to solve the waiting time problem and improve customers' waiting experience. Hui and Tse (1996) identify evaluation factors in customers' behavioral mechanism as customers wait. That is, customers experience perceived waiting time, waiting acceptability, and an emotional response to the wait. Since customers evaluate the wait using these factors, service providers should manage them in order to minimize customers' dissatisfaction. Therefore, this study explores how the evaluation factors of waiting are influenced by customers' situational and experiential characteristics, which include customer loyalty, transaction importance for the customer, and waiting expectation level. These situational and experiential characteristics are usually given to service providers, who cannot control them at the point of waiting. The major findings derived from two exploratory studies can be summarized as follows. First, according to the results of Study 1 (restaurant setting), customers' transaction importance has the greatest positive influence on waiting experience. The results show that restaurant service providers could prevent customer defection effectively through strategies that raise customers' transaction importance, like giving special coupons for important events. Second, in Study 2 (amusement park setting), customer loyalty has a large positive impact on waiting experience as well as transaction importance. These results show that service providers could minimize customers' dissatisfaction using strategies that raise customer loyalty continuously. These results also show that customers perceive the waiting experience differently according to the characteristics of the service place and the service itself. 
Therefore, service providers should grasp the unique situational and experiential characteristics of their customers for each service and service place. This could provide an effective strategy for waiting time management. Third, the study finds that transaction importance and waiting expectation level directly influence customers' waiting experience as independent variables, while existing studies treated them as moderators. Customer loyalty, which has not been incorporated in previous waiting time research, is found to affect waiting experience. This suggests that a marketing strategy which builds up customer loyalty over a long period of time is also quite effective, compared to short-term tactics to help customers endure waiting time. Fourth, this study reveals the importance of actual waiting time along with perceived waiting time. So far most studies have focused only on customers' perceived waiting time. In particular, this study incorporates the concept of a patience limit on waiting time to investigate the effect of actual waiting time. The results show various responses to the wait depending on whether actual waiting time exceeded an individual's patience limit, even though customers waited about the same period of time. Finally, using a structural equation model, the conceptual path between behavioral responses is verified. As a customer perceives waiting time, she decides whether she can endure it or not, and then her emotional response occurs. These results are somewhat different from Hui and Tse's (1996) study. The study also includes theoretical contributions as well as practical implications.

Soil Physical Properties of Arable Land by Land Use Across the Country (토지이용별 전국 농경지 토양물리적 특성)

  • Cho, H.R.;Zhang, Y.S.;Han, K.H.;Cho, H.J.;Ryu, J.H.;Jung, K.Y.;Cho, K.R.;Ro, A.S.;Lim, S.J.;Choi, S.C.;Lee, J.I.;Lee, W.K.;Ahn, B.K.;Kim, B.H.;Kim, C.Y.;Park, J.H.;Hyun, S.H.
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.45 no.3
    • /
    • pp.344-352
    • /
    • 2012
  • Soil physical properties determine soil quality in terms of root growth, infiltration, and water- and nutrient-holding capacity. Although the monitoring of soil physical properties is important for sustainable agricultural production, few studies have addressed it. This study was conducted to investigate the soil physical properties of arable land according to land use across the country. Plastic film house soils, upland soils, orchard soils, and paddy soils were investigated from 2008 to 2011, including depth of topsoil, bulk density, hardness, soil texture, and organic matter. The average physical properties were as follows. In plastic film house soils, the depth of topsoil was 16.2 cm; for the topsoils, hardness was 9.0 mm, bulk density was 1.09 Mg $m^{-3}$, and organic matter content was 29.0 g $kg^{-1}$; for the subsoils, hardness was 19.8 mm, bulk density was 1.32 Mg $m^{-3}$, and organic matter content was 29.5 g $kg^{-1}$. In upland soils, the depth of topsoil was 13.3 cm; for the topsoils, hardness was 11.3 mm, bulk density was 1.33 Mg $m^{-3}$, and organic matter content was 20.6 g $kg^{-1}$; for the subsoils, hardness was 18.8 mm, bulk density was 1.52 Mg $m^{-3}$, and organic matter content was 13.0 g $kg^{-1}$. Classified by crop type, soil physical property values were high for deep-rooted and short-rooted vegetable soils, but low for leafy vegetable soils. In orchard soils, the depth of topsoil was 15.4 cm; for the topsoils, hardness was 16.1 mm, bulk density was 1.25 Mg $m^{-3}$, and organic matter content was 28.5 g $kg^{-1}$; for the subsoils, hardness was 19.8 mm, bulk density was 1.41 Mg $m^{-3}$, and organic matter content was 15.9 g $kg^{-1}$. In paddy soils, the depth of topsoil was 17.5 cm; for the topsoils, hardness was 15.3 mm, bulk density was 1.22 Mg $m^{-3}$, and organic matter content was 23.5 g $kg^{-1}$. 
For the subsoils, hardness was 20.3 mm, bulk density was 1.47 Mg $m^{-3}$, and organic matter content was 17.5 g $kg^{-1}$. The average bulk density increased in the order plastic film house soils < paddy soils < orchard soils < upland soils, according to land use. The bulk density of topsoils was mainly distributed in the range 1.0~1.25 Mg $m^{-3}$. The bulk density of subsoils was mostly distributed above 1.50, in 1.35~1.50, and in 1.0~1.50 Mg $m^{-3}$ for upland and paddy soils, orchard soils, and plastic film house soils, respectively. Classified by soil textural family, bulk density was lower in clayey soils and higher in fine silty and sandy soils. Soil physical properties and the distribution of topography differed by land use and crop type. Therefore, the types of land use and crop need to be considered for appropriate soil management.
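The land-use ordering of bulk density stated above follows directly from the reported topsoil averages, which can be checked with a few lines of Python (values taken from the abstract):

```python
# Reported average topsoil bulk densities (Mg m^-3) by land use.
topsoil_bulk_density = {
    "plastic film house": 1.09,
    "upland": 1.33,
    "orchard": 1.25,
    "paddy": 1.22,
}
# Sort land uses from loosest to densest topsoil.
order = sorted(topsoil_bulk_density, key=topsoil_bulk_density.get)
print(" < ".join(order))
# -> plastic film house < paddy < orchard < upland, matching the stated ordering
```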

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.107-122
    • /
    • 2017
  • Volatility in stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing, and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-variant characteristics embedded in stock market return volatility. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) as the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and volatility clustering phenomenon appearing in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. But since the 1987 Black Monday crash, stock market prices have become very complex and show a lot of noise. Recent studies have started to apply artificial intelligence approaches to estimating the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH process and compares it with the MLE-based GARCH process in estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation process are linear, polynomial, and radial. We analyzed the suggested models with the KOSPI 200 Index. This index consists of 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, giving 1,487 sample observations. We used 1,187 days to train the suggested GARCH models, and the remaining 300 days were used as testing data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecasted KOSPI 200 Index return volatility, and the statistical metric MSE shows better results for the asymmetric GARCH models such as E-GARCH or GJR-GARCH. This is consistent with the documented non-normal return distribution characteristics of fat tails and leptokurtosis. 
Compared with the MLE estimation process, SVR-based GARCH models outperform the MLE methodology in KOSPI 200 Index return volatility forecasting. The polynomial kernel function shows exceptionally low forecasting accuracy. We suggest an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility results. The IVTS entry rules are as follows: if tomorrow's forecasted volatility increases, buy volatility today; if tomorrow's forecasted volatility decreases, sell volatility today; if the forecasted volatility direction does not change, hold the existing buy or sell position. IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic because we cannot trade historical volatility values themselves, but our simulation results are meaningful since the Korea Exchange introduced a volatility futures contract, tradable since November 2014. The trading systems with SVR-based GARCH models show higher returns than MLE-based GARCH in the testing period. The profitable-trade percentages of MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of SVR-based GARCH IVTS models range from 51.8% to 59.7%. MLE-based symmetric S-GARCH shows a +150.2% return and SVR-based symmetric S-GARCH shows a +526.4% return. MLE-based asymmetric E-GARCH shows a -72% return and SVR-based asymmetric E-GARCH shows a +245.6% return. MLE-based asymmetric GJR-GARCH shows a -98.7% return and SVR-based asymmetric GJR-GARCH shows a +126.3% return. The linear kernel function shows higher trading returns than the radial kernel function. The best performance of SVR-based IVTS is +526.4% and that of MLE-based IVTS is +150.2%. SVR-based GARCH IVTS shows higher trading frequency. This study has some limitations. Our models are solely based on SVR; other artificial intelligence models should be explored in search of better performance. We do not consider costs incurred in the trading process, including brokerage commissions and slippage costs. 
IVTS trading performance is unrealistic since we use historical volatility values as trading objects. Exact forecasting of stock market volatility is essential in real trading as well as in asset pricing models. Further studies on other machine learning-based GARCH models can give better information for stock market investors.
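The GARCH(1,1) variance recursion underlying both estimation approaches, and the IVTS direction rule, can be sketched as follows. The parameter values and simulated returns below are illustrative, not the paper's KOSPI 200 estimates; in the SVR-based variant, the same recursion inputs (lagged squared return and lagged variance) become regression features and the next-period variance the target, fitted with an SVR kernel instead of maximum likelihood:

```python
import math
import random

# GARCH(1,1): sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}.
def garch_variance(returns, omega=1e-5, alpha=0.08, beta=0.90):
    var = [omega / (1 - alpha - beta)]      # start at the unconditional variance
    for r in returns[:-1]:
        var.append(omega + alpha * r * r + beta * var[-1])
    return var

random.seed(1)
returns = [random.gauss(0, 0.01) for _ in range(500)]   # simulated daily returns
var = garch_variance(returns)

# IVTS direction rule sketch: long volatility if the forecast rises, short if it
# falls (the 'hold' case is omitted for brevity).
positions = [1 if v2 > v1 else -1 for v1, v2 in zip(var, var[1:])]
print(f"mean conditional vol = {math.sqrt(sum(var) / len(var)):.4f}")
```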

A Study on the Long-term Hemodialysis Patient's Hypotension and Prevention of Blood Loss in the Coil during Hemodialysis (장기혈액투석환자의 투석중 혈압하강과 Coil내 혈액손실 방지를 위한 기초조사)

  • 박순옥
    • Journal of Korean Academy of Nursing
    • /
    • v.11 no.2
    • /
    • pp.83-104
    • /
    • 1981
  • Hemodialysis is an essential treatment for the chronic renal failure patient's long-term care and for patient management before and after kidney transplantation. It sustains the life of the end-stage renal failure patient who does not recover despite a strict regimen, and furthermore it is essential for maintaining everyday life. Nursing implementation in hemodialysis may have a significant effect on the patient's life. The purpose of this study was to obtain basic data to solve the hypotension problem encountered by patients and the blood loss problem affecting hemodialysis patients' anemic states caused by incomplete rinsing of blood from the coil throughout the hemodialysis process. The subjects for this study were 44 patients treated with hemodialysis 691 times in the hemodialysis unit. The data were collected at Gangnam St. Mary's Hospital from January 1, 1981 to April 30, 1981 using the direct observation method, clinical laboratory tests for laboratory data, and body weight, and were analysed with the chi-square test, t-test, and analysis of variance. The results obtained are as follows. A. On clinical laboratory data and other data by dialysis procedure: The average initial body weight was 2.37±0.97 kg, and the average body weight after every dialysis was 2.33±0.9 kg. The subjects' average hemoglobin was 7.05±1.93 gm/dl and average hematocrit was 20.84±3.82%. Average initial blood pressure was 174.03±23.75 mmHg and after dialysis was 158.45±25.08 mmHg. The subjects' average blood loss due to blood samples for laboratory data was 32.78±13.49 cc/month. The subjects' average blood replacement was 1.31±0.88 pint/month per patient. B. On the hypotensive state and the coping approaches: the occurrence rate of hypotension was 28.08%, 194 cases among 691 dialyses. 1. 
In degrees of initial blood pressure, the largest share (36.6%) was in the 150-179 mmHg group, and in degrees of hypotension during dialysis, the largest share (28.9%) was in the 40-50 mmHg group. In particular, if the initial blood pressure was under 180 mmHg, clinical symptoms appeared in 59.8% of the group with "above 20 mmHg of hypotension". If the initial blood pressure was above 180 mmHg, clinical symptoms appeared in 34.2% of the group with "above 40 mmHg of hypotension". These tendencies showed that the higher the initial blood pressure, the stronger the degree of hypotension, and the results showed statistically significant differences (P=0.0000). 2. Of the occurring times of hypotension, "after 3 hrs" accounted for 29.4%; the longer the dialyzing procedure, the stronger the degree of hypotension, and these showed statistically significant differences (P=0.0142). 3. Of the distribution of symptoms observed, sweating and flushing were 43.3%, and yawning and dizziness 37.6%. These were accordingly the important symptoms implying hypotension during hemodialysis. The stages of procedures in coping with hypotension were as follows: 45.9% were recovered by reducing the blood flow rate from 200 cc/min to 100 cc/min and by reducing venous pressure to 0-30 mmHg; 33.51% were recovered by adjusting the blood flow rate and by infusion of 300 cc of 0.9% normal saline; 4.1% were recovered by infusion of over 300 cc of 0.9% normal saline; 3.6% by norepinephrine, 5.7% by blood transfusion, and 7.2% by albumin. The stronger the degree of symptoms observed in hypotension, the more treatments were required for recovery, and these showed statistically significant differences (P=0.0000). C. On the effects of the changes of blood pressure and osmolality by albumin and hemofiltration: 1. Changes of blood pressure in the group which didn't require treatment for hypotension and the group that required treatment averaged 21.5 mmHg and 44.82 mmHg. 
So the difference was bigger in the latter than in the former, and this showed a statistically significant difference (P=0.002). On the changes of osmolality, the averages were 12.65 mOsm and 17.57 mOsm; the difference was bigger in the latter than in the former but did not show statistical significance (P=0.323). 2. Changes of blood pressure in the group infused with albumin and in the group that didn't require treatment for hypotension averaged 30 mmHg and 21.5 mmHg, showing no statistically significant difference (P=0.503). Changes of osmolality averaged 5.63 mOsm and 12.65 mOsm; the difference was smaller in the former but there was no statistical significance (P=0.287). Changes of blood pressure in the group infused with albumin and in the group that required treatment for hypotension averaged 30 mmHg and 44.82 mmHg; the difference was smaller in the former but there was no significant difference (P=0.061). Changes of osmolality averaged 8.63 mOsm and 17.59 mOsm; the difference was smaller in the former but did not show statistical significance (P=0.093). 3. Changes of blood pressure in the group implemented with hemofiltration and in the group that didn't require treatment for hypotension averaged 22 mmHg and 21.5 mmHg, showing no statistically significant difference (P=0.320). Changes of osmolality averaged 0.4 mOsm and 12.65 mOsm; the difference was smaller in the former but did not show statistical significance (P=0.199). Changes of blood pressure in the group implemented with hemofiltration and in the group that required treatment for hypotension averaged 22 mmHg and 44.82 mmHg; the difference was smaller in the former and showed a statistically significant difference (P=0.035). Changes of osmolality averaged 0.4 mOsm and 17.59 mOsm; the difference was smaller in the former but did not show statistical significance (P=0.086). D. 
On the changes of body weight and blood pressure between the hemofiltration and hemodialysis groups: 1. Changes of body weight in the groups implemented with hemofiltration and hemodialysis averaged 3.340 and 3.320, showing no statistically significant difference (P=0.185), but the comparison of the standard deviations of the body weight differences was statistically significant (P=0.0000). Changes of blood pressure in the groups implemented with hemofiltration and hemodialysis averaged 17.81 mmHg and 19.47 mmHg, showing no statistically significant difference (P=0.119), but the comparison of the standard deviations of the blood pressure differences was statistically significant (P=0.0000). E. On the blood infusion method in the coil after hemodialysis and methods for reducing residual blood loss in the coil: 1. On comparing and analysing the Hct of residual blood in the coil by factors influencing the blood infusion method: infusion of 200 cc of saline reduced residual blood in the coil in the quantitative comparison of saline 0 cc, 50 cc, 100 cc, and 200 cc, and the differences showed statistical significance (P < 0.001). The shaking-coil method reduced residual blood in the coil in comparison with the non-shaking method, a statistically significant difference (P < 0.05). Adjusting the pressure in the coil to 0 mmHg reduced residual blood in the coil in comparison with 200 mmHg, a statistically significant difference (P < 0.001). 2. Comparing the blood infusion methods divided into 10 methods in the coil with respect to every factor, there was seldom a difference in the group choosing 100 cc saline infusion with the coil at 0 mmHg. The measured quantity of blood loss averaged 13.49 cc. 
The shaking-coil method with 50 cc of saline infusion while adjusting the pressure in the coil to 0 mmHg was the most effective in reducing residual blood. The measured quantity of blood loss averaged 15.18 cc.
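The quantitative comparison of saline rinse volumes (0, 50, 100, 200 cc) is the kind of contrast the reported analysis of variance addresses. Below is a minimal one-way ANOVA F statistic in pure Python; the Hct values are entirely hypothetical (generated with a declining mean per rinse volume), not the study's data:

```python
import random

# One-way ANOVA F statistic: between-group mean square / within-group mean square.
def one_way_anova_F(groups):
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_b, df_w = len(groups) - 1, n - len(groups)
    return (ss_between / df_b) / (ss_within / df_w)

random.seed(2)
# Hypothetical mean residual-blood Hct by saline rinse volume (cc).
means = {0: 12.0, 50: 9.0, 100: 6.0, 200: 3.0}
groups = [[random.gauss(m, 1.5) for _ in range(20)] for m in means.values()]
print(f"F = {one_way_anova_F(groups):.1f}")   # a large F means the volumes differ
```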

Information Privacy Concern in Context-Aware Personalized Services: Results of a Delphi Study

  • Lee, Yon-Nim;Kwon, Oh-Byung
    • Asia pacific journal of information systems
    • /
    • v.20 no.2
    • /
    • pp.63-86
    • /
    • 2010
  • Personalized services directly and indirectly acquire personal data, in part, to provide customers with higher-value services that are specifically context-relevant (such as place and time). Information technologies continue to mature and develop, providing greatly improved performance. Sensory networks and intelligent software can now obtain context data, and that is the cornerstone for providing personalized, context-specific services. Yet the danger of excessive exposure of personal information is increasing, because the data retrieved by the sensors usually contain privacy-sensitive information. Various technical characteristics of context-aware applications have troubling implications for information privacy. In parallel with the increasing use of context for service personalization, information privacy concerns have also increased, such as concern over the unrestricted availability of context information. These privacy concerns are consistently regarded as a critical issue facing the success of context-aware personalized services. The entire field of information privacy is growing as an important area of research, with many new definitions and terminologies, because of the need for a better understanding of information privacy concepts. In particular, the factors of information privacy should be revised according to the characteristics of new technologies. However, previous information privacy factors for context-aware applications have at least two shortcomings. First, there has been little overview of the technology characteristics of context-aware computing. Existing studies have focused only on a small subset of the technical characteristics of context-aware computing. Therefore, there has not been a mutually exclusive set of factors that uniquely and completely describes information privacy in context-aware applications. 
Second, user surveys have been widely used to identify factors of information privacy in most studies, despite the limitations of users' knowledge of and experience with context-aware computing technology. To date, since context-aware services have not yet been widely deployed on a commercial scale, only very few people have prior experience with context-aware personalized services. It is difficult to build users' knowledge about context-aware technology even by increasing their understanding in various ways: scenarios, pictures, flash animation, etc. Hence, conducting a survey on the assumption that the participants have sufficient experience or understanding of the technologies shown in the survey may not be valid. Moreover, some surveys are based solely on simplifying and hence unrealistic assumptions (e.g., they consider only location information as context data). A better understanding of information privacy concern in context-aware personalized services is therefore needed. Hence, the purpose of this paper is to identify a generic set of factors for elemental information privacy concern in context-aware personalized services and to develop a rank-order list of information privacy concern factors. We consider overall technology characteristics to establish a mutually exclusive set of factors. A Delphi survey, a rigorous data collection method, was deployed to obtain reliable opinions from experts and to produce a rank-order list. It therefore lends itself well to obtaining a set of universal factors of information privacy concern and their priority. An international panel of researchers and practitioners with expertise in the privacy and context-aware system fields was involved in our research. The Delphi rounds faithfully followed the procedure for a Delphi study proposed by Okoli and Pawlowski. 
This involved three general rounds: (1) brainstorming for important factors; (2) narrowing down the original list to the most important ones; and (3) ranking the list of important factors. For this round only, experts were treated as individuals, not panels. Adapting Okoli and Pawlowski, we outlined the process of administering the study and performed three rounds. In the first and second rounds of the Delphi questionnaire, we gathered a set of exclusive factors for information privacy concern in context-aware personalized services. In the first round, the respondents were asked to provide at least five main factors for the most appropriate understanding of information privacy concern. To do so, some of the main factors found in the literature were presented to the participants. The second round of the questionnaire discussed the main factors provided in the first round, fleshed out with relevant sub-factors. Respondents were then requested to evaluate each sub-factor's suitability against the corresponding main factors to determine the final sub-factors from the candidates. The sub-factors were found from the literature survey. The final factors were those selected by over 50% of the experts. In the third round, a list of factors with corresponding questions was provided, and the respondents were requested to assess the importance of each main factor and its corresponding sub-factors. Finally, we calculated the mean rank of each item to produce the final result. While analyzing the data, we focused on group consensus rather than individual insistence. To do so, a concordance analysis, which measures the consistency of the experts' responses over successive rounds of the Delphi, was adopted during the survey process. As a result, experts reported that context data collection and a highly identifiable level of identical data are the most important main factor and sub-factor, respectively. 
Additional important sub-factors included the diverse types of context data collected, tracking and recording functionalities, and embedded and disappearing sensor devices. The average score of each factor is very useful for future context-aware personalized service development from the viewpoint of information privacy. The final factors differ from those proposed in other studies in the following ways. First, the concern factors differ from existing studies, which are based on privacy issues that may occur during the lifecycle of acquired user information. Our study helped to clarify these sometimes vague issues by determining which privacy concern issues are viable based on the specific technical characteristics of context-aware personalized services. Since a context-aware service differs in its technical characteristics from other services, we selected the specific characteristics with a higher potential to increase users' privacy concerns. Secondly, this study considered privacy issues in terms of service delivery and display, which were almost overlooked in existing studies, by introducing IPOS as the factor division. Lastly, for each factor, it correlated the level of importance with professionals' opinions as to what extent users have privacy concerns. A traditional questionnaire method was not chosen because of users' absolute lack of understanding and experience with the new technology of context-aware personalized services. For understanding users' privacy concerns, the professionals in the Delphi questionnaire process selected context data collection, tracking and recording, and sensory networks as the most important factors among the technological characteristics of context-aware personalized services. 
In the creation of a context-aware personalized service, this study demonstrates the importance and relevance of determining an optimal methodology: which technologies are needed, in what sequence, to acquire which types of users' context information. Most studies focus on which services and systems should be provided and developed by utilizing context information, alongside the development of context-aware technology. However, the results of this study show that, in terms of users' privacy, it is necessary to pay greater attention to the activities that acquire context information. Following up on the sub-factor evaluation, additional studies would be necessary on approaches to reducing users' privacy concerns toward technological characteristics such as the highly identifiable level of identity data, the diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The factor ranked at the next highest level of importance after input is context-aware service delivery, which is related to output. The results show that the delivery and display presenting services to users in context-aware personalized services, moving toward the anywhere-anytime-any-device concept, have come to be regarded as even more important than in previous computing environments. Considering these concern factors when developing context-aware personalized services will help to increase the service success rate and, hopefully, user acceptance of those services. Our future work will be to adopt these factors for qualifying context-aware service development projects, such as u-city development projects, in terms of service quality and hence user acceptance.
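The concordance analysis described above is commonly operationalized as Kendall's coefficient of concordance, W, computed over the experts' rankings in each round. A minimal sketch of that computation (the expert rankings below are illustrative, not the study's data):

```python
def kendalls_w(rankings):
    """Kendall's coefficient of concordance W for m raters ranking n items.

    rankings: list of m lists, each a ranking (1..n, no ties) of the same n items.
    Returns W in [0, 1]; 1 means perfect agreement among the raters.
    """
    m = len(rankings)        # number of experts
    n = len(rankings[0])     # number of ranked items (factors)
    # Sum of ranks each item received across all raters
    rank_sums = [sum(r[i] for r in rankings) for i in range(n)]
    mean_sum = m * (n + 1) / 2
    s = sum((rs - mean_sum) ** 2 for rs in rank_sums)
    return 12 * s / (m ** 2 * (n ** 3 - n))

# Three hypothetical experts ranking four factors (1 = most important)
experts = [
    [1, 2, 3, 4],
    [1, 3, 2, 4],
    [2, 1, 3, 4],
]
print(round(kendalls_w(experts), 3))
```

A value near 1 over successive rounds indicates that the panel is converging toward consensus, which is the stopping criterion typically used in Delphi studies.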

Studies on the Environmental Factors Affecting the Cocoon Crops in Summer and Autumn in Korea (한국의 하추잠작 안정을 위한 환경요인에 관한 연구)

  • 이상풍
    • Journal of Sericultural and Entomological Science
    • /
    • v.16 no.2
    • /
    • pp.1-34
    • /
    • 1974
  • These experiments pertain to the various factors influencing the quantitative characters of cocoon crops in the summer and early autumn seasons. Initially, in order to establish possible ways of rearing silkworms more than three times a year in Korea, the author attempted to obtain further information about the various factors affecting the cocoon crop in every silkworm rearing season. The trials were conducted eleven times a year at four places for three years. The field trial was conducted with 19 typical sericultural farmers who had been surveyed. At the same time, the author statistically analyzed the various factors closely related to the cocoon crop in the autumn season. The effect of guidance on 40 sericultural farmers was analyzed, comparing higher-level farmers with lower-level farmers, and the author surveyed 758 non-guided farmers near the guided farmers during both the spring and autumn seasons. In addition, another trial on the seasonal change of leaf quality was attempted with artificial diets prepared with leaves grown in each season. It was found that the factors related to cocoon crops in the summer and early autumn seasons appeared to be leaf quality and the temperatures for young and grown larvae. A 2$^4$ factorial experiment was designed for the summer season, and another design, adding one more level of varied temperature or hard leaf to the 2$^4$ factorial experiment, was conducted in early autumn. The experimental results can be summarized as follows: 1. Study on the cocoon crops in the different rearing seasons. 1) It was shown that earlier brushing of silkworms generally produced the most abundant cocoon crop in the spring season, while brushing earlier or later than the conventional season, especially earlier brushing, was unfavorable for an abundant cocoon crop in the autumn season. 2) The cocoon crop was affected by the rearing season, decreasing in order of size through the spring, autumn, late autumn, summer, and early autumn seasons.
3) It was proved that ordinary rearing and branch rearing were possible 4 times a year: in the 1st, 3rd, 8th, and 10th brushing seasons. However, the 11th brushing season was more favorable for the most abundant cocoon crop with branch rearing, instead of the 10th brushing season with ordinary rearing. 2. Study on the main factors affecting the cocoon crop in the autumn season. 1) Accumulated pathogens were a lethal factor leading to a bad cocoon crop through neglected disinfection of the rearing room and instruments. 2) Additional factors leading to a poor cocoon crop were unfavorable rearing temperature and humidity, dense population, poor choice of moderately ripened leaf, and poor feeding techniques. However, there seemed to be no relationship between the cocoon crop and farm management. 3) The percentage of cocoon shell seemed to be mostly affected by leaf quality, and secondarily by the accumulation of pathogens. 3. Study on the effect of guidance on rearing techniques. 1) The guided farms produced an average yearly yield of 29.0kg of cocoons, which varied from 32.3kg to 25.8kg of cocoon yield per box in spring versus autumn, respectively. These figures indicated an annual average increase of 26% in cocoon yield over the yields of non-guided farmers, accounted for by increases of 20% in cocoon yield in spring and 35% in autumn. 2) On guided farms, 77.1% and 83.7% of total cocoon yields in the spring and autumn seasons, respectively, exceeded 3rd grade. This amounted to increases of 14.1% and 11.3% in cocoon yield and quality over those of non-guided farms. 3) The average annual cocoon yield on guided farms was 28.9kg per box, based on a range of 31.2kg to 26.9kg per box in the spring and autumn seasons, respectively. This represented an 8% increase in cocoon yield on farms one year after guidance, as opposed to non-guided farms, due to 3% and 16% cocoon yield increases in the spring and autumn crops.
4) Guidance had no effect on higher-level farms, but was responsible for a 19% increase in production on lower-level farms. 4. Study on the seasonal change of leaf quality. 1) In tests with grown larvae, leaves of the spring crop incorporated into artificial diets produced the best cocoon crop, followed by leaves of the late autumn, summer, autumn, and early autumn crops. 2) The cocoon crop for young larvae as well as for grown larvae varied with the season of the leaf used. 5. Study on the factors affecting cocoon crops in summer and early autumn. A. Early autumn season. 1) Survival rate and cocoon yield were significantly decreased at high rearing temperatures for young larvae. 2) Survival rate, cocoon yield, and cocoon quality were adversely affected by high rearing temperatures for grown larvae; therefore, increases in cocoon quantity and improvement of cocoon quality depend on maintaining optimum temperatures. 3) Decreases in individual cocoon weight and longer larval periods resulted from feeding soft leaf and hard leaf to young larvae, but the survival rate, cocoon yield, and weight of cocoon shell were not influenced. 4) Cocoon yield and cocoon quality were influenced by feeding hard leaf to grown larvae, but survival rate was not influenced by feeding either soft or hard leaf. 5) When grown larvae inevitably had to be raised at varied temperatures, applying varied temperatures in raising both young and grown larvae was desirable; further research concerning this matter must be considered. B. Summer season. 1) Cocoon yield and single cocoon weight decreased at high temperatures for young larvae, and the survival rate was also affected. 2) Cocoon yield, survival rate, and cocoon quality were considerably decreased at high rearing temperatures during the grown larval stages.

  • PDF

Study on the Effects of Shop Choice Properties on Brand Attitudes: Focus on Six Major Coffee Shop Brands (점포선택속성이 브랜드 태도에 미치는 영향에 관한 연구: 6개 메이저 브랜드 커피전문점을 중심으로)

  • Yi, Weon-Ho;Kim, Su-Ok;Lee, Sang-Youn;Youn, Myoung-Kil
    • Journal of Distribution Science
    • /
    • v.10 no.3
    • /
    • pp.51-61
    • /
    • 2012
  • This study seeks to understand how the choice of a coffee shop is related to a customer's loyalty and which characteristics of a shop influence this choice. It considers large coffee shop brands whose market scale has gradually grown. Users' choice of shop is determined by price, employee service, shop location, and shop atmosphere. The study investigated the effects of these four properties on brand attitudes toward coffee shops, and the effects were found to vary depending on users' characteristics. The properties with the largest influence were shop atmosphere and shop location. Therefore, the purpose of the study was to examine the properties that could help coffee shops gain loyal customers and the choice properties that could satisfy consumers' desires. The study examined consumers' perceptions of shop properties when selecting a coffee shop, and the perceptual differences among coffee brands, in order to investigate customers' desires and needs and to suggest ways to supply products and service accordingly. The research methodology consisted of two parts, normative and empirical research, the latter including empirical analysis and statistical analysis; in this study, a statistical analysis of the empirical research was carried out. The study theoretically confirmed the shop choice properties by reviewing previous studies and performed an empirical analysis, including cross tabulation, based on secondary material. The findings were as follows. First, coffee shop choice properties varied by gender. Price advantage influenced the choice of both men and women, and men preferred nearby coffee shops where they could buy coffee more easily and conveniently than women did. The atmosphere of the coffee shop had the greatest influence on both men and women, and shop atmosphere was also found to be the most important property in the analysis by age. In the past, customers selected coffee shops solely to drink coffee.
Now, they select the coffee shop according to its interior, menu variety, and atmosphere, owing to the improved quality and service of coffee shop brands. Second, prices did not differ much because the coffee shops were similarly priced; service quality was considered important and had been raised across brands, so price, employee service, and the other properties did not have a great influence on shop choice. However, those working in the farming, forestry, fishery, and livestock industries were more concerned with the price than with the shop atmosphere. College and graduate school students were also drawn by inexpensive prices. Third, shop choice properties varied depending on income: shop location and shop atmosphere had a greater influence on shop choice. Customers in the income bracket below 2 million won selected low-price coffee shops more than those earning 6 million won or more; beyond this, price advantage showed no relation to differences in income. The higher-income group was not affected by employee service. Fourth, shop choice properties varied depending on place. For instance, customers in Ulsan were the most affected by the price, and those in Busan were the least affected. The shop location had the greatest influence among all of the properties, with Gwangju showing the least influence among the places surveyed. The alternate use of space in a coffee shop was thought to be important in all the cities under consideration. Customers in Ulsan were not affected by employee service, and they selected coffee shops according to quality and their preference for the shop atmosphere. Lastly, the price factor scored a little higher than other factors when customers frequently selected brands according to shop properties. Customers in Gwangju responded to discounts more than those in other cities did, and gave less priority to the quality and taste of coffee. Brand preference varied depending on coffee shop location.
Customers in Busan selected brands according to the coffee shop location, and those in Ulsan were not influenced by employee kindness and specialty. The implication of this study is that franchise coffee shop businesses should focus on customers rather than on aggressive marketing strategies that merely increase the number of coffee shops. Thus, they should create an environment with a good atmosphere and set up coffee shops in places to which customers have good access. This study has some limitations. First, the respondents were concentrated in metropolitan areas: secondary data showed that the number of respondents in Seoul was much greater than that in Gyeonggi-do, which in turn was much greater than those in the six major cities of the nation. Thus, the regional sample was not representative enough of the population. Second, respondents' ratios were used as the measurement scale to test the perception of shop choice properties and brand preference, which created difficulties when examining the relation between these properties and brand preference and when understanding the differences between groups. Therefore, future research should address some of the shortcomings of this study: if coffee shops are being expanded to local areas, a questionnaire survey of consumers in small local cities should be conducted to collect primary material. In particular, the survey variables should be measured using Likert scales in order to capture perceptions of shop choice properties, brand preference, and repurchase. Correlation analysis, multiple regression, and ANOVA should then be used for the empirical analysis and to investigate consumers' attitudes and behavior in detail.
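The cross tabulation used in this study is typically paired with a Pearson chi-square test of independence between a demographic variable and the chosen shop property. A minimal sketch of that statistic (the gender-by-property counts below are hypothetical, not the study's data):

```python
def chi_square(table):
    """Pearson chi-square statistic for a 2-D contingency table of counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            # Expected count under independence of rows and columns
            exp = row_totals[i] * col_totals[j] / total
            stat += (obs - exp) ** 2 / exp
    return stat

# Hypothetical counts: rows = men/women, columns = price/location/atmosphere
table = [
    [30, 20, 50],
    [25, 15, 60],
]
print(round(chi_square(table), 3))
```

Comparing the statistic to a chi-square distribution with (rows−1)(cols−1) degrees of freedom tells whether shop-choice properties really differ by group, rather than by sampling noise.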

  • PDF

A Study on the Meaning and Strategy of Keyword Advertising Marketing

  • Park, Nam Goo
    • Journal of Distribution Science
    • /
    • v.8 no.3
    • /
    • pp.49-56
    • /
    • 2010
  • At the initial stage of Internet advertising, banner advertising came into fashion. As the Internet developed into a central part of daily life and competition in the online advertising market grew fierce, there was not enough space for banner advertising, which rushed to portal sites only. All these factors were responsible for an upsurge in advertising prices. Consequently, the high-cost, low-efficiency problems of banner advertising were raised, which led to the emergence of keyword advertising as a new type of Internet advertising to replace its predecessor. In the early 2000s, when Internet advertising was taking off, display advertising, including banner advertising, dominated the Net. However, display advertising showed signs of gradual decline and registered negative growth in 2009, whereas keyword advertising grew rapidly and started to outdo display advertising as of 2005. Keyword advertising refers to the advertising technique that exposes relevant advertisements at the top of search sites when one searches for a keyword. Instead of exposing advertisements to unspecified individuals as banner advertising does, keyword advertising, a targeted advertising technique, shows advertisements only when customers search for a desired keyword, so that only highly prospective customers are given a chance to see them. In this context, it is also referred to as search advertising. It is regarded as more aggressive advertising with a higher hit rate than previous forms in that, instead of the seller discovering customers and running an advertisement at them as with TV, radio, or banner advertising, it exposes advertisements to visiting customers. Keyword advertising makes it possible for a company to seek publicity online simply by making use of a single word and to achieve maximum efficiency at minimum cost.
The strong point of keyword advertising is that it lets customers contact the products in question directly, making it more efficient than advertisements in mass media such as TV and radio. Its weak points are that a company must have its advertisement registered on each and every portal site and may find it hard to exercise substantial supervision over its advertisement, with the possibility of its advertising expenses exceeding its profits. Keyword advertising serves as one of the most appropriate advertising methods for the sales and publicity of small and medium enterprises, which need maximum advertising effect at low advertising cost. At present, keyword advertising is divided into CPC advertising and CPM advertising. The former is known as the most efficient technique and is also referred to as advertising based on the metered-rate system: a company pays according to the number of clicks its searched keyword receives. This model is representatively adopted by Overture, Google's Adwords, Naver's Clickchoice, and Daum's Clicks. CPM advertising depends on a flat-rate payment system, making a company pay for its advertisement on the basis of the number of exposures, not the number of clicks; this method fixes a price per 1,000 exposures and is mainly adopted by Naver's Timechoice, Daum's Speciallink, and Nate's Speedup. At present, the CPC method is most frequently adopted. The weak point of the CPC method is that advertising costs can rise through constant clicks from the same IP. If a company makes good use of strategies for maximizing the strong points of keyword advertising and complementing its weak points, it is highly likely to turn its visitors into prospective customers.
Accordingly, an advertiser should analyze customers' behavior and approach them in a variety of ways, trying hard to find out what they want. With this in mind, he or she has to put multiple keywords into use when running ads, and when first running an ad, should give priority to which keywords to select. The advertiser should consider how many individuals using a search engine will click the keyword in question and how much the advertisement will cost. As the popular keywords that search engine users frequently use carry a high unit cost per click, advertisers without much money for advertising at the initial phase should pay attention to detailed keywords suitable to their budget. Detailed keywords, also referred to as peripheral or extension keywords, are combinations built on major keywords. Most keyword ads are in the form of text. The biggest strong point of text-based advertising is that it looks like search results, provoking little antipathy; but it fails to attract much attention precisely because most keyword advertising takes this form. Image-embedded advertising is easy to notice because of its images, but it is exposed on the lower part of a web page and is recognized as an advertisement, which leads to a low click-through rate; its strong point, however, is that its prices are lower than those of text-based advertising. If a company owns a logo or a product that people recognize easily, it is well advised to make good use of image-embedded advertising to attract Internet users' attention. Advertisers should analyze their logs and examine customers' responses based on site events and product composition as a vehicle for monitoring behavior in detail.
Besides, keyword advertising allows advertisers to analyze the advertising effects of exposed keywords through log analysis. Log analysis refers to a close analysis of the current situation of a site based on visitor information, namely the number of visitors, page views, and cookie values. A user's IP, the pages used, the time of use, and cookie values are stored in the log files generated by each Web server. The log files contain a huge amount of data, and since it is almost impossible to analyze them directly, one analyzes them using log-analysis solutions. The generic information that can be extracted from log-analysis tools includes total page views, average page views per day, basic page views, page views per visit, total hits, average hits per day, hits per visit, the number of visits, the average number of visits per day, the net number of visitors, average visitors per day, one-time visitors, visitors who have come more than twice, and average usage hours. These data are also useful for analyzing the situation and current status of rival companies as well as for benchmarking. As keyword advertising exposes advertisements exclusively on search-result pages, competition among advertisers attempting to preoccupy popular keywords is very fierce. Some portal sites keep giving priority to existing advertisers, whereas others give all advertisers a chance to purchase the keywords in question after the advertising contract is over.
On sites that give priority to established advertisers, an advertiser who relies on keywords sensitive to seasons and timeliness may as well purchase a vacant advertising slot in advance, lest he or she miss the appropriate timing for advertising. Naver, however, does not give priority to existing advertisers for any keyword advertisements; in this case, one can preoccupy keywords by entering into a contract after confirming the advertising contract period. This study is designed to take a look at marketing for keyword advertising and to present effective strategies for keyword advertising marketing. At present, the Korean CPC advertising market is virtually monopolized by Overture. Its strong points are that Overture is based on the CPC charging model and that its advertisements are registered at the top of the most representative portal sites in Korea; these advantages make it the most appropriate medium for small and medium enterprises. However, the CPC method of Overture has its weak points too: the CPC method is not a perfect advertising model among the search advertisements in the online market. So it is absolutely necessary that small and medium enterprises, including independent shopping malls, complement the weaknesses of the CPC method and make good use of strategies for maximizing its strengths so as to increase their sales and create points of contact with customers.
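The CPC and CPM charging models described above differ only in their billing basis, which a short sketch makes concrete (the rates and click-through figures below are hypothetical, not market prices):

```python
def cpc_cost(clicks, cost_per_click):
    """Pay-per-click: total spend scales with the number of clicks received."""
    return clicks * cost_per_click

def cpm_cost(impressions, rate_per_thousand):
    """Flat-rate: spend is priced per 1,000 exposures, regardless of clicks."""
    return impressions / 1000 * rate_per_thousand

# Hypothetical campaign: 50,000 impressions with a 2% click-through rate
impressions = 50_000
clicks = int(impressions * 0.02)        # 1,000 clicks
print(cpc_cost(clicks, 300))            # e.g. 300 won per click
print(cpm_cost(impressions, 4_000))     # e.g. 4,000 won per 1,000 exposures
```

The comparison shows why click-through rate decides which model is cheaper: with few clicks per exposure CPC costs less, while a high click-through rate (or click fraud from a single IP) makes the flat CPM rate more predictable.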

  • PDF

The Marketing Effect of Loyalty Program on Relational Market Behavior : Focusing in Franchise Membership Fitness Club (로열티 프로그램이 고객 참여와 소비자-브랜드 관계에 기초한 관계형 시장 행동에 미치는 영향 : 프랜차이즈 회원제 휘트니스클럽을 대상으로)

  • Yoon, Kyung-Goo;Shin, Geon-Cheol
    • Journal of Distribution Research
    • /
    • v.17 no.2
    • /
    • pp.1-28
    • /
    • 2012
  • I. Introduction : The purpose of this study is to empirically test hypothetical causality among the constructs used in previous studies, building a model of relational market behavior on customers' participation and consumer-brand relationship, after reviewing theories of relationship marketing, loyalty programs, consumer-brand relationship, and customers' participation in service marketing as background on relational market behavior, on which Bagozzi (1995) and Peterson (1995) commented regarding the constructs and definitions suggested by Sheth and Parvatiyar (1995). For this purpose, the loyalty program offered by the service provider, customers' participation, and consumer-brand relationship serve as preceding variables explaining relational market behavior as defined by Sheth and Parvatiyar (1995). This study proposes that a loyalty program, as a tool of relationship marketing, will be effective in that consumers' participation in a marketing relationship results in a narrower range of choice (Sheth and Parvatiyar, 1995), because consumers believe that their participation motive results in benefits (Peterson, 1995). It is also proposed that the quality of the consumer-brand relationship explains the performance of the relationship as well as a mediating effect, because the loyalty program can be evaluated based on the relationship with customers. We reviewed the variables regarding relationship performance based on relationship maintenance in the marketing literature, and then tested our hypotheses related to several performance variables, including loyalty and the intention to maintain the relationship, based on previous studies and constructs (Bendapudi and Berry, 1997; Bettencourt, 1997; Palmatier, Dant, Grewal and Evans, 2006; You Jae Yi and Soo Jin Lee, 2006). II. Study Model : Analyses of the hypothetical causality were conducted. The marketing effect of the loyalty program on relational market behavior was empirically tested with regard to a service provider.
The research model, following the path hypotheses (loyalty program ${\rightarrow}$ customers' participation ${\rightarrow}$ consumer-brand relationship ${\rightarrow}$ relational market behavior; loyalty program ${\rightarrow}$ consumer-brand relationship; loyalty program ${\rightarrow}$ relational market behavior; customers' participation ${\rightarrow}$ consumer-brand relationship; and customers' participation ${\rightarrow}$ relational market behavior), was suggested as an activity for customer relationship management. The main purpose of the study is to see whether relational market behavior results when the relationship between consumers and a corporation develops into a stronger and more valuable one as the corporation or service provider tries aggressively to build relationships with customers (Bettencourt, 1997; Palmatier, Dant, Grewal and Evans, 2006; Sheth and Parvatiyar, 1995). III. Conclusion : The results of the research on the membership fitness club, one of the service areas with a high level of customer participation (Bitner, Faranda, Hubbert and Zeithaml, 1997; Chase, 1978; Kelley, Donnelly, Jr. and Skinner, 1990), are as follows. First, the causalities according to the path hypotheses were tested after the preceding variables affecting relational market behavior and a conceptual frame were suggested. All hypotheses in the study were supported as expected. This result confirms the proposition of Sheth and Parvatiyar (1995), who claimed that the intention of consumers and corporations to participate in a marketing relationship brings a high level of marketing productivity. Also, as a corporation or a service provider aggressively tries to build relationships with customers, the relationship between consumers and the corporation can develop into a stronger and more valuable one (Bettencourt, 1997; Palmatier, Dant, Grewal and Evans, 2006). This finding supports the logic of relationship marketing.
Second, because the question regarding the path hypothesis consumer-brand relationship ${\rightarrow}$ relational market behavior is still at issue, further analyses were conducted. In particular, consumer-brand relationship had mediating effects toward relational market behavior. Multiple regressions were also conducted to see which of the specific items composing consumer-brand relationship most strongly influence relational market behavior. The influence between the items composing consumer-brand relationship and those composing relational market behavior differed. Among the items composing consumer-brand relationship, intimacy influenced relationship maintenance, word of mouth, and recommendation; intimacy and interdependence influenced loyalty; and intimacy and self-connection influenced tolerance and advice. Notably, commitment, among the items measuring consumer-brand relationship, had a negative influence on relational market behavior. This means that what brings about relational market behavior is not personal commitment, but the effort to build a customer relationship through intimacy, interdependence, and self-connection. This finding confirms the results of Breivik and Thorbjornsen (2008), who reported that the six variables composing the quality of the consumer-brand relationship have higher explanatory power in a regression model directly affecting the performance of the consumer-brand relationship. As a result of the empirical analysis, among the constructs of the consumer-brand relationship, intimacy (B=0.512), interdependence (B=0.196), and quality of partner (B=0.153) had effects on relationship maintenance. On the contrary, self-connection, love and passion, and commitment had little effect and did not show statistical significance (p<0.05). On the other hand, intimacy (B=0.668) and interdependence (B=0.181) had high regression estimates on word of mouth and recommendation.
Regarding the effect on loyalty, the explanatory power of the model was high ($R^2$=0.515); intimacy (0.538), interdependence (0.223), and quality of partner (0.177) showed statistical significance (p<0.05). Furthermore, intimacy (0.441) had a strong effect on tolerance and forgiveness, as did self-connection (0.201) and interdependence (0.163). These three variables showed effects even on advice and suggestion: intimacy (0.373), self-connection (0.270), and interdependence (0.155), respectively. Third, among the positive-effect path hypotheses (loyalty program ${\rightarrow}$ customers' participation, loyalty program ${\rightarrow}$ consumer-brand relationship, loyalty program ${\rightarrow}$ relational market behavior, customers' participation ${\rightarrow}$ consumer-brand relationship, customers' participation ${\rightarrow}$ relational market behavior, consumer-brand relationship ${\rightarrow}$ relational market behavior), the path hypothesis customers' participation ${\rightarrow}$ consumer-brand relationship was supported. This confirms the assertions of Bitner (1995), Fournier (1994), and Sheth and Parvatiyar (1995) about consumers' participation in marketing relationships.
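The regression estimates reported above (e.g. intimacy B=0.512 on relationship maintenance, model $R^2$=0.515 for loyalty) come from ordinary least squares. For a single predictor the estimate reduces to a closed form, sketched here with made-up scores rather than the study's survey data:

```python
def ols_fit(x, y):
    """Ordinary least squares for y = a + b*x (single predictor)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # Slope = covariance(x, y) / variance(x); intercept from the means
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

def r_squared(x, y):
    """Proportion of variance in y explained by the fitted line."""
    a, b = ols_fit(x, y)
    my = sum(y) / len(y)
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

# Hypothetical 5-point intimacy scores vs. a loyalty measure
intimacy = [1, 2, 3, 4, 5]
loyalty = [2, 3, 4, 4, 6]
a, b = ols_fit(intimacy, loyalty)
print(round(b, 3), round(r_squared(intimacy, loyalty), 3))
```

The study's models regress on several relationship-quality constructs at once, so the reported B values are partial coefficients; this single-predictor sketch only illustrates where such estimates and $R^2$ come from.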

  • PDF