• Title/Summary/Keyword: Independent Variables

4,466 search results

The Effect of Price Discount Rate According to Brand Loyalty on Consumer's Acquisition Value and Transaction Value (브랜드애호도에 따른 가격할인율의 차이가 소비자의 획득가치와 거래가치에 미치는 영향)

  • Kim, Young-Ei;Kim, Jae-Yeong;Shin, Chang-Nag
    • Journal of Global Scholars of Marketing Science
    • /
    • v.17 no.4
    • /
    • pp.247-269
    • /
    • 2007
  • In recent years, one of the major reasons for the fierce competition amongst firms is that they strive to increase their market shares and customer acquisition rates in the same market with similar and apparently undifferentiated products in terms of quality and perceived benefit. Because of this change in the recent marketing environment, differentiated after-sales service and diversified promotion strategies have become more important for gaining competitive advantage. Price promotion is the favorite strategy that most retailers use to achieve short-term sales increases, induce consumers' brand switching, introduce new products into the market, and so forth. However, if marketers apply or copy an identical price promotion strategy without considering differences in product characteristics and consumer preference, it will cause serious problems, because a discounted price itself can make people skeptical about product quality, and changes in perceived value may appear differently depending on other factors such as consumer involvement or brand attitude. Previous studies showed that price promotion would certainly increase sales, and that a discounted price compared to the regular price would enhance consumers' perceived values. On the other hand, a discounted price itself can make people depreciate product quality or become skeptical about it, and can reduce consumers' positivity bias, because consumers may be unsure whether the current price promotion is the retailer's best price offer. Moreover, we cannot say that a discounted price absolutely enhances consumers' perceived values regardless of product category and purchase situation. That is, the factors that affect consumers' value perceptions and buying behavior are so diverse in reality that the results of studies on the same dependent variable come out differently depending on what variables were used or how the experimental conditions were designed. 
The majority of previous research on the effect of price-comparison advertising has used consumers' buying behavior as the dependent variable. In order to explain consumers' buying behavior theoretically, an analysis of the value perceptions which influence buying intentions is needed. In addition, previous studies did not combine independent variables such as brand loyalty and price discount rate together. For this reason, this paper tried to examine the moderating effect of brand loyalty on the relationship between different levels of discount rate and buyers' value perceptions. We also provided theoretical and managerial implications: marketers need to consider such variables as product attributes, brand loyalty, and consumer involvement at the same time, and then establish a differentiated pricing strategy case by case in order to enhance consumers' perceived values properly. Three research concepts were used in our study, and each concept was defined based on past research. The perceived acquisition value in this study was defined as the perceived net gains associated with the products or services acquired. That is, the perceived acquisition value of the product will be positively influenced by the benefits buyers believe they are getting by acquiring and using the product, and negatively influenced by the money given up to acquire the product. The perceived transaction value was defined as the perception of psychological satisfaction or pleasure obtained from taking advantage of the financial terms of the price deal. Lastly, brand loyalty was defined as a favorable attitude towards a purchased product. Thus, a consumer loyal to a brand has an emotional attachment to the brand or firm, whereas repeat purchasers continue to buy the same brand even though they do not have an emotional attachment to it. 
We assumed that if the degree of brand loyalty is high, the perceived acquisition value and the perceived transaction value will increase when a higher discount rate is provided. However, the empirical analysis found no significant differences in values between the two discount rates. This means that the price reduction did not affect consumers' brand choice significantly, because the perceived sacrifice decreased only a little and customers are already satisfied with the product's benefits when brand loyalty is high. From this result, we confirmed that consumers with a high degree of brand loyalty to a specific product are less sensitive to price changes. Thus, using a price promotion strategy merely in expectation of a sales increase is not advisable. Instead of discounting the price, marketers need to strengthen consumers' brand loyalty and maintain a skimming strategy. On the contrary, when the degree of brand loyalty is low, the perceived acquisition value and the perceived transaction value decreased significantly when a higher discount rate was provided. Generally, brands that are considered inferior might be able to draw attention away from the quality of the product by making consumers focus more on the sacrifice component of price. But considering the fact that consumers with a low degree of brand loyalty are known to be unsatisfied with the product's benefits and to hold a relatively negative brand attitude, the bigger price reduction offered in the experimental condition of this paper made consumers depreciate the product's quality and benefits even more, and consumers' psychological perceived sacrifice increased while perceived values decreased accordingly. We infer that, in the case of an inferior brand, a drastic price cut or frequent price promotion may increase consumers' uncertainty about the overall components of the product. 
Therefore, it appears that reinforcing the augmented product, such as after-sales service, delivery, and credit terms, which is one of the levels constituting a product, would be more effective in practice than competing with a product that holds high brand loyalty by reducing the sale price. Although this study tried to examine the moderating effect of brand loyalty on the relationship between different levels of discount rate and buyers' value perceptions, there are several limitations. First, this study was conducted under controlled conditions in which a high-involvement product and two different levels of discount rate were applied. Had a low-involvement product been included, when both pieces of information are available, the results reported here might have been different; thus, these results explain only this specific situation. Second, the sample selected in this study consisted of university students in their twenties, so we cannot say that the results generalize firmly to all generations. Future research that manipulates the level of discount along with consumer involvement might lead to a more robust understanding of the effects of various discount rates. Also, we used a cellular phone as the product stimulus, so it would be very interesting to analyze the results when the product stimulus is an intangible product such as a service. It would also be valuable to analyze whether the change in perceived value affects consumers' final buying behavior positively or negatively.


DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

  • 박만배
    • Proceedings of the KOR-KST Conference
    • /
    • 1995.02a
    • /
    • pp.101-113
    • /
    • 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network. This will provide the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement. Consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS is not able to project pavement performance trends in order to make assessments and recommendations in future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the Origin-Destination survey data available from WisDOT, including two stateline areas, one county, and five cities, are analyzed and the zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are applied to the Gravity Model (GM) for comparison with comparable TLFs from the GM. The gravity model is calibrated to obtain friction factor curves for the three trip types, Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" calibration and "micro-scale" calibration are performed. 
The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These "partial" GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. Three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration data base. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. However, for this research, the information available for the development of the GM model is limited to Ground Counts (GC) and a limited set of OD TLFs. The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, Selected Link based (SELINK) analyses are used to adjust the productions and attractions and possibly recalibrate the GM. 
The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that "selected link". Selected link based analyses are conducted using both 16 selected links and 32 selected links. The SELINK analysis using 32 selected links provides the least %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by 32 selected links is 107% of total trip productions. More importantly, SELINK adjustment factors can be computed for all of the zones. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted using screenline volume analysis, functional class and route specific volume analysis, area specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used for evaluation of the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals. LV/GC ratios of 0.958 using 32 selected links and 1.001 using 16 selected links are obtained. 
The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The magnitude of the %RMSE for the four screenlines resulting from the fourth and last GM run using 32 and 16 selected links is 22% and 31% respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves, of 19% and 33% respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route specific volume analysis is possible using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the State highways (STH) with 67 check points is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not show any specific pattern of over- or underestimation. However, the %RMSE for the ISH shows the least value while that for the STH shows the largest value. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups. Area specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results. No specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As for the screenline and volume range analyses, the %RMSE is inversely related to average link volume. 
The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production trip rate (total adjusted productions/total population) and a new trip attraction trip rate. Revised zonal production and attraction adjustment factors can then be developed that only reflect the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond just population are needed for the development of the heavy truck trip generation model. More independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not currently available, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8, while the productions for a relatively few zones are increased by factors from 1.1 to 4, with most of the factors in the 3.0 range. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments overall appear to be reasonable. 
The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data. This gives an estimate that is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH. This implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%. Only 14.1% of total freeway truck traffic is I-I trips, while 80% of total collector truck traffic is I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class, and travel speed, are useful information for highway planners to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted using the GM truck forecasting model. Four scenarios are used. For better forecasting, ground count-based segment adjustment factors are developed and applied. ISH 90 & 94 and USH 41 are used as example routes. The forecasting results using the ground count-based segment adjustment factors are satisfactory for long range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes. 
The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.
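The SELINK adjustment described in this abstract reduces to two simple computations: a link adjustment factor, the ratio of the observed ground count to the model-assigned link volume, and a %RMSE measure normalized by the average ground count used to screen model fit. A minimal sketch of both (function names and example values are illustrative, not from the original study):

```python
import math

def link_adjustment_factor(ground_count, assigned_volume):
    """Ratio of the observed link volume (ground count) to the total
    volume assigned to the link by the standard traffic assignment."""
    return ground_count / assigned_volume

def pct_rmse(assigned, observed):
    """Percent RMSE of assigned vs. observed link volumes,
    normalized by the average observed (ground count) volume."""
    n = len(observed)
    rmse = math.sqrt(sum((a - o) ** 2 for a, o in zip(assigned, observed)) / n)
    return 100.0 * rmse / (sum(observed) / n)

# A link counted at 958 trucks but assigned 1,000 scales the
# productions and attractions of the zones using it by 0.958.
factor = link_adjustment_factor(958, 1000)
```

The factor is then applied to the productions and attractions of every zone whose trips use that selected link, and the gravity model is rerun; the inverse relationship the abstract reports between %RMSE and average ground count falls directly out of the normalization in `pct_rmse`.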


Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.95-108
    • /
    • 2017
  • Recently, AlphaGo, a Baduk (Go) artificial intelligence program by Google DeepMind, won a landmark victory against Lee Sedol. Many people thought that machines would not be able to beat a human at Go because, unlike chess, the number of possible move sequences exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence was highlighted as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning has attracted attention as the core artificial intelligence technique used in the AlphaGo algorithm. Deep learning is already being applied to many problems and shows especially good performance in image recognition. It also performs well on high-dimensional data such as voice, images, and natural language, where it was difficult to achieve good performance with existing machine learning techniques. In contrast, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we investigated whether the deep learning techniques studied so far can be used not only for the recognition of high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared the performance of deep learning techniques with that of traditional artificial neural network models. The experimental data in this paper are the telemarketing response data of a bank in Portugal. They include input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer intends to open an account. 
In this study, to evaluate the applicability of deep learning algorithms and techniques to binary classification problems, we compared the performance of various models using the CNN and LSTM algorithms and dropout, which are widely used algorithms and techniques in deep learning, with that of MLP models, a traditional artificial neural network model. However, since not all network design alternatives can be tested due to the nature of artificial neural networks, the experiment was conducted with restricted settings for the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the application conditions of the dropout technique. The F1 score was used to evaluate model performance, to show how well the models classify the class of interest rather than overall accuracy. The detailed methods for applying each deep learning technique in the experiment are as follows. The CNN algorithm reads adjacent values around a specific value and recognizes features, but the distance between business data fields does not matter because each field is usually independent. In this experiment, we set the filter size of the CNN to the number of fields so that the whole characteristics of the data are learned at once, and added a hidden layer to make decisions based on the additional features. For the model having two LSTM layers, the input direction of the second layer is reversed with respect to the first layer in order to reduce the influence of the position of each field. For the dropout technique, we set neurons to drop out with a probability of 0.5 in each hidden layer. The experimental results show that the model with the highest F1 score was the CNN model using dropout, and the next best model was the MLP model with two hidden layers using dropout. 
In this study, we obtained several findings from the experiment. First, models using dropout make slightly more conservative predictions than those without it, and they generally show better classification performance. Second, CNN models show better classification performance than MLP models. This is interesting because the CNN performed well in a binary classification problem, to which it has rarely been applied, as well as in the fields where its effectiveness has already been proven. Third, the LSTM algorithm seems unsuitable for binary classification problems because its training time is too long relative to the performance improvement. From these results, we can confirm that some deep learning algorithms can be applied to solve business binary classification problems.
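The F1 score used above for model comparison is the harmonic mean of precision and recall on the class of interest, which matters when, as in telemarketing response data, the positive class is a small minority and overall accuracy would reward always predicting "no". A self-contained sketch (the function and its arguments are illustrative, not the paper's code):

```python
def f1_score(y_true, y_pred, positive=1):
    """F1 = 2 * precision * recall / (precision + recall),
    computed for the designated positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    if tp == 0:
        return 0.0  # no true positives: precision or recall is zero
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

A classifier that never predicts the positive class scores 0.0 here regardless of how many negatives it gets right, which is exactly the behavior the abstract's choice of F1 over accuracy is meant to penalize.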

APPROXIMATE ESTIMATION OF RECRUITMENT IN FISH POPULATION UTILIZING STOCK DENSITY AND CATCH (밀도지수와 어획량으로서 수산자원의 가입량을 근사적으로 추정하는 방법)

  • KIM Kee Ju
    • Korean Journal of Fisheries and Aquatic Sciences
    • /
    • v.8 no.2
    • /
    • pp.47-60
    • /
    • 1975
  • For the calculation of population parameters and the estimation of recruitment of a fish population, a multiple regression method was applied with some statistical inferences. The differences between the calculated values and the true parameters were then discussed. In addition, the method was evaluated by applying it to the statistical data of a population of bigeye tuna, Thunnus obesus, of the Indian Ocean. The method was also applied to the available data of a population of Pacific saury, Cololabis saira, to estimate its recruitments. The stock in year t and year t+1 is related by $N_{0,\;t+1}=N_{0,\;t}(1-m_t)-C_t+R_{t+1}$ where $N_0$ is the initial number of fish in a given year; C, the number of fish caught; R, the number of recruits; and $m$, the rate of natural mortality. Expressed in terms of CPUE, the foregoing equation becomes $$\phi_{t+1}=\frac{(1-e^{-Z_{t+1}})Z_t}{(1-e^{-Z_t})Z_{t+1}}(1-m_t)\phi_t-a'\frac{1-e^{-Z_{t+1}}}{Z_{t+1}}C_t+a'\frac{1-e^{-Z_{t+1}}}{Z_{t+1}}R_{t+1}\quad\cdots(1)$$ where $\phi$ is CPUE; a', the ratio of CPUE $(\phi)$ to the average stock $(\bar{N})$ in number; Z, the total mortality coefficient; and M, the natural mortality coefficient. In equation (1), the term $(1-e^{-Z_{t+1}})/Z_{t+1}$ is almost constant with respect to variation in effort (X); therefore, when R is constant, the coefficients of $\phi_t$ and $C_t$ can be calculated by the method of multiple regression, where $\phi_{t+1}$ is the dependent variable and $\phi_t$ and $C_t$ are the independent variables. The values of M and a' are calculated from the coefficients of $\phi_t$ and $C_t$ and the total mortality coefficient (Z), where Z = a'X + M. By substituting M, a', $Z_t$, and $Z_{t+1}$ into equation (1), the recruitment $(R_{t+1})$ can be calculated. In this process $\phi$ can be replaced by an index of stock in number (N'). 
This operational procedure of the multiple regression method is applicable to data which satisfy the above assumptions, even if the data were collected from any chosen years with similar recruitments rather than from consecutive years. Under conditions of varying effort, data with such variation can be treated effectively by this method. The calculated values of M and a' include some deviation from the population parameters. Therefore, the estimated recruitment (R) is a relative value rather than an absolute one. The multiple regression method is also applicable to stock density and yield in weight instead of in number. For the data of the bigeye tuna of the Indian Ocean, the estimated recruitments (R) calculated from the parameters obtained by the present multiple regression method are proportional, with an identical fluctuation pattern, to the values derived from the parameters M and a' calculated by Suda (1970) for the same data. Estimated recruitments of Pacific saury off the eastern coast of Korea were calculated by the present multiple regression method. Not only the spring recruitment $(1965\~1974)$ but also the fall recruitment $(1964\~1973)$ was found to fluctuate in accordance with the fluctuations of stock densities (CPUE) of the same spring and fall, respectively.
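The regression step in this abstract, fitting the dependent variable $\phi_{t+1}$ on the two independent variables $\phi_t$ and $C_t$, is an ordinary least-squares problem with two predictors and an intercept. A minimal stdlib sketch using the normal equations (variable names are illustrative; the original study's computation is not available):

```python
def ols_two_predictors(y, x1, x2):
    """Least-squares fit of y = b0 + b1*x1 + b2*x2 via the
    normal equations (X^T X) b = X^T y; returns [b0, b1, b2]."""
    n = len(y)
    X = [[1.0, a, b] for a, b in zip(x1, x2)]  # design matrix with intercept
    XtX = [[sum(X[k][i] * X[k][j] for k in range(n)) for j in range(3)]
           for i in range(3)]
    Xty = [sum(X[k][i] * y[k] for k in range(n)) for i in range(3)]
    # Solve the 3x3 system by Gauss-Jordan elimination with partial pivoting.
    A = [row[:] + [v] for row, v in zip(XtX, Xty)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(3):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    return [A[i][3] / A[i][i] for i in range(3)]
```

With the fitted coefficients in hand, M and a' follow from the relationships the abstract gives, and equation (1) can then be inverted for the recruitment term.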


The Impact of Conflict and Influence Strategies Between Local Korean-Products-Selling Retailers and Wholesalers on Performance in Chinese Electronics Distribution Channels: On Moderating Effects of Relational Quality (중국 가전유통경로에서 한국제품 현지 판매업체와 도매업체간 갈등 및 영향전략이 성과에 미치는 영향: 관계 질의 조절효과)

  • Chun, Dal-Young;Kwon, Joo-Hyung;Lee, Guo-Ming
    • Journal of Distribution Research
    • /
    • v.16 no.3
    • /
    • pp.1-32
    • /
    • 2011
  • I. Introduction: In the Chinese electronics industry, local wholesalers are still dominant, but power is rapidly shifting from wholesalers to retailers because large foreign retailers and local mass merchandisers have recently been growing fast. During such a transitional period, conflicts among channel members emerge as important issues. For example, when wholesalers who have more power exercise influence strategies to maintain their status, conflicts among manufacturer, wholesaler, and retailer will be intensified. Korean electronics companies in China need differentiated channel strategies, dealing with wholesalers and retailers simultaneously, to sell more Korean products in competition with foreign firms. For example, Korean electronics firms should utilize 'guanxi', or relational quality, to form long-term relationships with wholesalers rather than relying on power and raising conflict. The major purpose of this study is to investigate the impact of conflict, dependency, and influence strategies between local Korean-products-selling retailers and wholesalers on performance in Chinese electronics distribution channels. In particular, this paper proposes effective distribution strategies for Korean electronics companies in China by analyzing the moderating effects of 'guanxi'. II. Literature Review and Hypotheses: The specific purposes of this study are as follows. First, causes of conflict between local Korean-products-selling retailers and wholesalers are examined from the perspectives of goal incongruence and role ambiguity, and the effects of these causes on the conflict perceived by local retailers are identified. Second, the effects of local retailers' dependency upon wholesalers on their perceived conflict are investigated. Third, the effects of non-coercive influence strategies such as information exchange and recommendation, and of coercive strategies such as threats and legalistic pleas, exercised by wholesalers are explored on the conflict perceived by local retailers. 
Fourth, the effects of the level of conflict perceived by local retailers on local retailers' financial performance and satisfaction are verified. Fifth, the moderating effects of relational quality, that is, 'guanxi' between wholesalers and retailers, are analyzed on the impact of wholesalers' influence strategies on retailers' performance. Finally, the moderating effects of relational quality are examined on the relationship between conflict and performance. To accomplish the above-mentioned research objectives, Figure 1 and the following research hypotheses are proposed and verified. III. Measurement and Data Analysis: To verify the proposed research model and hypotheses, data were collected from 97 retailers selling Korean electronic products located in the Central and Southern regions of China. Covariance analysis and moderated regression analysis were employed to validate the hypotheses. IV. Conclusion: The following results were drawn using structural equation modeling and hierarchical moderated regression. First, goal incongruence perceived by local retailers significantly affected conflict, but role ambiguity did not. Second, consistent with conflict spiral theory, the level of conflict decreased as retailers' dependency on wholesalers increased. Third, non-coercive influence strategies such as information exchange and recommendation implemented by wholesalers had significant effects on retailers' performance, such as sales and satisfaction, without conflict. On the other hand, coercive influence strategies such as threats and legalistic pleas had insignificant effects on performance despite increasing the level of conflict. Fourth, 'guanxi', namely relational quality between local retailers and wholesalers, showed unique effects on performance. In the case of non-coercive influence strategies, 'guanxi' did not play the role of a moderator; rather, relational quality and non-coercive influence strategies can serve as independent variables to enhance performance. 
On the other hand, when 'guanxi' was well built due to mutual trust and commitment, relational quality as a moderator can positively function to improve performance even though hostile, coercive influence strategies were implemented. Fifth, 'guanxi' significantly moderated the effects of conflict on performance. Even if conflict arises, local retailers who form solid relational quality can increase performance by dealing with dysfunctional conflict synergistically compared with low 'quanxi' retailers. In conclusion, this study verified the importance of relational quality via 'quanxi' between local retailers and wholesalers in Chinese electronic industry because relational quality could cross out the adverse effects of coercive influence strategies and conflict on performance.
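The hierarchical moderated regression described above tests moderation by adding a product term (influence strategy × guanxi) to the main-effects model. A minimal ordinary-least-squares sketch, assuming hypothetical retailer scores (the variable names and numbers below are illustrative, not the study's data):

```python
def solve(A, b):
    # Gauss-Jordan elimination with partial pivoting for a small linear system
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * v for a, v in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def ols(X, y):
    # beta = (X'X)^{-1} X'y via the normal equations
    k = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * t for r, t in zip(X, y)) for i in range(k)]
    return solve(XtX, Xty)

# hypothetical scores: x = coercive influence, m = guanxi, y = performance
xs = [1, 2, 3, 4, 5, 6, 2, 5]
ms = [1, 3, 2, 5, 4, 6, 6, 1]
y = [1 + 2 * x + 3 * m + 0.5 * x * m for x, m in zip(xs, ms)]  # built-in interaction

# second step of the hierarchy: main effects plus the x*m product term
X = [[1, x, m, x * m] for x, m in zip(xs, ms)]
beta = ols(X, y)
print([round(b, 3) for b in beta])  # recovers [1.0, 2.0, 3.0, 0.5]
```

A significant coefficient on the product term (here the 0.5 on x*m) is what the moderation hypotheses test; the study found such moderation for coercive, but not non-coercive, strategies.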


The Effect of Structured Information on the Sleep Amount of Patients Undergoing Open Heart Surgery (계획된 간호 정보가 수면량에 미치는 영향에 관한 연구 -개심술 환자를 중심으로-)

  • 이소우
    • Journal of Korean Academy of Nursing
    • /
    • v.12 no.2
    • /
    • pp.1-26
    • /
    • 1982
  • The main purpose of this study was to test the effect of structured information on the sleep amount of patients undergoing open heart surgery. This study specifically addressed the following two basic research questions: (1) Would the structured information reduce sleep disturbance related to anxiety and physical stress before and after the operation? and (2) What would be the effects of the structured information on the level of preoperative state anxiety, the hormonal change, and the degree of behavioral change in patients undergoing open heart surgery? A quasi-experimental study was designed to answer these questions with one experimental group and one control group. Subjects in both groups were matched as closely as possible to avoid the effect of differences inherent in group characteristics. Baseline data were also collected on both groups for 7 days prior to the experiment, and subjects in both groups were found to have comparable sleep patterns, trait anxiety, hormonal levels, and behavioral levels. Structured information, as the experimental input, was given to the subjects in the experimental group only. Data were collected and compared between the experimental group and the control group on the sleep amount of the consecutive pre- and post-operative days, on the preoperative state anxiety level, and on hormonal and behavioral changes. To test the effectiveness of the structured information, two main hypotheses and three sub-hypotheses were formulated as follows: Main hypothesis 1: The experimental group, which received structured information, will have a greater sleep amount than the control group without structured information on the night before the open heart surgery.
Main hypothesis 2: The experimental group with structured information will have a greater sleep amount than the control group without structured information during the week following the open heart surgery. Sub-hypothesis 1: The experimental group with structured information will have a lower level of state anxiety than the control group without structured information on the night before the open heart surgery. Sub-hypothesis 2: The experimental group with structured information will have a lower hormonal level than the control group without structured information on the 5th day after the open heart surgery. Sub-hypothesis 3: The experimental group with structured information will have a lower behavioral change level than the control group without structured information during the week after the open heart surgery. The research was conducted in a national university hospital in Seoul, Korea. The 53 subjects who participated in the study were divided into an experimental group and a control group by random sampling. Among the 53 subjects, 26 were placed in the experimental group and 27 in the control group. Instruments: (1) Structured information: Structured information, the independent variable, was constructed by the researcher on the basis of Roy's adaptation model, consisting of physiologic needs, self-concept, role function, and interdependence needs as related to sleep, and of operational procedures. (2) Sleep amount measure: Sleep amount, the main dependent variable, was measured by trained nurses through observation on the basis of established criteria, such as closed or open eyes, regular or irregular respiration, body movement, posture, responses to light and questions, facial expressions, and self-report after sleep. (3) State anxiety measure: State anxiety, a sub-dependent variable, was measured by Spielberger's STAI anxiety scale. (4) Hormonal change measure: Hormone level, a sub-dependent variable, was measured by the cortisol level in plasma.
(5) Behavior change measure: Behavior, a sub-dependent variable, was measured by Wyatt's Behavior and Mood Rating Scale. The data were collected over a period of four months, from June to October 1981, after a pretest period of two months. For the analysis of the data and the tests of the hypotheses, the t-test with mean differences and analysis of covariance were used. The results of the tests of the instruments were as follows: (1) The STAI measurement of trait and state anxiety, analyzed with Cronbach's alpha coefficient for item analysis and reliability, showed reliability levels of r=.90 and r=.91, respectively. (2) The Behavior and Mood Rating Scale measurement was analyzed by means of the principal component analysis technique. The seven factors retained were anger, anxiety, hyperactivity, depression, bizarre behavior, suspicious behavior, and emotional withdrawal. The cumulative percentage of the factors was 71.3%. The results of the hypothesis tests were as follows: (1) Main hypothesis 1 was not supported. The experimental group had 282 minutes of sleep as compared to 255 minutes for the control group. Thus the sleep amount was higher in the experimental group than in the control group; however, the difference was not statistically significant at the .05 level. (2) Main hypothesis 2 was not supported. The mean sleep amounts of the experimental group and control group were 297 minutes and 278 minutes, respectively. Therefore, the experimental group had a greater sleep amount than the control group; however, the difference was not statistically significant at the .05 level. (3) Sub-hypothesis 1 was not supported. The mean state anxiety scores of the experimental group and control group were 42.3 and 43.9. Thus, the experimental group had a slightly lower state anxiety level than the control group; however, the difference was not statistically significant at the .05 level. (4) Sub-hypothesis 2 was not supported.
The mean hormonal levels of the experimental group and control group were 338 ㎍ and 440 ㎍, respectively. Thus, the experimental group showed a lower hormonal level than the control group; however, the difference was not statistically significant at the .05 level. (5) Sub-hypothesis 3 was supported. The mean behavioral scores of the experimental group and control group were 29.60 and 32.00, respectively. Thus, the experimental group showed a lower behavioral change level than the control group, and the difference was statistically significant at the .05 level. In summary, the structured information did not influence the sleep amount, state anxiety, or hormonal level of the subjects undergoing open heart surgery at a statistically significant level; however, it showed definite trends in their relationships, not to mention its significant effect on the behavioral change level. It can further be speculated that the great degree of individual difference in variables such as sleep amount, state anxiety, and fluctuation in hormonal level may partly be responsible for the statistical insensitivity of the experiment.
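The r=.90 and r=.91 reliabilities above are Cronbach's alpha values, which compare the sum of the item variances with the variance of the respondents' total scores. A minimal sketch, assuming hypothetical item scores (three identical items yield the maximum alpha of 1.0):

```python
def cronbach_alpha(items):
    # items: one list of scores per scale item, same respondents in each
    k = len(items)
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total
    return (k / (k - 1)) * (1 - sum(var(i) for i in items) / var(totals))

# hypothetical respondents: perfectly consistent items give alpha = 1.0
base = [2, 4, 3, 5, 1, 4]
print(round(cronbach_alpha([base, base, base]), 3))  # 1.0
```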


Short-Term Efficacy of Steroid and Immunosuppressive Drugs in Patients with Idiopathic Pulmonary Fibrosis and Pre-treatment Factors Associated with Favorable Response (특발성폐섬유화증에서 스테로이드와 면역억제제의 단기 치료효과 및 치료반응 예측인자)

  • Kang, Kyeong-Woo;Park, Sang-Joon;Koh, Young-Min;Lee, Sang-Pyo;Suh, Gee-Young;Chung, Man-Pyo;Han, Jung-Ho;Kim, Ho-Joong;Kwon, O-Jung;Lee, Kyung-Soo;Rhee, Chong-H.
    • Tuberculosis and Respiratory Diseases
    • /
    • v.46 no.5
    • /
    • pp.685-696
    • /
    • 1999
  • Background: Idiopathic pulmonary fibrosis (IPF) is a diffuse inflammatory and fibrosing process that occurs within the interstitium and alveoli of the lung, with an invariably poor prognosis. The major problem in the management of IPF results from the variable rate of disease progression and the difficulty of predicting the response to therapy. The purpose of this retrospective study was to evaluate the short-term efficacy of steroid and immunosuppressive therapy for IPF and to identify pre-treatment determinants of a favorable response. Method: Twenty patients with IPF were included. The diagnosis of IPF was proven by thoracoscopic lung biopsy, and the patients were presumed to have active, progressive disease. The baseline evaluation in these patients included clinical history, pulmonary function tests, bronchoalveolar lavage (BAL), and chest high-resolution computed tomography (HRCT). Fourteen patients received oral prednisolone treatment with an initial dose of 1 mg/kg/day for 8 to 12 weeks, tapering to low-dose prednisolone (0.25 mg/kg/day). Six patients who had previously experienced significant side effects from steroids received 2 mg/kg/day of oral cyclophosphamide with or without low-dose prednisolone. Follow-up evaluation was performed after 6 months of therapy. Patients who met one or more of the following criteria were considered responders: (1) improvement of more than one grade in the dyspnea index, (2) improvement in FVC or TLC of more than 10% or improvement in DLco of more than 20%, (3) decreased extent of disease on chest HRCT findings. Result: One patient died of an extrapulmonary cause after 3 months of therapy, and another patient gave up further medical therapy due to side effects of steroids. Eventually, the medical records of 18 patients were analyzed. Nine of the 18 patients were classified as responders and the other nine as nonresponders.
The histopathologic diagnosis of the responders was nonspecific interstitial pneumonia (NSIP) in all cases, and that of the nonresponders was usual interstitial pneumonia (UIP) in all cases (p<0.001). The other significant differences between the two groups were female predominance (p<0.01), smoking history (p<0.001), severe grade of dyspnea (p<0.05), lymphocytosis in BAL fluid (23.8±16.3% vs 7.8±3.6%, p<0.05), and less honeycombing on chest HRCT findings (0% vs 9.2±2.3%, p<0.001). Conclusion: Our results suggest that patients with a histopathologic diagnosis of NSIP or lymphocytosis in BAL fluid are more likely to respond to steroid or immunosuppressive therapy. Clinical results in large numbers of IPF patients will be required to identify the independent variables.
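The responder definition above is a simple one-of-three rule, which can be encoded directly; the function and argument names below are hypothetical, not from the paper:

```python
def is_responder(dyspnea_grades_improved, fvc_change_pct, tlc_change_pct,
                 dlco_change_pct, hrct_extent_decreased):
    # a patient is a responder if at least one criterion holds
    return (
        dyspnea_grades_improved > 1                      # (1) dyspnea index
        or fvc_change_pct > 10 or tlc_change_pct > 10    # (2) lung volumes
        or dlco_change_pct > 20                          # (2) diffusing capacity
        or hrct_extent_decreased                         # (3) extent on HRCT
    )

print(is_responder(0, 12, 0, 0, False))  # True: FVC improved by more than 10%
print(is_responder(1, 5, 5, 10, False))  # False: no criterion exceeded
```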


Cosmetic Results of Conservative Treatment for Early Breast Cancer (조기유방암에서 유방보존수술 및 방사선치료후의 미용적 결과)

  • Kim Bo Kyoung;Shin Seong Soo;Kim Seong Deok;Ha Sung Whan;Noh Dong-Young
    • Radiation Oncology Journal
    • /
    • v.19 no.1
    • /
    • pp.21-26
    • /
    • 2001
  • Purpose: This study was performed to evaluate the cosmetic outcome of conservative treatment for early breast cancer and to analyze the factors influencing cosmetic outcome. Materials and Methods: From February 1992 through January 1997, 120 patients with early breast cancer were treated with conservative surgery and postoperative radiotherapy. The types of conservative surgery were quadrantectomy and axillary node dissection for 108 patients (90%) and lumpectomy or excisional biopsy for 10 patients (8.3%). Forty-six patients (38%) received adjuvant chemotherapy (CMF or CAF). Cosmetic result evaluation was carried out between 16 and 74 months (median, 33 months) after surgery. The cosmetic results were classified into four categories: excellent, good, fair, and poor. The appearance of the patients' breasts was also analyzed for symmetry using the difference in distance from the sternal notch to the right and left nipples. A logistic regression analysis was performed to identify independent variables influencing the cosmetic outcome. Results: The cosmetic score was excellent or good in 76% (91/120), fair in 19% (23/120), and poor in 5% (6/120) of the patients. Univariate analysis showed that tumor size (T1 versus T2) (p=0.04), axillary node status (N0 versus N1) (p=0.0002), extent of surgery (quadrantectomy versus lumpectomy or excisional biopsy) (p=0.02), axillary node irradiation (p=0.0005), and chemotherapy (p=0.0001) affected the cosmetic score. Multivariate analysis revealed that extent of surgery (p=0.04) and chemotherapy (p=0.0002) were significant factors. For breast symmetry, univariate analysis confirmed exactly the same factors as above. Multivariate analysis revealed that tumor size (p=0.003) and lymph node status (p=0.007) affected breast symmetry. Conclusion: Conservative surgery and postoperative radiotherapy resulted in an excellent or good cosmetic outcome in a large proportion of the patients.
Better cosmetic results were generally achieved in patients with smaller tumors, without axillary node metastasis, who were treated with less extensive surgery and without chemotherapy.
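The multivariate analysis here is a logistic regression of a binary cosmetic outcome on treatment factors. A minimal sketch, assuming made-up data (plain gradient ascent on the log-likelihood; the predictor coding and outcomes are illustrative only, not the study's data):

```python
import math

def fit_logistic(X, y, lr=0.1, iters=5000):
    # gradient ascent on the log-likelihood; each row of X starts with 1 (intercept)
    k = len(X[0])
    beta = [0.0] * k
    for _ in range(iters):
        grad = [0.0] * k
        for row, t in zip(X, y):
            p = 1 / (1 + math.exp(-sum(b * v for b, v in zip(beta, row))))
            for j in range(k):
                grad[j] += (t - p) * row[j]
        beta = [b + lr * g / len(X) for b, g in zip(beta, grad)]
    return beta

def prob(beta, row):
    # predicted probability of a good/excellent outcome for one patient
    return 1 / (1 + math.exp(-sum(b * v for b, v in zip(beta, row))))

# hypothetical coding: x1 = extensive surgery (0/1), x2 = chemotherapy (0/1)
# outcome 1 = excellent/good cosmetic score
X = [[1, 0, 0], [1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1], [1, 1, 1]]
y = [1, 1, 1, 1, 0, 0]
beta = fit_logistic(X, y)
print(round(prob(beta, [1, 0, 0]), 2), round(prob(beta, [1, 1, 1]), 2))
```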


Rough Set Analysis for Stock Market Timing (러프집합분석을 이용한 매매시점 결정)

  • Huh, Jin-Nyung;Kim, Kyoung-Jae;Han, In-Goo
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.3
    • /
    • pp.77-97
    • /
    • 2010
  • Market timing is an investment strategy used to obtain excess returns from the financial market. In general, detecting market timing means determining when to buy and sell to get an excess return from trading. In many market-timing systems, trading rules have been used as an engine to generate trading signals. On the other hand, some researchers have proposed rough set analysis as a proper tool for market timing because, by using its control function, it does not generate a trading signal when the pattern of the market is uncertain. The data for rough set analysis must be discretized from numeric values because rough sets accept only categorical data. Discretization searches for proper "cuts" for numeric data that determine intervals; all values that lie within an interval are transformed into the same value. In general, there are four methods for data discretization in rough set analysis: equal frequency scaling, expert's knowledge-based discretization, minimum entropy scaling, and naïve and Boolean reasoning-based discretization. Equal frequency scaling fixes a number of intervals, examines the histogram of each variable, and then determines cuts so that approximately the same number of samples fall into each interval. Expert's knowledge-based discretization determines cuts according to the knowledge of domain experts gathered through literature review or interviews. Minimum entropy scaling implements an algorithm based on recursively partitioning the value set of each variable so that a local measure of entropy is optimized. Naïve and Boolean reasoning-based discretization searches for categorical values by naïvely scaling the data, then finds the optimized discretization thresholds through Boolean reasoning.
Although rough set analysis is promising for market timing, there is little research on the impact of the various data discretization methods on trading performance under rough set analysis. In this study, we compare stock market timing models using rough set analysis with various data discretization methods. The research data used in this study are the KOSPI 200 from May 1996 to October 1998. The KOSPI 200 is the underlying index of the KOSPI 200 futures, the first derivative instrument in the Korean stock market. It is a market-value-weighted index consisting of 200 stocks selected by criteria on liquidity and their status in the corresponding industries, including manufacturing, construction, communication, electricity and gas, distribution and services, and finance. The total number of samples is 660 trading days. In addition, this study uses popular technical indicators as independent variables. The experimental results show that the most profitable method for the training sample is naïve and Boolean reasoning-based discretization, but expert's knowledge-based discretization is the most profitable method for the validation sample. In addition, expert's knowledge-based discretization produced robust performance for both the training and validation samples. We also compared rough set analysis with a decision tree, experimenting with C4.5 for comparison purposes. The results show that rough set analysis with expert's knowledge-based discretization produced more profitable rules than C4.5.
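Of the four methods, equal frequency scaling is the simplest to sketch: sort the observed values and place cuts so that roughly equal numbers of samples fall into each interval. A small illustration, assuming hypothetical indicator values:

```python
def equal_frequency_cuts(values, n_bins):
    # choose cut points so about len(values)/n_bins samples land in each interval
    s = sorted(values)
    n = len(s)
    cuts = []
    for i in range(1, n_bins):
        left, right = s[i * n // n_bins - 1], s[i * n // n_bins]
        cuts.append((left + right) / 2)  # midpoint between neighboring samples
    return cuts

def discretize(value, cuts):
    # interval index: how many cuts the value exceeds
    return sum(value > c for c in cuts)

# hypothetical technical-indicator readings
vals = [0.1, 0.4, 0.2, 0.9, 0.5, 0.7, 0.3, 0.8, 0.6, 1.0]
cuts = equal_frequency_cuts(vals, 2)
bins = [discretize(v, cuts) for v in vals]
print(cuts, bins)  # one cut near the median; five samples per interval
```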

A study on lead exposure indices of male workers exposed to lead less than 1 year in storage battery industries (축전지 제조업에서 입사 1년 미만 남자 사원들의 연 노출 지표치에 관한 연구)

  • HwangBo, Young;Kim, Yong-Bae;Lee, Gap-Soo;Lee, Sung-Soo;Ahn, Kyu-Dong;Lee, Byung-Kook;Kim, Joung-Soon
    • Journal of Preventive Medicine and Public Health
    • /
    • v.29 no.4 s.55
    • /
    • pp.747-764
    • /
    • 1996
  • This study was intended to obtain useful information for the health management of lead-exposed workers and to determine the biological monitoring interval in the early period of exposure, by measuring lead exposure indices and work duration in all male workers (n=433) exposed for less than 1 year in 6 storage battery industries and in 49 unexposed males as controls. The variables examined were blood lead concentration (PBB), zinc-protoporphyrin concentration (ZPP), hemoglobin (HB), and personal history; lead concentration in the air (PBA) of the workplace was also measured. According to the geometric mean of lead concentration in the air, the factories were grouped into three categories: A, below 0.05 mg/m³; B, between 0.05 and 0.10 mg/m³; and C, above 0.10 mg/m³. The results obtained were as follows: 1. The mean blood lead concentration (PBB), ZPP concentration, and hemoglobin (HB) in all male workers exposed to lead for less than 1 year in storage battery industries were 29.5±12.4 ㎍/100ml, 52.9±30.0 ㎍/100ml, and 15.2±1.1 gm/100ml. 2. The mean PBB, ZPP, and HB in the control group were 5.8±1.6 ㎍/100ml, 30.8±12.7 ㎍/100ml, and 15.7±1.6 gm/100ml, the lead indices being much lower than those of the exposed study group. 3. The mean blood lead and ZPP concentrations in group A were 21.9±7.6 ㎍/100ml and 41.4±12.6 ㎍/100ml; those in group B were 29.8±11.6 ㎍/100ml and 52.6±27.9 ㎍/100ml; those in group C were 37.2±13.5 ㎍/100ml and 66.3±40.7 ㎍/100ml. Significant differences were found among the three factory groups (p<0.01), classified by the geometric mean of lead concentration in the air, with group A being the lowest. 4.
The mean blood lead concentration of workers by work duration (months) was as follows: when the work duration was 1~2 months, it was 24.1±12.4 ㎍/100ml; when the work duration was 3~4 months, it was 29.2±13.4 ㎍/100ml; and it was 28.9~34.5 ㎍/100ml for workers with longer work durations. Significant differences were found among the work duration groups (p<0.05). 5. The mean ZPP concentration of workers by work duration (months) was as follows: when the work duration was 1~2 months, it was 40.6±18.0 ㎍/100ml; when the work duration was 3~4 months, it was 53.4±38.4 ㎍/100ml; and it was 51.5~60.4 ㎍/100ml for workers with longer work durations. Significant differences were found among the work duration groups (p<0.05). 6. Among the total workers (433 persons), 18.2% had a PBB concentration higher than 40 ㎍/100ml and 7.1% had a ZPP concentration higher than 100 ㎍/100ml; in workers of factory group A, these proportions were 0.9% and 0.0%; in group B, 17.1% and 6.9%; in group C, 39.4% and 15.4%. 7. The proportions of the total workers (433 persons) with a blood lead concentration lower than 25 ㎍/100ml and a ZPP concentration lower than 50 ㎍/100ml were 39.7% and 61.9%, respectively; in workers of factory group A, these were 65.5% and 82.3%; in group B, 36.1% and 60.2%; in group C, 19.2% and 43.3%. 8. Blood lead concentration (r=0.177, p<0.01), ZPP concentration (r=0.135, p<0.01), log ZPP (r=0.170, p<0.01), and hemoglobin (r=0.096, p<0.05) showed statistically significant correlations with work duration (months). ZPP concentration (r=0.612, p<0.01) and log ZPP (r=0.614, p<0.01) showed statistically significant correlations with blood lead concentration. 9.
The slope of the simple linear regression between work duration (months, the independent variable) and blood lead concentration (the dependent variable) in workplaces with a low air concentration of lead was less steep than that in workplaces with poor working conditions and a high geometric mean air concentration of lead. The study results indicate that new employees should be provided with biological monitoring, including a blood lead concentration test, and with education about personal hygiene and workplace management within 3~4 months of employment.
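The slope contrast in point 9 follows from the closed-form simple regression formula, slope = Sxy/Sxx. A minimal sketch with hypothetical numbers (only the contrast between a clean and a high-exposure workplace is the point, not the values):

```python
def linreg(x, y):
    # closed-form simple linear regression: returns (slope, intercept)
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

months = [1, 2, 3, 4, 5, 6]              # work duration (independent variable)
pbb_low_air = [20, 21, 22, 23, 24, 25]   # hypothetical PBB, low air lead
pbb_high_air = [20, 24, 28, 32, 36, 40]  # hypothetical PBB, high air lead
s_low, _ = linreg(months, pbb_low_air)
s_high, _ = linreg(months, pbb_high_air)
print(s_low, s_high)  # 1.0 4.0 -- steeper lead accumulation under high exposure
```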
