• Title/Summary/Keyword: Time-dependent


Role of Oxygen Free Radical in the Expression of Interleukin-8 and Interleukin-1β Gene in Mononuclear Phagocytic Cells (내독소에 의한 말초혈액 단핵구의 IL-8 및 IL-1β 유전자 발현에서 산소기 역할에 관한 연구)

  • Kang, Min-Jong;Kim, Jae-Yeol;Park, Jae-Seok;Lee, Seung-Joon;Yoo, Chul-Gyu;Kim, Young-Whan;Han, Sung-Koo;Shim, Young-Soo
    • Tuberculosis and Respiratory Diseases
    • /
    • v.42 no.6
    • /
    • pp.862-870
    • /
    • 1995
  • Background: Oxygen free radicals have generally been considered cytotoxic agents. Recent results, however, suggest that small, nontoxic amounts of these radicals may play a role in intracellular signal transduction pathways, and many efforts have been made to reveal their role as second messengers. Oxygen radicals are released by various cell types in response to extracellular stimuli including LPS, TNF, IL-1, and phorbol esters, all of which translocate the transcription factor NF-κB from cytoplasm to nucleus by releasing an inhibitory protein subunit, IκB. Activation of NF-κB is mimicked by exposure to mild oxidant stress and inhibited by agents that remove oxygen radicals, which suggests that the cytoplasmic form of the inducible transcription factor NF-κB might be a physiologically important target for oxygen radicals. At the same time, it is well known that LPS induces the release of oxygen radicals in neutrophils along with the activation of NF-κB. From these facts, we can assume that the expression of the IL-8 and IL-1β genes upon LPS stimulation may occur through the activation of NF-κB, mediated by the release of IκB in response to increasing amounts of oxygen radicals. However, definitive evidence is lacking on the role of oxygen free radicals in the expression of the IL-8 and IL-1β genes in mononuclear phagocytic cells. We conducted this study to determine whether oxygen radicals play a role in the expression of the IL-8 and IL-1β genes in mononuclear phagocytic cells. Method: Human peripheral blood monocytes were isolated from healthy volunteers. The time and dose dependence of H₂O₂-induced IL-8 and IL-1β mRNA expression was observed by Northern blot analysis. To evaluate the role of oxygen radicals in LPS-stimulated IL-8 and IL-1β mRNA expression, cells were pretreated with various antioxidants including PDTC, TMTU, NAC, ME, and desferrioxamine, and Northern blot analysis for IL-8 and IL-1β mRNA was performed. Results: In PBMC, no dose- or time-dependent expression of IL-8 and IL-1β mRNA by exogenous H₂O₂ was observed. However, various antioxidants suppressed LPS-induced IL-8 and IL-1β mRNA expression in PBMC, and the suppression was most prominent with TMTU pretreatment. Conclusion: Oxygen free radicals may play some role in the expression of IL-8 and IL-1β mRNA in PBMC, but the responsible radical might not be H₂O₂.


Calculation of Unit Hydrograph from Discharge Curve, Determination of Sluice Dimension and Tidal Computation for Determination of the Closure Curve (단위유량도와 비수갑문 단면 및 방조제 축조곡선 결정을 위한 조속계산)

  • 최귀열
    • Magazine of the Korean Society of Agricultural Engineers
    • /
    • v.7 no.1
    • /
    • pp.861-876
    • /
    • 1965
  • During my stay in the Netherlands I studied the following subjects, primarily in relation to the Mokpo Yong-san project, which had been studied by NEDECO for a feasibility report. 1. Unit hydrograph at Naju. There are many ways to derive a unit hydrograph; here I explain how to derive one from an actual discharge curve at Naju. A discharge curve made from one rain storm depends on the rainfall intensity per hour. After finding the hydrograph at two-hour intervals, the two-hour unit hydrograph is obtained by dividing each ordinate of the two-hour hydrograph by the rainfall intensity. I used one storm from June 24 to June 26, 1963, with a rainfall intensity averaging 9.4 mm per hour for 12 hours. If several rain gauge stations had already been established in the catchment area above Naju prior to this storm, I could have gathered accurate data on rainfall intensity throughout the catchment area. As it was, I used the automatic rain gauge record of the Mokpo meteorological station to determine the rainfall intensity. In order to develop the unit hydrograph at Naju, I subtracted the base flow from the total runoff, and I tried to keep the difference between the calculated and the measured discharge below 10%. The discharge period of a unit graph depends on the length of the catchment area. 2. Determination of sluice dimensions. According to the design principles presently used in our country, a one-day storm with a frequency of 20 years must be discharged in 8 hours. These design criteria are not adequate, and several dams have washed out in past years. The design of the spillway and sluice dimensions must be based on the maximum peak discharge flowing into the reservoir, to avoid crop and structural damage. The total flow into the reservoir is the sum of the flow described by the Mokpo hydrograph, the base flow from all the catchment areas, and the rainfall on the reservoir area. To calculate the amount of water discharged through the sluice per half hour, the average head during that interval must be known. This can be calculated from the known water level outside the sluice (determined by the tide) and from an estimated water level inside the reservoir at the end of each time interval. The total amount of water discharged through the sluice can then be calculated from this average head, the time interval, and the cross-sectional area of the sluice. From the inflow into the reservoir and the outflow through the sluice gates, I calculated the change in the volume of water stored in the reservoir at half-hour intervals, and from the stored volume and the known storage capacity of the reservoir I calculated the water level in the reservoir. The calculated water level must agree with the estimated water level. A mean tide is adequate for determining the sluice dimensions, because the spring tide is the worst case and the neap tide the best case for the result of the calculation. 3. Tidal computation for determination of the closure curve. During the construction of a dam, whether by building up a succession of horizontal layers or by building in from both sides, the velocity of the water flowing through the closing gap will increase because of the gradual decrease in the cross-sectional area of the gap. I calculated the velocities in the closing gap during flood and ebb for the first-mentioned method of construction until the cross-sectional area had been reduced to about 25% of the original area, the change in tidal movement within the reservoir being negligible up to that point; the increase of the velocity is more or less hyperbolic. During the closing of the last 25% of the gap, less water can flow out of the reservoir, causing a rise of the mean water level of the reservoir. The difference in hydraulic head is then no longer negligible and must be taken into account. When, during the course of construction, the submerged weir becomes a free weir, critical flow occurs. The critical flow is the point, during either ebb or flood, at which the velocity reaches a maximum; when the dam is raised further, the velocity decreases because of the decrease in the height of the water above the weir. The currents and velocities for a stage in the closure of the final gap are calculated in the following manner. Using an average tide with a negligible daily inequality, I estimated the water level on the upstream side of the dam (inner water level). I determined the current through the gap for each hour by multiplying the storage area by the increment of the rise in water level. The velocity at a given moment can be determined from the calculated current in m³/sec and the cross-sectional area at that moment. At the same time, the velocity can be calculated from the difference between the inner water level and the tidal level (outer water level) with the formula h = V²/2g, and this must equal the velocity determined from the current. If there is a difference between the two velocities, a new estimate of the inner water level must be made and the entire procedure repeated. When the higher water level is equal to or more than 2/3 of the difference between the lower water level and the crest of the dam, we speak of a "free weir": the flow over the weir then depends on the higher water level and not on the difference between the high and low water levels. When the weir is "submerged", that is, when the higher water level is less than 2/3 of that difference, the difference between the high and low levels is decisive. The free weir normally occurs first during ebb, owing to the fact that the mean level in the estuary is higher than the mean level of the tide. In building dams with barges, the maximum velocity in the closing gap may not exceed 3 m/sec. As the maximum velocities here are higher than this limit, other construction methods must be used in closing the gap: dumping by dump-cars from each side, or using a cable way.
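To make the unit-hydrograph procedure above concrete, here is a minimal sketch of the ordinate-division step and the h = V²/2g gap-velocity relation. All numeric inputs except the 9.4 mm/h storm intensity are invented for illustration; this is an interpretation of the described method, not the author's original computation.

```python
import math

import numpy as np

# Observed discharge at 2-hour steps (m^3/s); made-up values for demonstration.
observed_q = np.array([30, 55, 140, 260, 210, 150, 100, 65, 45, 35], dtype=float)
base_flow = 30.0  # m^3/s, assumed constant here

rain_intensity = 9.4   # mm/h, the June 1963 storm average cited in the abstract
block_hours = 2.0      # ordinate spacing of the hydrograph
effective_rain_cm = rain_intensity * block_hours / 10.0  # rainfall depth per 2-h block, cm

# Subtract base flow, then divide each ordinate by the effective rainfall depth.
direct_runoff = np.clip(observed_q - base_flow, 0.0, None)
unit_hydrograph = direct_runoff / effective_rain_cm  # m^3/s per cm of effective rain
print(unit_hydrograph.round(1))

# Velocity in the closing gap from the head difference h between reservoir and
# tide, using the h = V^2 / (2g) relation quoted above (h is hypothetical).
h = 0.8  # m
print(f"gap velocity: {math.sqrt(2 * 9.81 * h):.2f} m/s")
```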


Analysis of Greenhouse Thermal Environment by Model Simulation (시뮬레이션 모형에 의한 온실의 열환경 분석)

  • 서원명;윤용철
    • Journal of Bio-Environment Control
    • /
    • v.5 no.2
    • /
    • pp.215-235
    • /
    • 1996
  • Thermal analysis by mathematical model simulation makes it possible to reasonably predict the heating and/or cooling requirements of greenhouses located in various geographical and climatic environments. Another advantage of the model simulation technique is that it enables the selection of an appropriate heating system, the setting of an energy utilization strategy, the scheduling of seasonal crop patterns, and the determination of new greenhouse ranges. In this study, the control pattern for greenhouse microclimate is categorized as cooling and heating. A dynamic model was adopted to simulate heating requirements and/or energy conservation effectiveness, such as energy saving by a night-time thermal curtain, estimation of Heating Degree-Hours (HDH), and long-term prediction of greenhouse thermal behavior. On the other hand, the cooling effects of ventilation, shading, and a pad & fan system were partly analyzed by a static model. Experimental work with a small model greenhouse of 1.2 m × 2.4 m showed that cooling the greenhouse by spraying cold water directly on the greenhouse cover surface or by recirculating cold water through heat exchangers would be effective for summer cooling. The mathematical model developed for greenhouse simulation is highly applicable because it can reflect various climatic factors such as temperature, humidity, beam and diffuse solar radiation, and wind velocity. The model was closely verified against weather data obtained through a long-period greenhouse experiment. Most of the material relating to greenhouse heating or cooling components was obtained from the greenhouse model simulated mathematically using typical-year (1987) data of Jinju, Gyeongnam, while some of the material relating to greenhouse cooling was obtained from model experiments, including analysis of the cooling effect of water sprayed directly on the greenhouse roof surface. The results are summarized as follows: 1. The heating requirements of the model greenhouse were highly related to the minimum temperature set for the greenhouse. The night-time setting temperature is much more influential on heating energy requirements than the day-time setting; it is therefore highly recommended that the night-time setting temperature be carefully determined and controlled. 2. HDH data obtained by the conventional method are estimated on the basis of a considerably long-term average weather temperature together with the standard base temperature (usually 18.3°C). Such data can merely be used as relative comparison criteria for heating load, but are not applicable to the calculation of greenhouse heating requirements because of the limited consideration of climatic factors and the inappropriate base temperature. Comparing the HDH data with the simulation results shows that a heating system designed from HDH data will probably overshoot the actual heating requirement. 3. The energy saving effect of the night-time thermal curtain, as well as the estimated heating requirement, was found to be sensitively related to weather conditions: the thermal curtain adopted for simulation showed high effectiveness in energy saving, amounting to more than 50% of the annual heating requirement. 4. Ventilation performance during warm seasons is mainly influenced by the air exchange rate, even though there are some variations depending on greenhouse structural differences and on weather and cropping conditions. For air exchange rates above 1 volume per minute, the reduction in temperature rise for both types of greenhouse considered becomes modest with additional increases in ventilation capacity; the desirable ventilation capacity is therefore assumed to be 1 air change per minute, which is the recommended ventilation rate for common greenhouses. 5. In a fully cropped, glass-covered greenhouse, under clear weather at 50% RH and a continuous 1 air change per minute, the temperature drop in the 50% shaded greenhouse and in the pad & fan greenhouse was 2.6°C and 6.1°C respectively. The temperature in the control greenhouse under continuous air change was then 36.6°C, which was 5.3°C above ambient temperature. As a result, the greenhouse temperature could be maintained 3°C below ambient temperature. At 80% RH, however, it was impossible to drop the greenhouse temperature below ambient, because the possible temperature reduction by the pad & fan system was then no more than 2.4°C. 6. Assuming the greenhouse is cooled during the three hot summer months only when its temperature rises above 27°C, the relationship between the RH of ambient air and the greenhouse temperature drop (ΔT) was formulated as ΔT = -0.077 RH + 7.7. 7. Time-dependent cooling effects of each of, and combinations of, ventilation, 50% shading, and a pad & fan system of 80% efficiency were predicted continuously over one typical summer day. When the greenhouse was cooled only by 1 air change per minute, the greenhouse air temperature was 5°C above the outdoor temperature. Neither method alone could drop the greenhouse air temperature below the outdoor temperature under fully cropped conditions, but when both systems were operated together, the greenhouse air temperature could be controlled to about 2.0-2.3°C below ambient. 8. When cool water of 6.5-8.5°C was sprayed on the greenhouse roof surface at a flow rate of 1.3 liter/min per unit greenhouse floor area, the greenhouse air temperature could be dropped to 16.5-18.0°C, about 10°C below the ambient temperature of 26.5-28.0°C at the time. The most important factor in cooling greenhouse air effectively with a water spray may be securing a plentiful source of cool water, such as ground water itself or cold water produced by a heat pump. Future work will focus not only on analyzing the feasibility of heat pump operation but also on finding the relationships between greenhouse air temperature (T_g), spraying water temperature (T_w), water flow rate (Q), and ambient temperature (T_o).
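As a quick illustration of two quantities used above, the sketch below computes Heating Degree-Hours against the 18.3°C base temperature and evaluates the fitted relation ΔT = -0.077·RH + 7.7. The hourly temperatures are made up; only the formula and the base temperature come from the abstract.

```python
import numpy as np

# Hourly outdoor temperatures for one example day (made-up values, deg C).
hourly_temp = np.array([12, 11, 10, 10, 9, 9, 10, 12, 15, 18, 20, 22,
                        23, 23, 22, 21, 19, 17, 16, 15, 14, 13, 13, 12])

BASE_TEMP = 18.3  # standard base temperature cited in the abstract (deg C)

# Heating Degree-Hours: sum of positive deficits below the base temperature.
hdh = np.clip(BASE_TEMP - hourly_temp, 0.0, None).sum()
print(f"HDH for the day: {hdh:.1f} deg-C hours")

# Fitted cooling relation from result 6: temperature drop vs ambient RH (%).
def greenhouse_temp_drop(rh_percent: float) -> float:
    return -0.077 * rh_percent + 7.7

print(f"Predicted drop at 50% RH: {greenhouse_temp_drop(50):.2f} deg C")
print(f"Predicted drop at 80% RH: {greenhouse_temp_drop(80):.2f} deg C")
```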


The Effect of Common Features on Consumer Preference for a No-Choice Option: The Moderating Role of Regulatory Focus (재몰유선택적정황하공동특성대우고객희호적영향(在没有选择的情况下共同特性对于顾客喜好的影响): 조절초점적조절작용(调节焦点的调节作用))

  • Park, Jong-Chul;Kim, Kyung-Jin
    • Journal of Global Scholars of Marketing Science
    • /
    • v.20 no.1
    • /
    • pp.89-97
    • /
    • 2010
  • This study investigates the effects of common features on the no-choice option with respect to regulatory focus theory. The primary interest is in three factors and their interrelationship: common features, the no-choice option, and regulatory focus. Prior studies have compiled a vast body of research in these areas. First, the "common features effect" has been observed by many noted marketing researchers. Tversky (1972) proposed the seminal theory, the EBA (elimination by aspects) model, according to which consumers are prone to focus only on unique features during comparison processing, dismissing any common features as redundant information. Recently, however, more provocative ideas have challenged the EBA model by asserting that common features really do affect consumer judgment. Chernev (1997) first reported that adding common features mitigates the choice gap because of the increased perception of similarity among alternatives. Later, however, Chernev (2001) published a critical follow-up against his prior perspective, proposing that common features may be a cognitive load to consumers, who may then prefer heuristic over systematic processing. This brings one question to the forefront: do common features affect consumer choice, and if so, what are the concrete effects? This study tries to answer the question with respect to the no-choice option and regulatory focus. Second, some researchers hold that the no-choice option is another best alternative for consumers, who are likely to avoid having to choose in the context of knotty trade-off settings or mental conflicts. Hope for the future may also increase the no-choice option, in the context of optimism or the expectation that a more satisfactory alternative will appear later. Other issues reported in this domain are time pressure, consumer confidence, and the number of alternatives (Dhar and Nowlis 1999; Lin and Wu 2005; Zakay and Tsal 1993). This study casts the no-choice option in yet another perspective: the interactive effects between common features and regulatory focus. Third, regulatory focus theory is a very popular theme in recent marketing research. It suggests that consumers have two opposing focal goals: promotion vs. prevention. A promotion focus deals with the concepts of hope, inspiration, achievement, or gain, whereas a prevention focus involves duty, responsibility, safety, or loss aversion. Thus, while consumers with a promotion focus tend to take risks for gain, the same does not hold true for a prevention focus. Regulatory focus theory predicts consumers' emotions, creativity, attitudes, memory, performance, and judgment, as documented in a vast field of marketing and psychology articles. Exploring consumer choice and common features is a somewhat creative viewpoint in the area of regulatory focus. These reviews inspired this study of the possible interaction between regulatory focus and common features with a no-choice option. Specifically, adding common features rather than omitting them may increase the no-choice ratio in the choice setting for prevention-focused consumers, but the reverse may hold for promotion-focused consumers. The reasoning is that when prevention-focused consumers encounter common features, they may perceive higher similarity among the alternatives, and this conflict among similar options would increase the no-choice ratio. Promotion-focused consumers, however, may perceive common features as a cue for confirmation bias; their confirmatory processing would make their prior preference more robust, so the no-choice ratio may shrink. This logic was verified in two experiments. The first used a 2×2 between-subjects design (presence of common features × regulatory focus) with digital cameras as the stimulus, a product very familiar to the young subjects. The regulatory focus variable was median-split from an eleven-item measure. Common features included zoom, weight, memory, and battery, whereas the other two attributes (pixel count and price) were unique features. Results supported our hypothesis that adding common features enhanced the no-choice ratio only for prevention-focused consumers, not for those with a promotion focus. These results confirm the hypothesized interactive effect between regulatory focus and common features. Prior research had suggested that including common features has an effect on consumer choice, but this study shows that common features affect choice differently by consumer segment. The second experiment replicated the results of the first and was identical to it except for two changes: a priming manipulation and a different stimulus. For the promotion focus condition, subjects wrote an essay using words such as profit, inspiration, pleasure, achievement, development, hedonic, change, and pursuit; for prevention, they used words such as persistence, safety, protection, aversion, loss, responsibility, and stability. The room-for-rent stimulus had common features (sunshine, facilities, ventilation) and unique features (travel time and building condition). These attributes implied various levels and valences for replication of the first experiment. Our hypothesis was supported again, and the interaction between regulatory focus and common features was significant. These studies thus showed dual effects of common features on consumer preference for a no-choice option: adding common features may enhance or mitigate no-choice, contradictory as that may sound. Under a prevention focus, adding common features is likely to enhance the no-choice ratio because of increased mental conflict; under a promotion focus, it tends to shrink the ratio, perhaps because of confirmation bias. The research has practical and theoretical implications for marketers, who may need to consider common features carefully in display contexts according to consumer segment (i.e., promotion vs. prevention focus). Theoretically, the results identify a meaningful moderator between common features and no-choice, in that the effect on the no-choice option partly depends on regulatory focus; this variable corresponds to both a chronic and a situational perspective in our hypothesis domain. Finally, in light of some shortcomings of the research, such as overlooked attribute importance, the low no-choice ratio, and external validity issues, we hope it prompts future studies to explore the little-known world of the no-choice option.
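The hypothesized moderation can be expressed as an interaction term in a logistic model of the no-choice outcome. The sketch below uses made-up cell counts and statsmodels, which is not software named by the authors; it only illustrates the analysis pattern.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical 2x2 cell counts (common features present?, prevention focus?):
# (no-choice count, cell size). All numbers are invented for demonstration.
cells = {
    (0, 0): (12, 50), (1, 0): (8, 50),
    (0, 1): (11, 50), (1, 1): (23, 50),
}
rows = []
for (common, prevention), (k, n) in cells.items():
    rows += [{"common": common, "prevention": prevention, "no_choice": 1}] * k
    rows += [{"common": common, "prevention": prevention, "no_choice": 0}] * (n - k)
df = pd.DataFrame(rows)

# A significant common:prevention coefficient corresponds to the moderation
# pattern described in the abstract.
model = smf.logit("no_choice ~ common * prevention", data=df).fit(disp=False)
print(model.summary())
```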

An Empirical Study on the Determinants of Supply Chain Management Systems Success from Vendor's Perspective (참여자관점에서 공급사슬관리 시스템의 성공에 영향을 미치는 요인에 관한 실증연구)

  • Kang, Sung-Bae;Moon, Tae-Soo;Chung, Yoon
    • Asia pacific journal of information systems
    • /
    • v.20 no.3
    • /
    • pp.139-166
    • /
    • 2010
  • Supply chain management (SCM) systems have emerged as strong managerial tools for manufacturing firms seeking to enhance competitive strength. Despite large investments in SCM systems, many companies are not fully realizing their promised benefits. A review of the literature on adoption, implementation, and success factors of IOS (inter-organization systems) and EDI (electronic data interchange) systems shows that this issue has been examined from multiple theoretical perspectives, and many researchers have attempted to identify the factors which influence the success of system implementation. However, the existing studies have two drawbacks in revealing the determinants of implementation success. First, previous research raises questions as to the appropriateness of the research subjects selected. Most SCM systems operate in the form of private industrial networks, where the participants consist of two distinct groups: focus companies and vendors. The focus companies are the primary actors in developing and operating the systems, while vendors are passive participants connected to the system in order to supply raw materials and parts to the focus companies. Under these circumstances, there are three ways of selecting research subjects: focus companies only, vendors only, or the two parties grouped together. It is hard to find research that uses focus companies exclusively as subjects, probably due to the insufficient sample size for statistical analysis, and most research has been conducted using data collected from both groups. We argue that SCM success factors cannot be correctly identified in this case. The focus companies and the vendors are in different positions in many areas relevant to system implementation: firm size, managerial resources, bargaining power, organizational maturity, and so on. There is no obvious reason to believe that the success factors of the two groups are identical. Grouping the two also raises questions about measuring system success. The benefits of the systems may not be distributed evenly between the two groups; one group's benefits might be realized at the expense of the other, considering that vendors participating in SCM systems are under continuous pressure from the focus companies with respect to prices, quality, and delivery time. Therefore, by combining the system outcomes of both groups we cannot correctly measure the benefits obtained by each group. Second, the measures of system success adopted in previous research fall short in measuring SCM success. User satisfaction, system utilization, and user attitudes toward the systems are the most commonly used success measures in existing studies. These measures were developed as proxy variables in studies of decision support systems (DSS), where the contribution of the systems to organizational performance is very difficult to measure. Unlike DSS, SCM systems have more specific goals, such as cost saving, inventory reduction, quality improvement, faster throughput, and higher customer service, so more specific measures can be developed instead of proxy variables in order to measure system benefits correctly. The purpose of this study is to find the determinants of SCM systems success from the perspective of vendor companies. In developing the research model, we focused on selecting success factors appropriate for vendors through a review of past research and on developing more accurate success measures. The variables are classified into technological, organizational, and environmental factors on the basis of the TOE (Technology-Organization-Environment) framework. The model consists of three independent variables (competition intensity, top management support, and information system maturity), one mediating variable (collaboration), one moderating variable (government support), and a dependent variable (system success). The system success measures were developed to reflect the operational benefits of SCM systems: improvement in planning and analysis capabilities, faster throughput, cost reduction, task integration, and improved product and customer service. The model was validated using survey data collected from 122 vendors participating in SCM systems in Korea. Mediation was tested with hierarchical regression analysis on collaboration, and the moderating effect of government support was examined with moderated multiple regression. The results show that information system maturity and top management support are the most important determinants of SCM system success. Supply chain technologies that standardize data formats and enhance information sharing may be adopted by the supply chain leader organization, because of the influence of the focal company in private industrial networks, in order to streamline transactions and improve inter-organizational communication. In particular, the need to develop and sustain information system maturity provides the focus and purpose to successfully overcome information system obstacles and resistance to innovation diffusion within the supply chain network organization. The support of top management helps focus efforts toward the realization of inter-organizational benefits and lends credibility to the functional managers responsible for implementation; the active involvement, vision, and direction of high-level executives provide the impetus needed to sustain SCM implementation. The quality of collaboration relationships is also positively related to the outcome variable, and collaboration was found to mediate between the influencing factors and implementation success. Higher levels of inter-organizational collaboration behaviors, such as shared planning and flexibility in coordinating activities, were strongly linked to the vendors' trust in the supply chain network. Government support moderates the effect of IS maturity, competition intensity, and top management support on collaboration and on SCM implementation success. In general, vendor companies face substantially greater risks in SCM implementation than larger companies do, because of severe constraints on financial and human resources and limited education on SCM systems; besides resources, vendors generally lack computer experience and sufficient internal SCM expertise. For these reasons, government support may establish requirements for firms doing business with the government or provide incentives to adopt and implement SCM practices. Government support yields significant improvements in SCM implementation success when IS maturity, competition intensity, top management support, and collaboration are low. The environmental characteristic of competition intensity has no direct effect on SCM system success from the vendor's perspective, but vendors facing above-average competition intensity will have a greater need for changing technology. This suggests that companies trying to implement SCM systems should set up compatible supply chain networks and high-quality collaboration relationships for implementation and performance.
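A minimal sketch of the two analyses named above, hierarchical regression for mediation and moderated multiple regression, on simulated data with placeholder variable names (not the authors' instrument or data):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 122  # sample size reported in the abstract; the data below are simulated
df = pd.DataFrame({
    "is_maturity": rng.normal(size=n),
    "top_mgmt": rng.normal(size=n),
    "competition": rng.normal(size=n),
    "gov_support": rng.normal(size=n),
})
df["collaboration"] = 0.5 * df.is_maturity + 0.4 * df.top_mgmt + rng.normal(scale=0.8, size=n)
df["success"] = 0.6 * df.collaboration + 0.2 * df.top_mgmt + rng.normal(scale=0.8, size=n)

# Hierarchical regressions for mediation: the antecedents' effect on success
# should weaken once collaboration enters the model.
direct = smf.ols("success ~ is_maturity + top_mgmt + competition", data=df).fit()
mediated = smf.ols("success ~ is_maturity + top_mgmt + competition + collaboration", data=df).fit()

# Moderated multiple regression: interaction term with government support.
moderated = smf.ols("success ~ collaboration * gov_support", data=df).fit()

print(direct.params, mediated.params, moderated.params, sep="\n\n")
```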

The Effect of Attributes of Innovation and Perceived Risk on Product Attitudes and Intention to Adopt Smart Wear (스마트 의류의 혁신속성과 지각된 위험이 제품 태도 및 수용의도에 미치는 영향)

  • Ko, Eun-Ju;Sung, Hee-Won;Yoon, Hye-Rim
    • Journal of Global Scholars of Marketing Science
    • /
    • v.18 no.2
    • /
    • pp.89-111
    • /
    • 2008
  • Due to the development of digital technology, studies regarding smart wear integrated into daily life have rapidly increased. However, consumer research on perceptions of and attitudes toward smart clothing could hardly be found. The purpose of this study was to identify the innovative characteristics and perceived risks of smart clothing and to analyze the influence of these factors on product attitudes and intention to adopt. Specifically, five hypotheses were established. H1: Perceived attributes of smart clothing, except for complexity, would be positively related to product attitude or purchase intention, while complexity would be negatively related. H2: Product attitude would be positively related to purchase intention. H3: Product attitude would mediate between perceived attributes and purchase intention. H4: Perceived risks of smart clothing would be negatively related to the perceived attributes except for complexity, and positively related to complexity. H5: Product attitude would mediate between perceived risks and purchase intention. A self-administered questionnaire was developed based on previous studies. After a pretest, the data were collected during September 2006 from university students in Korea, who are relatively sensitive to innovative products. A total of 300 usable questionnaires were analyzed with the SPSS 13.0 program. About 60.3% of respondents were male, with a mean age of 21.3 years. About 59.3% reported that they were aware of smart clothing, but only 9 respondents had purchased it. The means of attitude toward smart clothing and purchase intention were 2.96 (SD=.56) and 2.63 (SD=.65), respectively. Factor analysis using principal components with varimax rotation was conducted to identify perceived attribute and perceived risk dimensions. Perceived attributes of smart wear were categorized into relative advantage (including compatibility), observability (including trialability), and complexity. Perceived risks were identified as physical/performance risk, social-psychological risk, time-loss risk, and economic risk. Regression analysis was conducted to test the five hypotheses. Relative advantage and observability were significant predictors of product attitude (adj R²=.223) and purchase intention (adj R²=.221). Complexity showed a negative influence on product attitude. Product attitude was significantly related to purchase intention (adj R²=.692) and had a partial mediating effect between perceived attributes and purchase intention (adj R²=.698). Therefore, hypotheses one to three were accepted. To test hypothesis four, the four dimensions of perceived risk and demographic variables (age, gender, monthly household income, awareness of smart clothing, and purchase experience) were entered as independent variables in the regression models. Social-psychological risk, economic risk, and gender (female) significantly predicted relative advantage (adj R²=.276). When perceived observability was the dependent variable, social-psychological risk, time-loss risk, physical/performance risk, and age (younger) were significant, in that order (adj R²=.144). However, physical/performance risk was positively related to observability: the more observable respondents found smart clothing, the higher they perceived the probability of physical harm or performance problems. Complexity was predicted by product awareness, social-psychological risk, economic risk, and purchase experience, in that order (adj R²=.114). Product awareness was negatively related to complexity, meaning that a high level of product awareness would reduce the perceived complexity of smart clothing. However, purchase experience was positively related to complexity; it appears that consumers perceive a high level of complexity when actually using smart clothing in real life. Risk variables were positively related to complexity; that is, to decrease complexity it is also necessary to minimize anxiety about social-psychological harm or loss of money. Thus, hypothesis 4 was partially accepted. Finally, in testing hypothesis 5, social-psychological risk and economic risk were significant predictors of product attitude (adj R²=.122) and purchase intention (adj R²=.099), respectively. When the attitude variable was included with the risk variables as independent variables in the regression model predicting purchase intention, only the attitude variable was significant (adj R²=.691). Attitude thus fully mediated between perceived risks and purchase intention, and hypothesis 5 was accepted. The findings provide guidelines for fashion and electronics businesses that aim to create and strengthen positive attitudes toward smart clothing. Marketers need to consider not only the functional features of smart clothing but also practical and aesthetic attributes, since appropriateness for social norms or self-image reduces the uncertainty of psychological or social risk, which increases the relative advantage of smart clothing; indeed, social-psychological risk was significantly associated with relative advantage. Economic risk is negatively associated with product attitude as well as purchase intention, suggesting that smart-wear developers have to reflect on the price ranges of potential adopters. It will be effective to utilize the findings associated with complexity when marketers in the US plan communication strategies.
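For readers unfamiliar with the extraction step mentioned above, the sketch below runs principal components on standardized item responses and applies a hand-rolled varimax rotation. The data are random placeholders, not the survey responses; three factors are retained to mirror the attribute solution reported.

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Varimax rotation of a factor-loading matrix (standard algorithm)."""
    p, k = loadings.shape
    R = np.eye(k)
    var = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L**3 - (gamma / p) * L @ np.diag((L**2).sum(axis=0)))
        )
        R = u @ vt
        new_var = s.sum()
        if new_var < var * (1 + tol):
            break
        var = new_var
    return loadings @ R

# Random placeholder responses standing in for 300 questionnaires on 8 items.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 8))
Xz = (X - X.mean(axis=0)) / X.std(axis=0)

# Principal components of the correlation matrix, retaining 3 factors.
corr = np.corrcoef(Xz, rowvar=False)
eigval, eigvec = np.linalg.eigh(corr)
order = np.argsort(eigval)[::-1][:3]
loadings = eigvec[:, order] * np.sqrt(eigval[order])

print(varimax(loadings).round(2))
```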


Differential Effects of Recovery Efforts on Products Attitudes (제품태도에 대한 회복노력의 차별적 효과)

  • Kim, Cheon-Gil;Choi, Jung-Mi
    • Journal of Global Scholars of Marketing Science
    • /
    • v.18 no.1
    • /
    • pp.33-58
    • /
    • 2008
  • Previous research has presupposed that the evaluation by a consumer who received some recovery after experiencing product failure should be better than that by a consumer who received none. The major purposes of this article are to examine the impact of product defect failures, rather than service failures, and to explore the effects of recovery on post-recovery product attitudes. First, the article deals with the occurrence of severe and unsevere failures and the corresponding recovery for tangible products rather than intangible services. Contrary to intangible services, purchase and usage are separable for tangible products. This difference makes it clear that executing a recovery strategy for tangible products is not plausible right after consumers discover the failure. Consumers may think about the background and causes of the unpleasant event during the time gap between product failure and recovery, and this deliberation may dilute the positive effects of recovery efforts. The recovery strategies provided to consumers experiencing product failures can be classified into three types. A recovery strategy can provide consumers with a new product replacing the old defective one, a complimentary product for free, a discount at the time of the failure incident, or a coupon that can be used on the next visit; this strategy is defined as "a rewarding effort." Alternatively, a product failure may arise in exchange for a benefit: the provider can then explain in detail that the defect is hard to avoid because it relates closely to a specific advantage of the product; this may be called "a strengthening effort." Another possible strategy is to recover negative attitudes toward one's own brand by giving prominence to the disadvantages of a competing brand rather than the advantages of one's own; this is reflected as "a weakening effort." This paper emphasizes that, in order to confirm its effectiveness, a recovery strategy should be compared with doing nothing in response to the product failure, so the three types of recovery efforts are discussed in comparison with the situation involving no recovery effort. The strengthening strategy claims that the product failure is closely related to another advantage and expects this two-sidedness to ease consumers' complaints. The weakening strategy emphasizes the non-aversiveness of the product failure, even if consumers choose another competing brand. The two strategies can be effective in restoring the original state by providing plausible motives to accept the condition of product failure or by informing consumers of non-responsibility in the failure case. However, they may be less effective than the rewarding strategy, since the rewarding strategy attends to consumers' rehabilitation needs. In particular, the relative effect of the strengthening versus the weakening effort may differ with the severity of the product failure. A consumer who perceives a highly severe failure is likely to attach importance to the property which caused the failure, implying that the strengthening effort would be less effective under high failure severity. Meanwhile, the failing property is not diagnostic information under low failure severity; consumers will not pay attention to non-diagnostic information and are unlikely to change their attitudes based on it. This implies that the strengthening effort would be more effective under low failure severity. A 2 (product failure severity: high or low) × 4 (recovery strategy: rewarding, strengthening, weakening, or doing nothing) between-subjects design was employed. The particular levels of product failure severity and the types of recovery strategies were determined after a series of expert interviews. The dependent variable was product attitude after the recovery effort was provided. Subjects were 284 consumers who had experience with cosmetics. Subjects were first given a product-failure scenario and asked to rate the comprehensibility of the scenario, the probability of raising complaints against the failure, and the subjective severity of the failure. After a recovery scenario was presented, its comprehensibility and overall evaluation were measured. Subjects assigned to the no-recovery condition were exposed to a short news article on the cosmetics industry. Next, subjects answered filler questions: 42 items on need for cognitive closure and 16 items on need to evaluate. On the succeeding page, product attitude was measured on a five-item, six-point scale and repurchase intention on a three-item, six-point scale. After the demographic variables of age and sex were asked, ten items checked the subject's objective knowledge. The results showed that subjects formed more favorable evaluations after receiving rewarding efforts than after receiving either strengthening or weakening efforts. This is consistent with Hoffman, Kelley, and Rotalsky (1995) in that a tangible recovery can be more effective than intangible efforts. Strengthening and weakening efforts were also effective compared to no recovery effort, so generally any recovery increased product attitudes. The results hint that a recovery strategy such as a strengthening or weakening effort, although it does not contain a specific reward, may have an effect on consumers experiencing severe dissatisfaction and strong complaints. Meanwhile, strengthening and weakening efforts were not expected to increase product attitudes under low severity of product failure. We can conclude that only a physical recovery effort may be recognized favorably, as a firm's willingness to recover from its fault, by consumers with low involvement. The results of the present experiment are explained in terms of attribution theory. This article has the limitation that it utilized fictitious scenarios; future research should test the realistic effect of recovery on actual consumers. Recovery involves a direct, firsthand experience of ex-users and does not apply to non-users. The experience of receiving recovery efforts can be relatively more salient and accessible for ex-users than for non-users, so a recovery effort might be more likely to improve product attitude for ex-users than for non-users. The present experiment also did not include consumers who had no experience with the products or who did not perceive the occurrence of product failure. For non-users and such unaware consumers, recovery efforts might lead to decreased product attitude and purchase intention, because the recovery trials may give them an opportunity to notice the product failure.
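The 2 × 4 design described above is typically analyzed with a two-way between-subjects ANOVA. The sketch below simulates attitude scores with an illustrative interaction and tests it with statsmodels; the numbers are invented, not the experiment's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(42)
severity = ["high", "low"]
strategy = ["rewarding", "strengthening", "weakening", "none"]

# Simulated attitude scores on a 6-point scale per cell (made-up effect sizes).
rows = []
for sev in severity:
    for strat in strategy:
        base = 3.0 + (0.8 if strat == "rewarding" else 0.3 if strat != "none" else 0.0)
        if sev == "low" and strat == "strengthening":
            base += 0.4  # illustrative interaction in the direction the abstract argues
        for s in np.clip(rng.normal(base, 0.7, size=35), 1, 6):
            rows.append({"severity": sev, "strategy": strat, "attitude": s})
df = pd.DataFrame(rows)

# Two-way between-subjects ANOVA with the severity x strategy interaction.
model = smf.ols("attitude ~ C(severity) * C(strategy)", data=df).fit()
print(anova_lm(model, typ=2))
```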


A Study on the Application of Outlier Analysis for Fraud Detection: Focused on Transactions of Auction Exception Agricultural Products (부정 탐지를 위한 이상치 분석 활용방안 연구 : 농수산 상장예외품목 거래를 대상으로)

  • Kim, Dongsung;Kim, Kitae;Kim, Jongwoo;Park, Steve
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.93-108
    • /
    • 2014
  • To support business decision making, interest and efforts in analyzing and using transaction data from different perspectives are increasing. Such efforts are not limited to customer management or marketing; they are also used for monitoring and detecting fraudulent transactions. Fraudulent transactions are evolving into various patterns by taking advantage of information technology. To reflect this evolution, many efforts have been made on fraud detection methods and advanced application systems in order to improve the accuracy and ease of fraud detection. As a case of fraud detection, this study aims to provide effective detection methods for auction exception agricultural products in the largest Korean agricultural wholesale market. The auction exception policy exists to complement auction-based trade in the agricultural wholesale market: most trades of agricultural products are performed by auction, but specific products are designated as auction exception products when the total volume of the product is relatively small, the number of wholesalers is small, or it is difficult for wholesalers to purchase the product. However, the auction exception policy creates several problems for the fairness and transparency of transactions, which calls for fraud detection. In this study, to generate fraud detection rules, real large-scale trade transaction data from 2008 to 2010 in the market are analyzed, comprising more than 1 million transactions and over 1 billion US dollars in transaction volume. Agricultural transaction data have unique characteristics such as frequent changes in supply volumes and turbulent time-dependent changes in price. Since this was the first trial to identify fraudulent transactions in this domain, there was no training data set for supervised learning, so fraud detection rules were generated using an outlier detection approach. We assume that outlier transactions are more likely to be fraudulent than normal transactions. Outlier transactions are identified by comparing the daily, weekly, and quarterly average unit prices of product items; the quarterly average unit prices of product items for specific wholesalers are also used. The reliability of the generated fraud detection rules was confirmed by domain experts. To determine whether a transaction is fraudulent, the normal distribution and the normalized Z-value concept are applied: the unit price of a transaction is transformed to a Z-value to calculate its occurrence probability, approximating the distribution of unit prices by a normal distribution. A modified Z-value of the unit price is used rather than the original Z-value, because in the case of auction exception agricultural products, where the number of wholesalers is small, Z-values are influenced by the outlier fraud transactions themselves. The modified Z-values are called Self-Eliminated Z-scores because they are calculated excluding the unit price of the specific transaction being checked for fraud. To show the usefulness of the proposed approach, a prototype fraud transaction detection system was developed using Delphi. The system consists of five main menus and related submenus. The first functionality of the system is to import transaction databases; the next important functions set up fraud detection parameters. By changing fraud detection parameters, system users can control the number of potential fraud transactions. Execution functions provide the fraud detection results found under the chosen parameters; the potential fraud transactions can be viewed on screen or exported as files. This study is an initial trial to identify fraud transactions in auction exception agricultural products, and many research topics remain. First, the scope of the analyzed data was limited by data availability; it is necessary to include more data on transactions, wholesalers, and producers to detect fraud more accurately. Next, the scope of fraud transaction detection should be extended to fishery products. There are also many possibilities for applying different data mining techniques to fraud detection; for example, a time series approach is a potential technique for the problem. Finally, although outlier transactions are detected here based on the unit prices of transactions, fraud detection rules could also be derived from transaction volumes.
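The Self-Eliminated Z-score reads as a leave-one-out standardization: each unit price is scored against the statistics of all other transactions, so an extreme transaction cannot mask itself by inflating the group mean and variance. This sketch is our interpretation of the description above, with made-up prices, not the authors' Delphi implementation.

```python
import numpy as np

def self_eliminated_z(prices: np.ndarray) -> np.ndarray:
    """Leave-one-out Z-scores: each transaction's unit price is standardized
    against the mean and standard deviation of all OTHER transactions."""
    n = len(prices)
    total, total_sq = prices.sum(), (prices**2).sum()
    z = np.empty(n)
    for i, p in enumerate(prices):
        mean_rest = (total - p) / (n - 1)
        var_rest = (total_sq - p**2) / (n - 1) - mean_rest**2
        z[i] = (p - mean_rest) / np.sqrt(max(var_rest, 1e-12))
    return z

# Made-up unit prices with one suspicious transaction at 9.5.
prices = np.array([1.1, 1.0, 1.2, 0.9, 1.05, 9.5, 1.15])
print(self_eliminated_z(prices).round(2))
```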

Pareto Ratio and Inequality Level of Knowledge Sharing in Virtual Knowledge Collaboration: Analysis of Behaviors on Wikipedia (지식 공유의 파레토 비율 및 불평등 정도와 가상 지식 협업: 위키피디아 행위 데이터 분석)

  • Park, Hyun-Jung;Shin, Kyung-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.19-43
    • /
    • 2014
  • The Pareto principle, also known as the 80-20 rule, states that roughly 80% of the effects come from 20% of the causes for many events, including natural phenomena. It has been recognized as a golden rule in business, with wide application of findings such as 20 percent of customers accounting for 80 percent of total sales. On the other hand, the Long Tail theory, pointing out that "the trivial many" produces more value than "the vital few," has gained popularity in recent times thanks to the tremendous reduction of distribution and inventory costs through the development of ICT (Information and Communication Technology). This study started with a view to illuminating how these two primary business paradigms, the Pareto principle and the Long Tail theory, relate to the success of virtual knowledge collaboration. The importance of virtual knowledge collaboration is soaring in this era of globalization and virtualization transcending geographical and temporal constraints. Many previous studies on knowledge sharing have focused on the factors that affect knowledge sharing, seeking to boost individual knowledge sharing and to resolve the social dilemma caused by the fact that rational individuals are likely to consume rather than contribute knowledge. Knowledge collaboration can be defined as the creation of knowledge not only by sharing knowledge, but also by transforming and integrating it. In this perspective, the relative distribution of knowledge sharing among participants can count for as much as the absolute amount of individual knowledge sharing. In particular, whether a greater contribution by the upper 20 percent of participants enhances the efficiency of overall knowledge collaboration is an issue of interest. This study deals with the effect of this kind of knowledge-sharing distribution on the efficiency of knowledge collaboration and is extended to reflect work characteristics. All analyses were conducted on actual behavioral data instead of self-reported questionnaire surveys. More specifically, we analyzed the collaborative behaviors of the editors of 2,978 English Wikipedia featured articles, the best quality grade of articles in English Wikipedia. We adopted the Pareto ratio, the ratio of the number of knowledge contributions made by the upper 20 percent of participants to the total number of knowledge contributions made by all participants of an article group, to examine the effect of the Pareto principle. In addition, the Gini coefficient, which represents the inequality of income among a group of people, was applied to reveal the effect of inequality of knowledge contribution. Hypotheses were set up based on the assumption that a higher ratio of knowledge contribution by more highly motivated participants leads to higher collaboration efficiency, but that if the ratio gets too high, collaboration efficiency deteriorates because overall informational diversity is threatened and the knowledge contribution of less motivated participants is discouraged. Cox regression models were formulated for each of the focal variables, the Pareto ratio and the Gini coefficient, with seven control variables such as the number of editors involved in an article, the average time between successive edits of an article, and the number of sections a featured article has. The dependent variable of the Cox models is the time from article initiation to promotion to featured-article status, indicating the efficiency of knowledge collaboration. To examine whether the effects of the focal variables vary with the characteristics of a group task, we classified the 2,978 featured articles into two categories: academic and non-academic. Academic articles are those citing at least one paper published in an SCI, SSCI, A&HCI, or SCIE journal; we assumed that academic articles are more complex, entail more information processing and problem solving, and thus require more skill variety and expertise. The analysis results indicate the following. First, the Pareto ratio and the inequality of knowledge sharing relate in a curvilinear fashion to collaboration efficiency in an online community, promoting it up to an optimal point and undermining it thereafter. Second, this curvilinear effect of the Pareto ratio and of the inequality of knowledge sharing on collaboration efficiency is more sensitive for the more academic tasks in an online community.
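The two focal variables are simple to compute from per-editor contribution counts. The sketch below implements the Pareto ratio and the Gini coefficient as defined above; the edit counts are invented for illustration.

```python
import numpy as np

def pareto_ratio(contributions: np.ndarray) -> float:
    """Share of all contributions made by the top 20% of contributors."""
    sorted_desc = np.sort(contributions)[::-1]
    top_n = max(1, int(np.ceil(0.2 * len(sorted_desc))))
    return sorted_desc[:top_n].sum() / sorted_desc.sum()

def gini(contributions: np.ndarray) -> float:
    """Gini coefficient of the contribution distribution
    (0 = perfect equality, values near 1 = maximal inequality)."""
    x = np.sort(contributions).astype(float)
    n = len(x)
    cum = np.cumsum(x)
    return (n + 1 - 2 * (cum / cum[-1]).sum()) / n

# Made-up edit counts per editor for one article.
edits = np.array([1, 1, 2, 2, 3, 4, 5, 8, 20, 54])
print(f"Pareto ratio: {pareto_ratio(edits):.2f}, Gini: {gini(edits):.2f}")
```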

A Study on the Success Factors and Strategy of Information Technology Investment Based on Intelligent Economic Simulation Modeling (지능형 시뮬레이션 모형을 기반으로 한 정보기술 투자 성과 요인 및 전략 도출에 관한 연구)

  • Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.1
    • /
    • pp.35-55
    • /
    • 2013
  • Information technology is a critical resource for any company hoping to support and realize its strategic goals, which contribute to growth promotion and sustainable development. The selection of information technology and its strategic use are imperative for enhanced performance in every aspect of company management, leading a wide range of companies to invest continuously in information technology. Despite the keen interest of researchers, managers, and policy makers in how information technology contributes to organizational performance, there is uncertainty and debate about the results of information technology investment. In other words, researchers and managers cannot easily identify the independent factors that impact the investment performance of information technology, mainly because many factors, ranging from a company's internal components to strategies and external customers, are interconnected with that performance. Using an agent-based simulation technique, this research extracts factors expected to affect investment performance in information technology, simplifies the analysis of their relationships with economic modeling, and examines how performance depends on changes in the factors. In terms of economic modeling, I expand the model that highlights the way in which product quality moderates the relationship between information technology investments and economic performance (Thatcher and Pingry, 2004) by considering the cost of information technology investment and the demand created by product quality enhancement. For quality enhancement and its consequences for demand creation, I apply the concept of information quality and decision-maker quality (Raghunathan, 1999), which implies that investment in information technology improves the quality of information, which in turn improves decision quality and performance, thus enhancing the level of product or service quality. Additionally, I consider the effect of word of mouth among consumers, which creates new demand for a product or service through an information diffusion effect. This demand creation is analyzed with an agent-based simulation model of the kind widely used for network analyses. Results show that investment in information technology enhances the quality of a company's product or service, which indirectly affects the company's economic performance, particularly factors such as consumer surplus, company profit, and company productivity. Specifically, when a company makes its initial investment in information technology, the resulting increase in product or service quality immediately has a positive effect on consumer surplus, but the investment cost has a negative effect on company productivity and profit. As time goes by, the quality enhancement creates new consumer demand through the information diffusion effect, and this new demand positively affects the company's profit and productivity. In terms of investment strategy for information technology, the results also reveal that the selection of information technology needs to be based on analysis of the service and the network effect among customers, and demonstrate that information technology implementation should fit the company's business strategy. Specifically, if a company seeks a short-term enhancement of performance, it needs a one-shot strategy (making one large investment at a single time); if it seeks a long-term sustainable profit structure, it needs a split strategy (making several small investments at different times). The findings make several contributions to the literature. Methodologically, the study integrates economic modeling and simulation techniques in order to overcome the limitations of each. It also indicates the mediating effect of product quality on the relationship between information technology and company performance. Finally, it analyzes the effect of information technology investment strategies and of information diffusion among consumers on the investment performance of information technology.
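A minimal agent-based sketch of the word-of-mouth diffusion mechanism described above: consumers sit on a scale-free network, and adopters convert neighbors with a probability scaled by product quality. The parameters are arbitrary placeholders, not the paper's calibrated model.

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(7)

# Consumers as nodes of a scale-free network; edges carry word of mouth.
G = nx.barabasi_albert_graph(n=500, m=3, seed=7)

# A handful of initial adopters (e.g., early buyers after the IT investment).
adopters = set(int(i) for i in rng.choice(G.number_of_nodes(), size=10, replace=False))

P_WOM = 0.05          # per-step chance an adopter converts a given neighbor
QUALITY_BOOST = 1.5   # higher product quality -> more persuasive word of mouth

demand_over_time = []
for step in range(30):
    new = set()
    for a in adopters:
        for nb in G.neighbors(a):
            if nb not in adopters and rng.random() < P_WOM * QUALITY_BOOST:
                new.add(nb)
    adopters |= new
    demand_over_time.append(len(adopters))  # cumulative demand created so far

print(demand_over_time)
```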