• Title/Summary/Keyword: robust model


A PLS Path Modeling Approach on the Cause-and-Effect Relationships among BSC Critical Success Factors for IT Organizations (PLS 경로모형을 이용한 IT 조직의 BSC 성공요인간의 인과관계 분석)

  • Lee, Jung-Hoon; Shin, Taek-Soo; Lim, Jong-Ho
    • Asia pacific journal of information systems / v.17 no.4 / pp.207-228 / 2007
  • For a long time, the measurement of Information Technology (IT) organizations' activities was limited mainly to financial indicators. However, as the functions of information systems have diversified, a number of studies have examined new measurement methodologies that combine financial measures with non-financial ones. In particular, recent research has applied the Balanced Scorecard (BSC) concept to IT activities in the form of the IT BSC. The BSC offers more than the mere integration of non-financial measures into a performance measurement system. Its core rests on the cause-and-effect relationships between measures, which allow prediction of value chain performance, communication and realization of corporate strategy, and incentive-aligned actions. More recently, BSC proponents have emphasized the need to tie measures together into a causal chain of performance and to test the validity of these hypothesized effects in order to guide the development of strategy. Kaplan and Norton [2001] argue that one of the primary benefits of the balanced scorecard is its use in gauging the success of strategy, and Norreklit [2000] insists that the cause-and-effect chain is central to the balanced scorecard. The cause-and-effect chain is equally central to the IT BSC. Nevertheless, the relationship between information systems and enterprise strategy, and the connections among the various IT performance indicators, have received little prior study. Ittner et al. [2003] report that 77% of surveyed companies with an implemented BSC place little or no emphasis on soundly modeled cause-and-effect relationships, despite the importance of cause-and-effect chains as an integral part of the BSC. This shortcoming can be explained by one theoretical and one practical reason [Blumenberg and Hinz, 2006]. From a theoretical point of view, causalities within the BSC method and their application are only vaguely described by Kaplan and Norton. From a practical point of view, modeling corporate causalities is a complex task because of tedious data acquisition and subsequent reliability maintenance. Yet cause-and-effect relationships are an essential part of BSCs because they differentiate performance measurement systems such as the BSC from simple key performance indicator (KPI) lists. KPI lists present managers with an ad-hoc collection of measures but do not allow a comprehensive view of corporate performance, whereas performance measurement systems such as the BSC model the relationships of the underlying value chain as cause-and-effect relationships. Therefore, to overcome the deficiencies of causal modeling in the IT BSC, sound and robust causal modeling approaches are required in both theory and practice. The purpose of this study is to suggest critical success factors (CSFs) and KPIs for measuring the performance of IT organizations and to empirically validate the causal relationships among those CSFs. For this purpose, we define four BSC perspectives for IT organizations, following Van Grembergen [2000]: the Future Orientation perspective represents the human and technology resources IT needs to deliver its services; the Operational Excellence perspective represents the IT processes employed to develop and deliver applications; the User Orientation perspective represents the user evaluation of IT; and the Business Contribution perspective captures the business value of IT investments. Each perspective has to be translated into corresponding metrics and measures that assess the current situation. Based on previous IT BSC studies and COBIT 4.1, this study suggests 12 CSFs for the IT BSC, consisting of 51 KPIs. We define the cause-and-effect relationships among the BSC CSFs for IT organizations as follows: the Future Orientation perspective has positive effects on the Operational Excellence perspective, which in turn has positive effects on the User Orientation perspective, which finally has positive effects on the Business Contribution perspective. This research tests the validity of these hypothesized causal effects and the sub-hypothesized causal relationships. For this purpose, we used the Partial Least Squares approach to Structural Equation Modeling (PLS path modeling) to analyze the multiple IT BSC CSFs. PLS path modeling is more appropriate than techniques such as multiple regression and LISREL when analyzing small samples, and its use has been gaining interest among IS researchers because of its ability to model latent constructs under conditions of non-normality and with small to medium sample sizes (Chin et al., 2003). The empirical results of our study using PLS path modeling show that the hypothesized causal effects in the IT BSC are partially supported.
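The structural part of the hypothesized model is a simple causal chain across the four perspectives. As a minimal sketch only (not the authors' implementation), the snippet below estimates that chain with ordinary least squares on equally weighted composite scores; full PLS path modeling would iteratively re-weight the indicators. All indicator names and the simulated data are hypothetical.

```python
# Sketch of the hypothesized IT BSC causal chain:
# Future Orientation -> Operational Excellence -> User Orientation -> Business Contribution.
# Composite (mean) scores stand in for PLS latent variable scores; data are simulated.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 120  # hypothetical sample size

def block(driver, k, noise=0.6):
    """Simulate k indicators loading on a common driver."""
    return np.column_stack([driver + noise * rng.standard_normal(n) for _ in range(k)])

future = block(rng.standard_normal(n), 3)
operational = block(future.mean(axis=1), 3)
user = block(operational.mean(axis=1), 3)
business = block(user.mean(axis=1), 3)

scores = pd.DataFrame({
    "FUT": future.mean(axis=1),
    "OPE": operational.mean(axis=1),
    "USR": user.mean(axis=1),
    "BUS": business.mean(axis=1),
})

def path_coef(x, y):
    """Standardized slope of y on x (a single-predictor path estimate)."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float(np.polyfit(x, y, 1)[0])

print("FUT -> OPE:", round(path_coef(scores.FUT, scores.OPE), 3))
print("OPE -> USR:", round(path_coef(scores.OPE, scores.USR), 3))
print("USR -> BUS:", round(path_coef(scores.USR, scores.BUS), 3))
```

In a real analysis, bootstrapped standard errors of these path coefficients would be used to test each sub-hypothesis.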

Seismic Data Processing and Inversion for Characterization of CO2 Storage Prospect in Ulleung Basin, East Sea (동해 울릉분지 CO2 저장소 특성 분석을 위한 탄성파 자료처리 및 역산)

  • Lee, Ho Yong; Kim, Min Jun; Park, Myong-Ho
    • Economic and Environmental Geology / v.48 no.1 / pp.25-39 / 2015
  • $CO_2$ geological storage plays an important role in reducing greenhouse gas emissions, but research toward CCS demonstration is still lacking. To achieve the goal of CCS, storing $CO_2$ safely and permanently in underground geological formations, it is essential to understand the characteristics of those formations, such as total storage capacity and stability, and to establish an injection strategy. We perform impedance inversion on seismic data acquired from the Ulleung Basin in 2012. To assess the potential for $CO_2$ storage, we also construct porosity models and extract attributes of the prospects from the seismic data. To improve data quality, amplitude-preserving processing methods, SWD (Shallow Water Demultiple), SRME (Surface Related Multiple Elimination) and Radon demultiple, are applied. Log data from three wells are also analysed; the log correlations of the wells are 0.648, 0.574 and 0.342, respectively. All wells are used in building the low-frequency model to generate a more robust initial model. Simultaneous pre-stack inversion is performed on all of the 2D profiles, and inverted P-impedance, S-impedance and Vp/Vs ratio are generated from the inversion process. With the porosity profiles generated from the seismic inversion, porous and non-porous zones can be identified for the purpose of the $CO_2$ sequestration initiative. More detailed characterization of the geological storage and simulation of $CO_2$ migration may be essential for CCS demonstration.
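To illustrate the general idea of impedance-based porosity screening (not the paper's simultaneous pre-stack inversion), the following sketch applies the standard recursive relation between reflection coefficients and acoustic impedance and a hypothetical linear impedance-to-porosity transform. All values and the transform constants are illustrative only.

```python
# Recursive acoustic-impedance estimation from a reflectivity series, followed by a
# hypothetical linear impedance-porosity mapping (placeholder calibration, not the paper's).
import numpy as np

def recursive_impedance(reflectivity, z0):
    """Z_{i+1} = Z_i * (1 + r_i) / (1 - r_i), starting from impedance z0."""
    z = [z0]
    for r in reflectivity:
        z.append(z[-1] * (1.0 + r) / (1.0 - r))
    return np.array(z)

refl = np.array([0.02, -0.01, 0.05, -0.03])   # toy reflection coefficients
imp = recursive_impedance(refl, z0=4.5e6)     # kg/(m^2*s), e.g. from a well-log tie

a, b = -6.0e-9, 35.0                          # made-up impedance-to-porosity calibration
porosity = a * imp + b                        # percent
print(np.round(porosity, 2))
```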

Wildfire Severity Mapping Using Sentinel Satellite Data Based on Machine Learning Approaches (Sentinel 위성영상과 기계학습을 이용한 국내산불 피해강도 탐지)

  • Sim, Seongmun; Kim, Woohyeok; Lee, Jaese; Kang, Yoojin; Im, Jungho; Kwon, Chunguen; Kim, Sungyong
    • Korean Journal of Remote Sensing / v.36 no.5_3 / pp.1109-1123 / 2020
  • In South Korea, where forest is the major land cover class (over 60% of the country), many wildfires occur every year. Wildfires weaken the shear strength of the soil, forming a soil layer that is vulnerable to landslides. It is therefore important to identify the severity of a wildfire as well as the burned area in order to manage the forest sustainably. Although satellite remote sensing has been widely used to map wildfire severity, it is often difficult to determine severity using only the temporal change of satellite-derived indices such as the Normalized Difference Vegetation Index (NDVI) and Normalized Burn Ratio (NBR). In this study, we proposed an approach for determining wildfire severity based on machine learning through the synergistic use of Sentinel-1A C-band Synthetic Aperture Radar data and Sentinel-2A Multi Spectral Instrument data. Three wildfire cases (Samcheok in May 2017, Gangreung·Donghae in April 2019, and Gosung·Sokcho in April 2019) were used to develop wildfire severity mapping models with three machine learning algorithms (Random Forest, Logistic Regression, and Support Vector Machine). The results showed that the random forest model yielded the best performance, with an overall accuracy of 82.3%. A cross-site validation examining the spatiotemporal transferability of the machine learning models showed that the models were highly sensitive to temporal differences between the training and validation sites, especially in the early growing season. This implies that a more robust model with high spatiotemporal transferability can be developed as more wildfire cases from different seasons and areas are added in the future.
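A minimal sketch of the assumed workflow (not the authors' code): a random forest severity classifier trained on per-pixel Sentinel-1/Sentinel-2 change features, with leave-one-fire-out validation mimicking the cross-site test described above. The feature names and synthetic labels below are placeholders.

```python
# Per-pixel severity classification with leave-one-fire-out (cross-site) validation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n = 600
X = rng.normal(size=(n, 4))        # e.g. dNBR, dNDVI, VV backscatter change, VH change
y = rng.integers(0, 4, size=n)     # severity classes (unburned .. high), synthetic labels
site = rng.integers(0, 3, size=n)  # which of the three fires each pixel belongs to

logo = LeaveOneGroupOut()
for train_idx, test_idx in logo.split(X, y, groups=site):
    model = RandomForestClassifier(n_estimators=300, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    acc = accuracy_score(y[test_idx], model.predict(X[test_idx]))
    print(f"held-out fire {site[test_idx][0]}: overall accuracy = {acc:.3f}")
```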

Multiple Linear Analysis for Generating Parametric Images of Irreversible Radiotracer (비가역 방사성추적자 파라메터 영상을 위한 다중선형분석법)

  • Kim, Su-Jin; Lee, Jae-Sung; Lee, Won-Woo; Kim, Yu-Kyeong; Jang, Sung-June; Son, Kyu-Ri; Kim, Hyo-Cheol; Chung, Jin-Wook; Lee, Dong-Soo
    • Nuclear Medicine and Molecular Imaging / v.41 no.4 / pp.317-325 / 2007
  • Purpose: Biological parameters can be quantified from dynamic PET data using compartment modeling and Nonlinear Least Squares (NLS) estimation. However, generating parametric images with NLS is impractical because of the initial value problem and excessive computation time. For irreversible models, Patlak graphical analysis (PGA) has commonly been used as an alternative to NLS. In PGA, however, the start time ($t^*$, the time at which the linear phase begins) has to be determined. In this study, we suggest a new Multiple Linear Analysis for Irreversible Radiotracers (MLAIR) to estimate the fluoride bone influx rate (Ki). Methods: $[^{18}F]Fluoride$ dynamic PET scans were acquired for 60 min in three normal mini-pigs. The plasma input curve was derived from blood sampling of the femoral artery. Tissue time-activity curves were measured by drawing regions of interest (ROIs) on the femur head, vertebra, and muscle. Parametric images of Ki were generated using the MLAIR and PGA methods. Results: In the ROI analysis, the Ki values estimated with MLAIR and PGA were slightly higher than those of NLS, but the results of MLAIR and PGA were equivalent. Patlak slopes (Ki) changed with different $t^*$ values in low-uptake regions. Compared with PGA, the quality of the parametric image was considerably improved with the new method. Conclusion: The results showed that MLAIR is an efficient and robust method for generating Ki parametric images from $[^{18}F]Fluoride$ PET. It will also be a good alternative to PGA for radiotracers that follow an irreversible three-compartment model.
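For context, the PGA baseline that MLAIR is compared against estimates Ki as the slope of $C_T(t)/C_p(t)$ versus $\int_0^t C_p(\tau)\,d\tau / C_p(t)$ after the start time $t^*$. The sketch below illustrates that slope fit on synthetic curves; it is not the proposed MLAIR estimator, and the data are placeholders rather than the mini-pig PET measurements.

```python
# Patlak graphical analysis on synthetic tissue and plasma curves: Ki = slope after t*.
import numpy as np

t = np.linspace(0.5, 60.0, 40)                 # minutes
cp = 100.0 * np.exp(-0.1 * t) + 5.0            # synthetic plasma input curve
ki_true, v0 = 0.03, 0.4
# Cumulative trapezoidal integral of the plasma curve.
int_cp = np.concatenate(([0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))))
ct = ki_true * int_cp + v0 * cp                # irreversible-uptake tissue curve

x = int_cp / cp                                # Patlak "stretched time"
y = ct / cp
t_star = 20.0                                  # assumed start of the linear phase
mask = t >= t_star
ki_hat, intercept = np.polyfit(x[mask], y[mask], 1)
print(f"estimated Ki = {ki_hat:.4f} /min (true {ki_true})")
```

The sensitivity to the chosen $t^*$ noted in the abstract corresponds to how much `ki_hat` changes when `t_star` is varied in low-uptake regions.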

Statics corrections for shallow seismic refraction data (천부 굴절법 탄성파 탐사 자료의 정보정)

  • Palmer, Derecke; Nikrouz, Ramin; Spyrou, Andreur
    • Geophysics and Geophysical Exploration / v.8 no.1 / pp.7-17 / 2005
  • The determination of seismic velocities in refractors for near-surface seismic refraction investigations is an ill-posed problem. Small variations in the computed time parameters can result in quite large lateral variations in the derived velocities, which are often artefacts of the inversion algorithms. Such artefacts are usually not recognized or corrected with forward modelling. Therefore, if detailed refractor models are sought with model-based inversion, then detailed starting models are required. The usual source of artefacts in seismic velocities is irregular refractors. Under most circumstances, the variable migration of the generalized reciprocal method (GRM) is able to accommodate irregular interfaces and generate detailed starting models of the refractor. However, where the very-near-surface environment of the Earth is also irregular, the efficacy of the GRM is reduced, and weathering corrections can be necessary. Standard methods for correcting for surface irregularities are usually not practical where the very-near-surface irregularities are of limited lateral extent. In such circumstances, the GRM smoothing statics method (SSM) is a simple and robust approach which can facilitate more accurate estimates of refractor velocities. The GRM SSM generates a smoothing 'statics' correction by subtracting an average of the time-depths computed with a range of XY values from the time-depths computed with a zero XY value (where the XY value is the separation between the receivers used to compute the time-depth). The time-depths to the deeper target refractors do not vary greatly with varying XY values, so an average is much the same as the optimum value. However, the time-depths for the very-near-surface irregularities migrate laterally with increasing XY values and are substantially reduced by the averaging process. As a result, the time-depth profile averaged over a range of XY values is effectively corrected for the near-surface irregularities. In addition, the time-depths computed with a zero XY value are the sum of both the near-surface effects and the time-depths to the target refractor. Therefore, their subtraction generates an approximate 'statics' correction, which in turn is subtracted from the traveltimes. The GRM SSM is essentially a smoothing procedure, rather than a deterministic weathering correction approach, and it is most effective with near-surface irregularities of quite limited lateral extent. Model and case studies demonstrate that the GRM SSM substantially improves the reliability of determining detailed seismic velocities in irregular refractors.
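A minimal sketch of the smoothing statics arithmetic described above, assuming time-depths have already been computed for several XY values (the arrays below are synthetic, not the authors' processing output): the correction at each receiver is the zero-XY time-depth minus the average over the XY range, and it is then subtracted from the observed traveltimes.

```python
# GRM smoothing statics: correction = time-depth(XY=0) - mean(time-depth over XY range).
import numpy as np

n_receivers = 10
xy_values = [0, 5, 10, 15, 20]                         # metres, illustrative
rng = np.random.default_rng(2)
# time_depths[i, j]: time-depth (s) at receiver j computed with XY = xy_values[i]
time_depths = 0.050 + 0.002 * rng.random((len(xy_values), n_receivers))

td_zero_xy = time_depths[0]                            # XY = 0 row
td_averaged = time_depths.mean(axis=0)                 # average over the XY range
smoothing_statics = td_zero_xy - td_averaged           # approximate near-surface 'statics'

traveltimes = 0.120 + 0.005 * rng.random(n_receivers)  # observed refraction times (s)
corrected = traveltimes - smoothing_statics
print(np.round(smoothing_statics * 1000, 2), "ms")
```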

Southeast Asian Hindu Art from the 6th to the 7th Centuries (6-7세기의 동남아 힌두 미술 - 인도 힌두미술의 전파와 초기의 변용 -)

  • Kang, Heejung
    • The Southeast Asian review / v.20 no.3 / pp.263-297 / 2010
  • The relics of the earliest Southeast Asian civilizations are found together with relics from India, China, and even further west, from Persia and Rome. These relics are the historic marks of ancient interactions among various regions, mainly through maritime trade. The traces of Indic culture that appear in the historic age are represented in textual records and in art regarded as the essence of India itself. The ancient Hindu artworks found in various locations of Southeast Asia were once thought to have been transplanted directly from India. However, Gupta Hindu art neither formed the mainstream of Gupta art nor played an influential role in adjacent areas, and Indian culture was transmitted to Southeast Asia intermittently rather than consistently. If we thoroughly compare the early Hindu art of India with that of Southeast Asia, we find that the latter was influenced by the former but still sustained Southeast Asian originality. The earliest Southeast Asian Hindu art is discovered mostly in mainland Southeast Asia because the earliest networks between India and the region were constructed there. The images of Hindu gods produced before the 7th century include Shiva, Vishnu, Harihara, Skanda (the son of Shiva), and Ganesha (the god of wealth). The earliest example of Vishnu was sculpted in the Kushan style; after that, most of the sculptures came to have robust figures and graceful proportions. There are only a small number of images of Ganesha and Skanda, and these strictly follow the iconography of Indian sculpture. This shows that Southeast Asians chose their own Hindu gods selectively from the Hindu pantheon and devoted their faith to them. The basic iconography obediently followed the Indian model, but parts of the images were transformed within Southeast Asian contexts. It is, however, very difficult to trace the development of the Hindu faith and its contents in ancient Southeast Asia, because very few undamaged Hindu temples remain in the region. It is also hard to be certain that the Hindu religion of India, based on complex rituals and the caste system, was transplanted to Southeast Asia, because the region lacked such a strong social and religious basis. "Indianization" is an organized expansion of Indian culture grounded in a sense of belonging to an Indian context. It can be defined through the transmission and progress of the Hindu or Buddhist religions, the puranic legends, and the influx and development of various epic expressions, conditions represented through the Sanskrit language and through art. Fabricating an image of a god as a devotional object is an element of Indian culture, but if we look into the details of iconography, style, and religious culture, what we see can be understood as a "selective reception of foreign religious culture." Around the 7th century there was not yet a social structure in Southeast Asia sophisticated enough to sustain Indian culture. Whether this phenomenon was "Indianization" or merely an "influx of elements of Indian culture," it was closely related to the question of localization. The regional character of each locality in Southeast Asia appears only partially after the 8th century, and it is not clear whether this culture settled in each region as its dominant culture. The localization of Indian culture in Southeast Asia, which acted as a network connecting ports and cities, was part of the process of localizing Indian culture across the pan-Southeast Asian region and of building the basis for establishing an identity for each Southeast Asian region.

Retrieval of Hourly Aerosol Optical Depth Using Top-of-Atmosphere Reflectance from GOCI-II and Machine Learning over South Korea (GOCI-II 대기상한 반사도와 기계학습을 이용한 남한 지역 시간별 에어로졸 광학 두께 산출)

  • Seyoung Yang; Hyunyoung Choi; Jungho Im
    • Korean Journal of Remote Sensing / v.39 no.5_3 / pp.933-948 / 2023
  • Atmospheric aerosols not only have adverse effects on human health but also exert direct and indirect impacts on the climate system. Consequently, it is imperative to understand the characteristics and spatiotemporal distribution of aerosols. Numerous research efforts have been undertaken to monitor aerosols, predominantly through the retrieval of aerosol optical depth (AOD) from satellite observations. Nonetheless, this approach primarily relies on a look-up-table-based inversion algorithm, characterized by computationally intensive operations and associated uncertainties. In this study, a novel high-resolution AOD direct retrieval algorithm, leveraging machine learning, was developed using top-of-atmosphere reflectance data derived from the Geostationary Ocean Color Imager-II (GOCI-II), in conjunction with their differences from the past 30-day minimum reflectance, and meteorological variables from numerical models. The Light Gradient Boosting Machine (LGBM) technique was used, and the resulting estimates underwent rigorous validation encompassing random, temporal, and spatial N-fold cross-validation (CV) against ground-based Aerosol Robotic Network (AERONET) AOD observations. The three CV results consistently demonstrated robust performance, yielding R2 = 0.70-0.80, RMSE = 0.08-0.09, and 75.2-85.1% of retrievals within the expected error (EE). The Shapley Additive exPlanations (SHAP) analysis confirmed the substantial influence of reflectance-related variables on AOD estimation. A comprehensive examination of the spatiotemporal distribution of AOD in Seoul and Ulsan revealed that the developed LGBM model yielded results in close concordance with AERONET AOD over time, confirming its suitability for AOD retrieval at high spatiotemporal resolution (i.e., hourly, 250 m). Furthermore, a comparison of data coverage showed that the LGBM model increased data retrieval frequency by approximately 8.8% relative to the GOCI-II L2 AOD products, ameliorating the excessive masking over bright surfaces that is often encountered in physics-based AOD retrieval.
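As a minimal sketch of the assumed pipeline (not the released model), the snippet below trains a LightGBM regressor that maps TOA-reflectance-style features and meteorological variables to AOD and evaluates it with random K-fold cross-validation; temporal and spatial folds would simply change how the indices are split. The feature set and matchup data are synthetic placeholders.

```python
# LightGBM AOD regression with random 5-fold cross-validation on synthetic matchups.
import numpy as np
from lightgbm import LGBMRegressor
from sklearn.model_selection import KFold
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(3)
n = 2000
X = rng.normal(size=(n, 6))   # e.g. TOA reflectances, 30-day-minimum differences, met fields
aod = 0.3 + 0.1 * X[:, 0] - 0.05 * X[:, 1] + 0.02 * rng.normal(size=n)

for fold, (tr, te) in enumerate(KFold(n_splits=5, shuffle=True, random_state=0).split(X)):
    model = LGBMRegressor(n_estimators=500, learning_rate=0.05)
    model.fit(X[tr], aod[tr])
    pred = model.predict(X[te])
    rmse = mean_squared_error(aod[te], pred) ** 0.5
    print(f"fold {fold}: R2 = {r2_score(aod[te], pred):.2f}, RMSE = {rmse:.3f}")
```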

A Study on the Born Global Venture Corporation's Characteristics and Performance ('본글로벌(born global)전략'을 추구하는 벤처기업의 특성과 성과에 관한 연구)

  • Kim, Hyung-Jun; Jung, Duk-Hwa
    • Journal of Global Scholars of Marketing Science / v.17 no.3 / pp.39-59 / 2007
  • In many studies, the international involvement of a firm has been described as a gradual development process, "a process in which the enterprise gradually increases its international involvement. This process evolves in the interplay between the development of knowledge about foreign markets and operations on one hand and increasing commitment of resources to foreign markets on the other." Many studies provide strong theoretical and empirical support for this Uppsala internationalization model. According to the predictions of the classic stages theory, firms internationalize through a gradual evolution toward foreign markets: indirect and direct export, strategic alliance, and foreign direct investment. However, terms such as "international new ventures" (McDougall, Shane, and Oviatt 1994), "born globals" (Knight 1997; Knight and Cavusgil 1996; Madsen and Servais 1997), "instant internationals" (Preece, Miles, and Baetz 1999), or "global startups" (Oviatt and McDougall 1994) have come into the spotlight in studies of the internationalization of technology-intensive venture companies. Recent research focused on venture companies has presented the phenomenon of 'born global' firms as a contradiction of the stages theory. In particular, the article by Oviatt and McDougall threw the spotlight on international entrepreneurs, on international new ventures, and on their importance in the globalising world economy. Since venture companies by definition lack economies of scale and resources (financial and knowledge) and are averse to risk taking, they have difficulty expanding into foreign markets and tend to pursue internationalization gradually, step by step. Nevertheless, many venture companies have pursued a 'born global strategy', which differs from the process strategy, because the corporate environment has been rapidly globalizing. Existing studies investigate (1) why ventures enter overseas markets at such an early stage, even in infancy, and (2) what differentiates the international strategies of ventures and whether the born global strategy is better for infant ventures. However, regarding venture performance (growth and profitability), the existing results are inconsistent with one another. They also do not include marketing strategy (differentiation, low price, market breadth, and market pioneering), which comprises important factors in studying BGV performance. In this paper I aim to delineate the emergence of international new ventures and the internationalization strategies of venture companies. To address the research problems, I develop a resource-based model and a marketing strategy framework for analyzing the effects of born global venture firms. I pose three research problems. First, do Korean venture companies gain advantages in corporate performance (growth, profitability, and overall market performance) when they pursue internationalization from inception? Second, do Korean BGV possess firm-specific assets (foreign experience, foreign orientation, organizational absorptive capacity)? Third, what are the marketing strategies of Korean BGV, and do they differ from those of other firms?
Under these problems, I test (1) whether a BGV, a firm that started its internationalization activity almost from inception, has more intangible resources (foreign experience of corporate members, foreign orientation, technological competence, and absorptive capacity) than other venture firms (Non_BGV), and (2) whether the BGV's marketing strategies of differentiation, low price, market diversification, and preemption differ from those of Non_BGV. Above all, the main purpose of this research is to test whether the results achieved by BGV are indeed better than those obtained by Non_BGV firms with respect to growth rate and efficiency. For this research, I surveyed venture companies located in Seoul and Daejeon, Korea, from November to December 2005. I gathered data from 200 venture companies and selected 84 samples founded during 1999-2000. To compare the characteristics of BGV with those of Non_BGV, I classified as BGV those five- or six-year-old venture firms with export intensity over 50%. Many other studies have tried to classify BGV and Non_BGV, but the criteria are as varied as the researchers studying this topic. Some use the time gap, the difference between establishment and the first internationalization experience, while others use export intensity, the ratio of export sales to total sales. Although I use a mixed criterion drawn from prior research, such criteria are subjective and arbitrary rather than objective, so this research has a critical limitation in its classification of BGV and Non_BGV. The first purpose of the research is to test the difference in performance between BGV and Non_BGV. The t-test results show statistically significant differences not only in growth rate (sales growth rate compared to competitors and three-year average sales growth rate) but also in the general market performance of BGV. In the case of profitability, however, the hypothesis that BGV are more profitable (return on investment (ROI) compared to competitors and three-year average ROI) than Non_BGV was not supported. From these results, this paper concludes that BGV grow rapidly and achieve high market performance (in terms of market share and customer loyalty), but that there is no profitability difference between BGV and Non_BGV. The second result is that BGV have greater absorptive capacity, especially knowledge competence and entrepreneurs' international experience, than Non_BGV. This paper also found that BGV pursue product differentiation, preemption, and market diversification strategies, while Non_BGV pursue a low-price strategy. These results have not been addressed in existing studies. This research has some limitations. The first concerns the definition of BGV, as mentioned above. Conceptually, a BGV is defined as a company that pursues internationalization from inception, but in empirical study it is very difficult to distinguish BGV from Non_BGV. I classified on the basis of the time difference and export intensity, but these criteria are so subjective and arbitrary that the results may not be robust if the criteria were changed. The second limitation concerns the sample used in this research. I surveyed venture companies located only in Seoul and Daejeon and used only 84 samples, which raises sample bias concerns and limits the generalization of the results. Follow-up studies that focus on ventures located in other regions would help verify the results of this paper.
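The core comparison in this study is an independent two-sample t-test of performance measures between the BGV and Non_BGV groups. A minimal sketch with hypothetical data (the vectors below are illustrative, not the survey sample):

```python
# Independent two-sample t-test of growth rates, BGV vs Non_BGV (synthetic data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
bgv_growth = rng.normal(loc=0.25, scale=0.10, size=40)      # e.g. 3-year average sales growth
non_bgv_growth = rng.normal(loc=0.15, scale=0.10, size=44)

t_stat, p_value = stats.ttest_ind(bgv_growth, non_bgv_growth, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```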


The Effect of Common Features on Consumer Preference for a No-Choice Option: The Moderating Role of Regulatory Focus (재몰유선택적정황하공동특성대우고객희호적영향(在没有选择的情况下共同特性对于顾客喜好的影响): 조절초점적조절작용(调节焦点的调节作用))

  • Park, Jong-Chul; Kim, Kyung-Jin
    • Journal of Global Scholars of Marketing Science / v.20 no.1 / pp.89-97 / 2010
  • This study examines the effects of common features on the no-choice option with respect to regulatory focus theory. The primary interest lies in three factors and their interrelationship: common features, the no-choice option, and regulatory focus. Prior studies have compiled a vast body of research in these areas. First, the "common features effect" has been observed by many noted marketing researchers. Tversky (1972) proposed the seminal theory, the EBA model (elimination by aspects). According to this theory, consumers are prone to focus only on unique features during comparison processing, dismissing any common features as redundant information. Recently, however, more provocative ideas have challenged the EBA model by asserting that common features really do affect consumer judgment. Chernev (1997) first reported that adding common features mitigates the choice gap because of the increasing perception of similarity among alternatives. Later, however, Chernev (2001) published a study that critically revised his prior perspective, proposing that common features may be a cognitive load to consumers, who may therefore prefer heuristic processing to systematic processing. This brings one question to the forefront: Do common features affect consumer choice? If so, what are the concrete effects? This study tries to answer the question with respect to the no-choice option and regulatory focus. Second, some researchers hold that the no-choice option is another "best alternative" for consumers, who are likely to avoid having to choose in the context of knotty trade-off settings or mental conflicts. Hope for the future may also increase choice of the no-choice option, in the context of optimism or the expectation that a more satisfactory alternative will appear later. Other issues reported in this domain are time pressure, consumer confidence, and the number of alternatives (Dhar and Nowlis 1999; Lin and Wu 2005; Zakay and Tsal 1993). This study casts the no-choice option in yet another perspective: the interactive effects between common features and regulatory focus. Third, regulatory focus theory is a very popular theme in recent marketing research. It suggests that consumers have two opposing focal goals: promotion vs. prevention. A promotion focus deals with the concepts of hope, inspiration, achievement, or gain, whereas a prevention focus involves duty, responsibility, safety, or loss aversion. Thus, while consumers with a promotion focus tend to take risks for gain, the same does not hold for a prevention focus. Regulatory focus theory predicts consumers' emotions, creativity, attitudes, memory, performance, and judgment, as documented in a vast field of marketing and psychology articles. Exploring consumer choice and common features through this lens is a somewhat creative viewpoint in the area of regulatory focus. These reviews motivate this study of the possible interaction between regulatory focus and common features with a no-choice option. Specifically, adding common features rather than omitting them may increase the no-choice ratio in the choice setting only for prevention-focused consumers, and the opposite may hold for promotion-focused consumers. The reasoning is that when prevention-focused consumers encounter common features, they may perceive higher similarity among the alternatives; this conflict among similar options would increase the no-choice ratio.
Promotion-focused consumers, however, may perceive common features as a cue for confirmation, and their confirmatory processing would make their prior preference more robust, so the no-choice ratio may shrink. This logic is verified in two experiments. The first is a 2x2 between-subjects design (presence vs. absence of common features X regulatory focus) using digital cameras as the stimulus, a product very familiar to young subjects. The regulatory focus variable was median-split using an eleven-item measure. Common features included zoom, weight, memory, and battery, whereas the other two attributes (pixel count and price) were unique features. Results supported our hypothesis that adding common features enhanced the no-choice ratio only for prevention-focused consumers, not for those with a promotion focus. These results confirm our hypothesis: the interactive effect between regulatory focus and common features. Prior research had suggested that including common features has an effect on consumer choice, but this study shows that common features affect choice differently across consumer segments. The second experiment replicated the results of the first. This experiment was identical to the first except for two changes: a priming manipulation and a different stimulus. For the promotion focus condition, subjects had to write an essay using words such as profit, inspiration, pleasure, achievement, development, hedonic, change, and pursuit; for prevention, they had to use words such as persistence, safety, protection, aversion, loss, responsibility, and stability. The rooms for rent had common features (sunlight, facilities, ventilation) and unique features (distance/time and building condition). These attributes implied various levels and valences for replication of the prior experiment. Our hypothesis was supported again, and the interaction effects between regulatory focus and common features were significant. Thus, these studies showed the dual effects of common features on consumer choice of a no-choice option. Adding common features may enhance or mitigate no-choice, contradictory as that may sound. Under a prevention focus, adding common features is likely to enhance the no-choice ratio because of increasing mental conflict; under a promotion focus, it is prone to shrink the ratio, perhaps because of a "confirmation bias." The research has practical and theoretical implications for marketers, who may need to consider common features carefully in display contexts according to consumer segmentation (i.e., promotion vs. prevention focus). Theoretically, the results suggest a meaningful moderator variable between common features and no-choice, in that the effect on the no-choice option is partly dependent on regulatory focus. This variable corresponds not only to a chronic perspective but also to a situational perspective in our hypothesis domain. Finally, in light of some shortcomings of the research, such as overlooked attribute importance, the low ratio of no-choice, and the external validity issue, we hope it prompts future studies to explore the little-known world of the "no-choice option."

The Impacts of Smoking Bans on Smoking in Korea (금연법 강화가 흡연에 미치는 영향)

  • Kim, Beomsoo; Kim, Ahram
    • KDI Journal of Economic Policy / v.31 no.2 / pp.127-153 / 2009
  • There is growing concern about the potentially harmful effects of second-hand or environmental tobacco smoke. As a result, workplace smoking bans have become more prevalent worldwide. In Korea, workplace smoking ban policy became more restrictive in 2003, when the national health enhancement law was amended. The new law requires that all office buildings larger than 3,000 square meters (multi-purpose buildings larger than 2,000 square meters) be smoke-free, so many indoor offices became non-smoking areas. Previous studies in other countries often found contradictory answers regarding the effects of workplace smoking bans on smoking behavior, and no study in Korea had yet examined the causal impact of smoking bans on smoking behavior; the situation in Korea might differ from that of other countries. Using the 2001 and 2005 Korea National Health and Nutrition Surveys, which are representative of the Korean population, we examine the impact of the law change on being a current smoker and on cigarettes smoked per day. The amended law affected the whole country at the same time, and the smoking rate was already declining before the legislation was updated, so the challenge is to tease out the true impact. We compare indoor working occupations, which are constrained by the law change, with outdoor working occupations, which are less affected. Since the data were collected before (2001) and after (2005) the law change for both the treated group (indoor occupations) and the control group (outdoor occupations), we use the difference-in-differences method. We restrict the sample to working ages (between 20 and 65), the population relevant to the workplace smoking ban policy. We also restrict the sample to indoor occupations (executive or administrative, and administrative support) and outdoor occupations (sales and low-skilled workers), after dropping the unemployed and those working for the military, since it is unclear whether those groups belong to the treated or control group. This classification is supported by answers to a workplace smoking ban question asked only in the 2005 survey: sixty-eight percent of indoor occupations reported having an office smoking ban policy, compared with forty percent of outdoor occupations. The estimated impact on being a current smoker is a 4.1 percentage point decline, and cigarettes per day show a statistically significant decline of 2.5 cigarettes per day. Given average consumption of sixteen cigarettes per day among smokers, this is roughly a sixteen percent decline, which is substantial. We tested robustness using the same sample across the two surveys and also using a tobit model; our results are robust to both concerns. It is possible that our measures of the treated and control groups contain measurement error, which would lead to attenuation bias. However, we still find statistically significant impacts, which may therefore be a lower bound on the true estimates. The magnitude of our findings is not far from previous findings of significant impacts: for cigarettes per day, previous estimates varied from 1.37 to 3.9, and for current smoking, between 1 and 7.8 percentage points.
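A minimal sketch of the difference-in-differences setup described above, using synthetic data and hypothetical column names rather than the survey extract: the coefficient on the interaction of the indoor-occupation indicator and the post-2003 indicator is the DiD estimate of the smoking-ban effect.

```python
# Difference-in-differences via OLS with an interaction term (synthetic data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 4000
df = pd.DataFrame({
    "indoor": rng.integers(0, 2, size=n),   # 1 = indoor occupation (treated group)
    "post": rng.integers(0, 2, size=n),     # 1 = 2005 survey (after the amendment)
})
# Simulate cigarettes per day with a true treatment effect of -2.5.
df["cigs_per_day"] = (
    16 - 1.0 * df["post"] - 0.5 * df["indoor"]
    - 2.5 * df["indoor"] * df["post"]
    + rng.normal(scale=4.0, size=n)
)

did = smf.ols("cigs_per_day ~ indoor * post", data=df).fit()
print(did.params[["indoor:post"]])   # difference-in-differences estimate
```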
