• Title/Summary/Keyword: Performance Bias


A Comparative Study on Data Augmentation Using Generative Models for Robust Solar Irradiance Prediction

  • Jinyeong Oh;Jimin Lee;Daesungjin Kim;Bo-Young Kim;Jihoon Moon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.11
    • /
    • pp.29-42
    • /
    • 2023
  • In this paper, we propose a method to enhance the prediction accuracy of solar irradiance for three major South Korean cities: Seoul, Busan, and Incheon. Our method entails the development of five generative models (vanilla GAN, CTGAN, Copula GAN, WGAN-GP, and TVAE) to generate independent variables that mimic the patterns of the existing training data. To mitigate bias in model training, we derive values for the dependent variables using random forests and deep neural networks, enriching the training datasets. These synthetic datasets are integrated with the existing data to form comprehensive solar irradiance prediction models. The experiments revealed that models trained on the augmented datasets performed significantly better than those trained solely on the original data. CTGAN in particular showed outstanding results owing to its sophisticated mechanism for handling intricate multivariate relationships, ensuring that the generated data are diverse and closely aligned with the real-world variability of solar irradiance. The proposed method is expected to address the issue of data scarcity by augmenting the training data with high-quality synthetic data, thereby contributing to the operation of solar power systems for sustainable development.
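A minimal sketch of the augmentation idea described above, using the open-source SDV library's CTGAN synthesizer (SDV 1.x API). The file name, column names, and model settings are illustrative assumptions, not the paper's actual configuration; the paper's DNN labeling step is replaced here by a random forest only.

```python
# Hypothetical sketch: generate synthetic feature rows with CTGAN, label them with a
# random forest trained on real data, and retrain the predictor on the combined set.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sdv.metadata import SingleTableMetadata
from sdv.single_table import CTGANSynthesizer

real = pd.read_csv("seoul_weather.csv")                 # hypothetical data file
features = ["temp", "humidity", "wind_speed", "cloud_cover"]  # assumed columns
target = "irradiance"

# 1) Fit a generative model on the independent variables only.
metadata = SingleTableMetadata()
metadata.detect_from_dataframe(real[features])
ctgan = CTGANSynthesizer(metadata, epochs=300)
ctgan.fit(real[features])
synthetic_X = ctgan.sample(num_rows=len(real))

# 2) Derive dependent-variable values for the synthetic rows with a model
#    trained on the real data (the paper uses random forests / DNNs for this step).
labeler = RandomForestRegressor(n_estimators=300, random_state=0)
labeler.fit(real[features], real[target])
synthetic = synthetic_X.assign(**{target: labeler.predict(synthetic_X)})

# 3) Train the final irradiance predictor on real + synthetic data.
augmented = pd.concat([real, synthetic], ignore_index=True)
predictor = RandomForestRegressor(n_estimators=500, random_state=0)
predictor.fit(augmented[features], augmented[target])
```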

Simultaneous Estimation of the Fat Fraction and R2* Via T2*-Corrected 6-Echo Dixon Volumetric Interpolated Breath-hold Examination Imaging for Osteopenia and Osteoporosis Detection: Correlations with Sex, Age, and Menopause

  • Donghyun Kim;Sung Kwan Kim;Sun Joo Lee;Hye Jung Choo;Jung Won Park;Kun Yung Kim
    • Korean Journal of Radiology
    • /
    • v.20 no.6
    • /
    • pp.916-930
    • /
    • 2019
  • Objective: To investigate the relationships of T2*-corrected 6-echo Dixon volumetric interpolated breath-hold examination (VIBE) imaging-based fat fraction (FF) and R2* values with bone mineral density (BMD); determine their associations with sex, age, and menopause; and evaluate the diagnostic performance of the FF and R2* for predicting osteopenia and osteoporosis. Materials and Methods: This study included 153 subjects who had undergone magnetic resonance (MR) imaging, including MR spectroscopy (MRS) and T2*-corrected 6-echo Dixon VIBE imaging. The FF and R2* were measured at the L4 vertebra. The male and female groups were divided into two subgroups according to age or menopause. Lin's concordance and Pearson's correlation coefficients, Bland-Altman 95% limits of agreement, and the area under the curve (AUC) were calculated. Results: The correlation between the spectroscopic and 6-echo Dixon VIBE imaging-based FF values was statistically significant for both readers (ρc = 0.940 [reader 1], 0.908 [reader 2]; both p < 0.001). A small measurement bias was observed for the MRS-based FF for both readers (mean difference = -0.3% [reader 1], 0.1% [reader 2]). We found a moderate negative correlation between BMD and the FF (r = -0.411 [reader 1], -0.436 [reader 2]; both p < 0.001), with younger men and premenopausal women showing higher correlations. R2* and BMD were more significantly correlated in women than in men, and the highest correlation was observed in postmenopausal women (r = 0.626 [reader 1], 0.644 [reader 2]; both p < 0.001). For predicting osteopenia and osteoporosis, the FF had a higher AUC in men and R2* had a higher AUC in women. The AUC for predicting osteoporosis was highest with a combination of the FF and R2* in postmenopausal women (AUC = 0.872 [reader 1], 0.867 [reader 2]; both p < 0.001). Conclusion: The FF and R2* measured using T2*-corrected 6-echo Dixon VIBE imaging can serve as predictors of osteopenia and osteoporosis. R2* might be useful for predicting osteoporosis, especially in postmenopausal women.
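The agreement and diagnostic statistics named above (Lin's concordance correlation coefficient, Bland-Altman limits of agreement, AUC) follow standard definitions. The sketch below uses placeholder arrays and one common formulation; it is not the study's code or data.

```python
# Illustrative computation of the agreement and diagnostic metrics named above.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
mrs_ff = rng.uniform(20, 70, 100)              # placeholder MRS-based fat fractions (%)
dixon_ff = mrs_ff + rng.normal(0, 2, 100)      # placeholder Dixon VIBE fat fractions (%)

# Lin's concordance correlation coefficient (rho_c)
mx, my = mrs_ff.mean(), dixon_ff.mean()
sx2, sy2 = mrs_ff.var(), dixon_ff.var()
sxy = np.mean((mrs_ff - mx) * (dixon_ff - my))
rho_c = 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)

# Bland-Altman bias and 95% limits of agreement
diff = dixon_ff - mrs_ff
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))

# AUC of the fat fraction for detecting low BMD (placeholder binary labels)
low_bmd = rng.integers(0, 2, 100)              # 1 = osteopenia/osteoporosis (illustrative)
auc = roc_auc_score(low_bmd, dixon_ff)
print(rho_c, bias, loa, auc)
```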

Analysis of Water System Impacts of Effluent from Agricultural and Industrial Complex Wastewater Treatment Facilities Using Numerical Analysis and Water Quality Modeling: A Method for Selecting an Appropriate Model (수치해석 및 수질모델링을 활용한 농공단지 폐수처리시설 방류수의 수계영향성 분석 및 적정 모델의 선정방안 제시)

  • Lee, In-Koo;Kang, Soon-Ah
    • Journal of the Korean Geotechnical Society
    • /
    • v.40 no.5
    • /
    • pp.77-92
    • /
    • 2024
  • This study investigates the impact of aluminum and other water pollutants present in the effluent from wastewater treatment plants associated with agricultural and industrial facilities in the Geumgang waterway. The average aluminum concentration in the effluent from these facilities ranged from 0.19 to 3.14 mg/L. The R² values between the predicted and measured values at the confluence were 0.9237 and 0.9758, respectively, and an average aluminum concentration of 0.051 mg/L was observed 10 km downstream. During model calibration, QUALKO2 outperformed QUAL2E, showing percent bias values of less than 11.9% and 18.9% for BOD and Chl-a, respectively. However, both models demonstrated similar performance in predicting T-N and T-P concentrations, as both consider abiotic organic nitrogen and phosphorus. In conclusion, the impact of aluminum and water pollutants in the effluent from the Geumgang River agricultural and industrial complex wastewater treatment plant is relatively low. Nonetheless, for the safety of drinking water, continuous monitoring and proper management of aluminum levels and water quality are essential.
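The calibration statistics cited above, percent bias (PBIAS) and R², have standard definitions; the sketch below with placeholder concentrations illustrates one common formulation (sign conventions for PBIAS vary between sources) and is not the study's model code.

```python
import numpy as np

def pbias(obs, sim):
    """Percent bias; positive values indicate overestimation under this sign convention."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(sim - obs) / np.sum(obs)

def r_squared(obs, sim):
    """Coefficient of determination of simulated vs. observed values (one common form)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    ss_res = np.sum((obs - sim) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Placeholder BOD concentrations (mg/L) at downstream monitoring points
observed = [2.1, 2.4, 1.9, 2.8, 2.2]
simulated = [2.0, 2.6, 2.0, 2.5, 2.3]
print(f"PBIAS = {pbias(observed, simulated):.1f}%, R2 = {r_squared(observed, simulated):.3f}")
```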

Evaluating the prediction models of leaf wetness duration for citrus orchards in Jeju, South Korea (제주 감귤 과수원에서의 이슬지속시간 예측 모델 평가)

  • Park, Jun Sang;Seo, Yun Am;Kim, Kyu Rang;Ha, Jong-Chul
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.20 no.3
    • /
    • pp.262-276
    • /
    • 2018
  • Models to predict leaf wetness duration (LWD) were evaluated using meteorological and dew observations at 11 citrus orchards in Jeju, South Korea, from 2016 to 2017. The sensitivity and prediction accuracy of four models were evaluated: Number of Hours of Relative Humidity (NHRH), Classification And Regression Tree/Stepwise Linear Discriminant (CART/SLD), Penman-Monteith (PM), and a Deep-learning Neural Network (DNN). Model sensitivity was evaluated with respect to rainfall and seasonal changes. When data from rainy days were excluded from the whole data set, the LWD models had a smaller average error (root mean square error (RMSE) of about 1.5 hours). The seasonal error of the DNN model was similar in magnitude (RMSE of about 3 hours) across all seasons except winter. The other models had their greatest error in summer (RMSE of about 9.6 hours) and their lowest error in winter (RMSE of about 3.3 hours). The models were also evaluated with statistical error analysis and a regression-based analysis of mean squared deviation. The DNN model had the best performance by statistical error, whereas the CART/SLD model had the worst prediction accuracy. The mean squared deviation (MSD) method analyzes a model's linearity through three components: squared bias (SB), nonunity slope (NU), and lack of correlation (LC); lower values of these components indicate better model performance. The MSD analysis indicated that the DNN model performed best, followed by the PM, NHRH, and CART/SLD models. These results suggest that machine learning models can improve the accuracy of agricultural information derived from meteorological data.
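The MSD analysis mentioned above has a standard exact decomposition, MSD = SB + NU + LC. The sketch below shows one widely used formulation with placeholder values; it is not the authors' code.

```python
import numpy as np

def msd_components(pred, obs):
    """Decompose mean squared deviation into squared bias (SB), nonunity slope (NU),
    and lack of correlation (LC); lower values of each component are better."""
    x, y = np.asarray(pred, float), np.asarray(obs, float)
    n = x.size
    sx2 = np.sum((x - x.mean()) ** 2) / n
    sy2 = np.sum((y - y.mean()) ** 2) / n
    sxy = np.sum((x - x.mean()) * (y - y.mean())) / n
    b = sxy / sx2                       # slope of the regression of obs on pred
    r2 = sxy ** 2 / (sx2 * sy2)
    sb = (x.mean() - y.mean()) ** 2
    nu = (1.0 - b) ** 2 * sx2
    lc = (1.0 - r2) * sy2
    msd = np.mean((x - y) ** 2)
    return msd, sb, nu, lc              # msd equals sb + nu + lc up to rounding

# Placeholder daily leaf wetness durations (hours)
predicted = [5.0, 7.5, 3.0, 9.0, 6.5]
observed  = [6.0, 7.0, 2.5, 10.0, 6.0]
msd, sb, nu, lc = msd_components(predicted, observed)
print(msd, sb + nu + lc)                # the two values should match
```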

Evaluation of the Simulated PM2.5 Concentrations using Air Quality Forecasting System according to Emission Inventories - Focused on China and South Korea (대기질 예보 시스템의 입력 배출목록에 따른 PM2.5 모의 성능 평가 - 중국 및 한국을 중심으로)

  • Choi, Ki-Chul;Lim, Yongjae;Lee, Jae-Bum;Nam, Kipyo;Lee, Hansol;Lee, Yonghee;Myoung, Jisu;Kim, Taehee;Jang, Limseok;Kim, Jeong Soo;Woo, Jung-Hun;Kim, Soontae;Choi, Kwang-Ho
    • Journal of Korean Society for Atmospheric Environment
    • /
    • v.34 no.2
    • /
    • pp.306-320
    • /
    • 2018
  • An emission inventory is an essential component for improving the performance of an air quality forecasting system. This study evaluated simulated daily mean PM2.5 concentrations in South Korea and China for a one-year period (September 2016 to August 2017) using an air quality forecasting system driven by the E2015 emission inventory (predicted CAPSS 2015 for South Korea and KORUS 2015 v1 for the other regions). To identify the impact of emissions on the simulated PM2.5, the E2010 emission inventory (CAPSS 2010 and MIX 2010) was also applied under the same forecasting conditions. The simulated daily mean PM2.5 concentrations showed generally acceptable performance with both emission data sets for China (IOA > 0.87, R > 0.87) and South Korea (IOA > 0.84, R > 0.76). The impact of the change in emission inventory on the simulated daily mean PM2.5 concentrations was quantitatively estimated. In China, the normalized mean bias (NMB) was 5.5% and 26.8% under E2010 and E2015, respectively. The tendency to overestimate concentrations was larger in north-central and southeastern China than in other regions under both E2010 and E2015. Seasonal NMB was higher in the non-winter seasons (28.3% for E2010 to 39.3% for E2015) than in winter (-0.5% for E2010 to 8.0% for E2015). In South Korea, the NMB was -5.4% and 2.8% for all days under E2010 and E2015, respectively, but -15.2% and -11.2% for days below 40 µg/m³, a subset chosen to minimize the impact of long-range transport. For all days, simulated PM2.5 concentrations were overestimated in Seoul, Incheon, the southern part of Gyeonggi, and Daejeon, and underestimated in other regions such as Jeonbuk, Ulsan, Busan, and Gyeongnam, regardless of the emission inventory applied. Our results suggest that an updated emission inventory reflecting the current status of emission amounts and their spatio-temporal allocation is needed to improve the performance of air quality forecasting.
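The evaluation metrics reported above, normalized mean bias (NMB) and the index of agreement (IOA), follow standard formulas; the sketch below with placeholder concentrations illustrates them and is not the forecasting system's evaluation script.

```python
import numpy as np

def nmb(sim, obs):
    """Normalized mean bias (%) of simulated vs. observed concentrations."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 100.0 * np.sum(sim - obs) / np.sum(obs)

def ioa(sim, obs):
    """Willmott's index of agreement (IOA), bounded by 0 and 1 (1 = perfect)."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    denom = np.sum((np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    return 1.0 - np.sum((sim - obs) ** 2) / denom

# Placeholder daily mean PM2.5 concentrations (ug/m3)
obs = [35.0, 42.0, 28.0, 55.0, 60.0]
sim = [38.0, 40.0, 33.0, 50.0, 66.0]
print(f"NMB = {nmb(sim, obs):.1f}%, IOA = {ioa(sim, obs):.3f}")
```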

The Competition Policy and Major Industrial Policy-Making in the 1980's (1980년대 주요산업정책(主要産業政策) 결정(決定)과 경쟁정책(競爭政策): 역할(役割)과 한계(限界))

  • Choi, Jong-won
    • KDI Journal of Economic Policy
    • /
    • v.13 no.2
    • /
    • pp.97-127
    • /
    • 1991
  • This paper investigates the roles and the limitations of the Korean antitrust agencies, the Office of Fair Trade (OFT) and the Fair Trade Commission (FTC), in the making of the major industrial policies of the 1980's. The Korean antitrust agencies played only a minimal role in three major industrial policy-making issues of the 1980's: the enactment of the Industrial Development Act (IDA), the Industrial Rationalization Measures under the IDA, and the Industrial Readjustment Measures on Consolidation of Large Insolvent Enterprises based on the revised Tax Exemption and Reduction Control Act. As causes of this performance bias in the Korean antitrust system, this paper considers five factors drawn from the literature on implementation failure: ambiguous and insufficient statutory provisions of the Monopoly Regulation and Fair Trade Act (MRFTA); lack of resources; biased attitudes and motivations of the staff of the OFT and the FTC; bureaucratic incapability; and widespread misunderstanding about the roles and functions of the antitrust system in Korea. Among these five factors, bureaucratic incompetence and a lack of understanding of the roles and functions of the antitrust system across various policy implementation environments are regarded as the most important. Most staff members did not have enough educational training during their school years to engage in antitrust and fair trade policy-making. Furthermore, the high rate of staff turnover due to a mandatory personnel transfer system has prevented the accumulation of the knowledge and skills required for pursuing complicated structural antitrust enforcement. The limited capability of the OFT has put the agency in a disadvantaged position in negotiating with other economic ministries. The OFT has not provided plausible counter-arguments, based on sound economic theories, against other economic ministries' intensive market interventions in the name of rationalization and readjustment of industries. If the staff members of the antitrust agencies lacked a substantive understanding of antitrust and fair trade policy, the rest of the government agencies must have had serious problems in understanding the correct roles and functions of the antitrust system. The policy environment of the Korean antitrust system, including other economic ministries, the Deputy Prime Minister, and President Chun, has tended to conceptualize the OFT more as an agency aiming only at fair trade policy and less as an agency that should also enforce structural monopoly regulation. Based on this assessment of the performance of the Korean antitrust system, this paper evaluates current reform proposals for the MRFTA. The inclusion of the regulation of conglomerate mergers and of business divestiture orders may be a desirable revision, giving the MRFTA more complete provisions. However, given deficient staff expertise and the unfavorable policy environment, it would be too optimistic and naive to expect that the inclusion of these provisions alone could improve the performance of the Korean antitrust system. In its conclusion, this paper offers several policy recommendations for the Korean antitrust system that would secure the stable development and accumulation of antitrust expertise among its staff and sufficient understanding of, and conformity with, its antitrust goals and functions from its policy environment.


DEVELOPMENT OF SAFETY-BASED LEVEL-OF-SERVICE CRITERIA FOR ISOLATED SIGNALIZED INTERSECTIONS (독립신호 교차로에서의 교통안전을 위한 서비스수준 결정방법의 개발)

  • Dr. Tae-Jun Ha
    • Proceedings of the KOR-KST Conference
    • /
    • 1995.02a
    • /
    • pp.3-32
    • /
    • 1995
  • The Highway Capacity Manual specifies procedures for evaluating intersection performance in terms of delay per vehicle. What is lacking in the current methodology is a comparable quantitative procedure for assessing the safety-based level of service provided to motorists. The objective of the research described herein was to develop a computational procedure for evaluating the safety-based level of service of signalized intersections based on the relative hazard of alternative intersection designs and signal timing plans. Conflict opportunity models were developed for those crossing, diverging, and stopping maneuvers which are associated with left-turn and rear-end accidents. Safety-based level-of-service criteria were then developed based on the distribution of conflict opportunities computed from the developed models. A case study evaluation of the level-of-service analysis methodology revealed that the developed safety-based criteria were not as sensitive to changes in prevailing traffic, roadway, and signal timing conditions as the traditional delay-based measure. However, the methodology did permit a quantitative assessment of the trade-off between delay reduction and safety improvement. The Highway Capacity Manual (HCM) specifies procedures for evaluating intersection performance in terms of a wide variety of prevailing conditions such as traffic composition, intersection geometry, traffic volumes, and signal timing (1). At the present time, however, performance is only measured in terms of delay per vehicle. This is a parameter which is widely accepted as a meaningful and useful indicator of the efficiency with which an intersection is serving traffic needs. What is lacking in the current methodology is a comparable quantitative procedure for assessing the safety-based level of service provided to motorists. For example, it is well-known that the change from permissive to protected left-turn phasing can reduce left-turn accident frequency. However, the HCM only permits a quantitative assessment of the impact of this alternative phasing arrangement on vehicle delay. It is left to the engineer or planner to subjectively judge the level of safety benefits, and to evaluate the trade-off between the efficiency and safety consequences of the alternative phasing plans. Numerous examples of other geometric design and signal timing improvements could also be given. At present, the principal methods available to the practitioner for evaluating the relative safety at signalized intersections are: a) the application of engineering judgement, b) accident analyses, and c) traffic conflicts analysis. Reliance on engineering judgement has obvious limitations, especially when placed in the context of the elaborate HCM procedures for calculating delay. Accident analyses generally require some type of before-after comparison, either for the case study intersection or for a large set of similar intersections. In either situation, there are problems associated with compensating for regression-to-the-mean phenomena (2), as well as obtaining an adequate sample size. Research has also pointed to potential bias caused by the way in which exposure to accidents is measured (3, 4). Because of the problems associated with traditional accident analyses, some have promoted the use of the traffic conflicts technique (5). However, this procedure also has shortcomings in that it requires extensive field data collection and trained observers to identify the different types of conflicts occurring in the field.
The objective of the research described herein was to develop a computational procedure for evaluating the safety-based level of service of signalized intersections that would be compatible and consistent with that presently found in the HCM for evaluating efficiency-based level of service as measured by delay per vehicle (6). The intent was not to develop a new set of accident prediction models, but to design a methodology to quantitatively predict the relative hazard of alternative intersection designs and signal timing plans.
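For context, the delay-based level of service that the proposed safety criteria are meant to complement maps average delay per vehicle to a letter grade. The sketch below uses control-delay thresholds in the style of the later HCM 2000 signalized-intersection procedure; the 1995-era manual referenced by this paper used different (stopped-delay) thresholds, so treat this only as an illustrative mapping, not the paper's method.

```python
def delay_based_los(control_delay_s_per_veh: float) -> str:
    """Map average control delay (s/veh) at a signalized intersection to a
    level-of-service letter, using HCM 2000-style thresholds (illustrative only)."""
    thresholds = [(10, "A"), (20, "B"), (35, "C"), (55, "D"), (80, "E")]
    for limit, grade in thresholds:
        if control_delay_s_per_veh <= limit:
            return grade
    return "F"

print(delay_based_los(27.4))   # -> "C"
```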


A 13b 100MS/s 0.70㎟ 45nm CMOS ADC for IF-Domain Signal Processing Systems (IF 대역 신호처리 시스템 응용을 위한 13비트 100MS/s 0.70㎟ 45nm CMOS ADC)

  • Park, Jun-Sang;An, Tai-Ji;Ahn, Gil-Cho;Lee, Mun-Kyo;Go, Min-Ho;Lee, Seung-Hoon
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.53 no.3
    • /
    • pp.46-55
    • /
    • 2016
  • This work proposes a 13b 100 MS/s 45 nm CMOS ADC with high dynamic performance for IF-domain high-speed signal processing systems, based on a four-step pipeline architecture to optimize operating specifications. The SHA employs a wideband high-speed sampling network to properly process high-frequency input signals exceeding the sampling frequency. The SHA and MDACs adopt a two-stage amplifier with a gain-boosting technique to obtain the required high DC gain and wide signal-swing range, while the amplifier and bias circuits repeatedly use the same unit-size devices to minimize device mismatch. Furthermore, a separate analog power supply voltage for the on-chip current and voltage references minimizes performance degradation caused by undesired noise and interference from adjacent functional blocks during high-speed operation. The proposed ADC occupies an active die area of 0.70 mm², using various process-insensitive layout techniques to minimize the effects of physical process imperfections. The prototype ADC in a 45 nm CMOS demonstrates a measured DNL and INL within 0.77 LSB and 1.57 LSB, with a maximum SNDR and SFDR of 64.2 dB and 78.4 dB at 100 MS/s, respectively. The ADC is implemented with long-channel devices rather than the minimum channel-length devices available in this CMOS technology in order to process the wide input range of 2.0 VPP required by the system and to obtain high dynamic performance in IF-domain input signal bands. The ADC consumes 425.0 mW with a single analog supply voltage of 2.5 V and two digital supply voltages of 2.5 V and 1.1 V.
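SNDR and SFDR of an ADC are commonly estimated from an FFT of a coherently sampled single-tone output record. The sketch below is a generic single-tone analysis on a synthetic 13-bit-quantized sine, not the authors' measurement setup; sample rate, record length, and input bin are illustrative assumptions.

```python
import numpy as np

def sndr_sfdr(samples):
    """Estimate SNDR and SFDR (dB) from a coherently sampled single-tone record."""
    x = np.asarray(samples, float)
    x = x - x.mean()                              # remove DC
    spec = np.abs(np.fft.rfft(x)) ** 2            # one-sided power spectrum
    spec[0] = 0.0                                 # exclude residual DC
    k = int(np.argmax(spec))                      # fundamental bin
    p_signal = spec[k]
    p_rest = spec.sum() - p_signal                # total noise + distortion power
    sndr = 10 * np.log10(p_signal / p_rest)
    spur = np.max(np.delete(spec, k))             # largest non-fundamental component
    sfdr = 10 * np.log10(p_signal / spur)
    return sndr, sfdr

# Synthetic test: 100 MS/s, 4096-point record, prime input bin for coherent sampling
fs, n, fin_bin = 100e6, 4096, 997
t = np.arange(n) / fs
sine = 0.999 * np.sin(2 * np.pi * (fin_bin * fs / n) * t)
quantized = np.round(sine * 2**12) / 2**12        # idealized 13-bit quantizer
print(sndr_sfdr(quantized))
```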

Search for an archaic form of Jain-Danoje - Focusing on 'Yeowonmoo' and 'Hojanggut' - (자인단오제의 고형(古形)에 관한 탐색 - '여원무'와 '호장굿'을 중심으로 -)

  • Han, Yang-myung
    • (The) Research of the performance art and culture
    • /
    • no.19
    • /
    • pp.5-33
    • /
    • 2009
  • The modern trajectory of Jain-Danoje is little different from that of most folk performances, which have been restored and reconstructed since the 1960s against the background of intangible cultural heritage designation and the National Folk Arts Contest. Generally, these folk performances were decontextualized in the course of extinction and reappearance, and recontextualized through new orientations toward tradition. The performances were also interpreted differently and transformed by those who led their reappearance. Jain-Danoje took its present regular form at the time it was designated as a cultural heritage in the 1970s. However, today's Jain-Danoje clearly differs from its last appearance in 1936 and from the accounts in some literature and the Jainhyun-eupji. Such differences likely stem from the process of reproduction. From this perspective, I investigated old literature, early reports, and the current text. In particular, by comparing past and present, I show the considerable changes that have occurred in the Yeowonmu and the Hojanggut, which play the central role in configuring the festival's identity. As a result of this examination, today's forms of the Yeowonmu and the Hojanggut are texts created with intangible cultural heritage designation and the National Folk Arts Contest in mind. These texts were reproduced without an understanding of the structure and currents of the folk festival, or of the state of the performance as transmitted in premodern society. Some intellectuals searched for an archaic form of Jain-Danoje based only on the Jainhyun-eupji compiled in 1895, setting aside the other editions of the Jainhyun-eupji. Moreover, because of this biased understanding, they could not grasp the meaning of the religious service for Hanjanggun, nor could they see the facts of the Yeowonmoo. In addition, they regarded the 'o-sin' led by the Hojang as a fancy-dress parade in a carnival; since it is recognized as a component of Jain-Danoje, this produced yet another text that differs from the festival's own tradition.

The Effect of Common Features on Consumer Preference for a No-Choice Option: The Moderating Role of Regulatory Focus (재몰유선택적정황하공동특성대우고객희호적영향(在没有选择的情况下共同特性对于顾客喜好的影响): 조절초점적조절작용(调节焦点的调节作用))

  • Park, Jong-Chul;Kim, Kyung-Jin
    • Journal of Global Scholars of Marketing Science
    • /
    • v.20 no.1
    • /
    • pp.89-97
    • /
    • 2010
  • This study examines the effects of common features on a no-choice option with respect to regulatory focus theory. The primary interest is in three factors and their interrelationship: common features, the no-choice option, and regulatory focus. Prior studies have compiled a vast body of research in these areas. First, the "common features effect" has been observed by many noted marketing researchers. Tversky (1972) proposed the seminal theory, the EBA model: elimination by aspects. According to this theory, consumers are prone to focus only on unique features during comparison processing, thereby dismissing any common features as redundant information. Recently, however, more provocative ideas have challenged the EBA model by asserting that common features really do affect consumer judgment. Chernev (1997) first reported that adding common features mitigates the choice gap because of the increasing perception of similarity among alternatives. Later, however, Chernev (2001) published a critical study against his prior perspective, proposing that common features may impose a cognitive load on consumers, who may then prefer heuristic processing over systematic processing. This brings one question to the forefront: Do "common features" affect consumer choice? If so, what are the concrete effects? This study tries to answer the question with respect to the "no-choice" option and regulatory focus. Second, some researchers hold that the no-choice option is another best alternative for consumers, who are likely to avoid having to choose in the context of knotty trade-off settings or mental conflicts. Hope for the future may also increase the no-choice option in the context of optimism or the expectancy of a more satisfactory alternative appearing later. Other issues reported in this domain are time pressure, consumer confidence, and the number of alternatives (Dhar and Nowlis 1999; Lin and Wu 2005; Zakay and Tsal 1993). This study casts the no-choice option in yet another perspective: the interactive effects between common features and regulatory focus. Third, "regulatory focus theory" is a very popular theme in recent marketing research. It suggests that consumers have two opposing focal goals: promotion vs. prevention. A promotion focus deals with the concepts of hope, inspiration, achievement, or gain, whereas a prevention focus involves duty, responsibility, safety, or loss aversion. Thus, while consumers with a promotion focus tend to take risks for gain, the same does not hold true for a prevention focus. Regulatory focus theory predicts consumers' emotions, creativity, attitudes, memory, performance, and judgment, as documented in a vast field of marketing and psychology articles. Exploring consumer choice and common features from this perspective is a somewhat novel viewpoint in the area of regulatory focus. These reviews inspire this study of the possible interaction between regulatory focus and common features with a no-choice option. Specifically, adding common features rather than omitting them may increase the no-choice ratio in the choice setting only for prevention-focused consumers, with the opposite holding for promotion-focused consumers. The reasoning is that when prevention-focused consumers come in contact with common features, they may perceive higher similarity among the alternatives. This conflict among similar options would increase the no-choice ratio.
Promotion-focused consumers, however, may perceive common features as a cue for confirmation bias. Their confirmatory processing would then make their prior preference more robust, and the no-choice ratio may shrink. This logic is tested in two experiments. The first is a 2 × 2 between-subjects design (common features present or absent × regulatory focus) using digital cameras as the stimulus, a product very familiar to the young subjects. The regulatory focus variable was median-split on an eleven-item measure. Common features included zoom, weight, memory, and battery, whereas the other two attributes (pixels and price) were unique features. The results supported our hypothesis that adding common features increased the no-choice ratio only for prevention-focused consumers, not for those with a promotion focus. These results confirm our hypothesis of an interaction between regulatory focus and common features. Prior research had suggested that including common features has an effect on consumer choice, but this study shows that common features affect choice differently across consumer segments. The second experiment replicated the results of the first. It was identical to the first except in two respects: a priming manipulation of regulatory focus and a different stimulus. For the promotion focus condition, subjects had to write an essay using words such as profit, inspiration, pleasure, achievement, development, hedonic, change, and pursuit; for the prevention condition, they used words such as persistence, safety, protection, aversion, loss, responsibility, and stability. The room-for-rent stimulus had common features (sunshine, facilities, ventilation) and unique features (distance/time and building condition). These attributes implied various levels and valences to replicate the prior experiment. Our hypothesis was again supported, and the interaction effect between regulatory focus and common features was significant. Thus, these studies show the dual effects of common features on consumer choice for a no-choice option. Adding common features may enhance or mitigate no-choice, contradictory as it may sound. Under a prevention focus, adding common features is likely to enhance the no-choice ratio because of increased mental conflict; under a promotion focus, it is prone to shrink the ratio, perhaps because of a "confirmation bias." The research has practical and theoretical implications for marketers, who may need to consider common features carefully in a display context according to consumer segmentation (i.e., promotion vs. prevention focus). Theoretically, the results point to a meaningful moderator variable between common features and the no-choice option, in that the effect is partly dependent on regulatory focus. This variable corresponds not only to a chronic perspective but also to a situational perspective in our hypothesis domain. Finally, in light of some shortcomings in the research, such as overlooked attribute importance, the low ratio of no-choice, and external validity issues, we hope it inspires future studies to explore the little-known world of the "no-choice option."
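A minimal sketch of how the common features × regulatory focus interaction on the no-choice rate could be tested. The data, cell probabilities, and variable names are illustrative assumptions shaped like the stated hypothesis; this is not the paper's dataset or analysis script.

```python
# Hypothetical test: does adding common features raise the no-choice rate for
# prevention-focused subjects but not for promotion-focused subjects?
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_per_cell = 60
cells = [(cf, focus) for cf in (0, 1) for focus in ("promotion", "prevention")]
# Illustrative no-choice probabilities per cell (made up, following the hypothesis)
p_no_choice = {(0, "promotion"): 0.20, (1, "promotion"): 0.15,
               (0, "prevention"): 0.20, (1, "prevention"): 0.40}

rows = []
for cf, focus in cells:
    outcomes = rng.binomial(1, p_no_choice[(cf, focus)], n_per_cell)
    rows.extend({"common": cf, "focus": focus, "no_choice": o} for o in outcomes)
df = pd.DataFrame(rows)

# Logistic regression with the common x focus interaction term
model = smf.logit("no_choice ~ common * C(focus)", data=df).fit(disp=0)
print(model.summary())
```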