• Title/Summary/Keyword: likelihood


The Effects of Evaluation Attributes of Cultural Tourism Festivals on Satisfaction and Behavioral Intention (문화관광축제 방문객의 평가속성 만족과 행동의도에 관한 연구 - 2006 광주김치대축제를 중심으로 -)

  • Kim, Jung-Hoon
    • Journal of Global Scholars of Marketing Science
    • /
    • v.17 no.2
    • /
    • pp.55-73
    • /
    • 2007
  • Festivals are an indispensable feature of cultural tourism (Formica & Uysal, 1998). Cultural tourism festivals are increasingly being used as instruments for promoting tourism and boosting the regional economy, so much research related to festivals has been undertaken from a variety of perspectives. Plans to revisit a particular festival have been viewed as an important research topic in both academia and the tourism industry, and festivals have frequently been labeled as cultural events. Cultural tourism festivals have become a crucial component of the attractiveness of tourism destinations (Prentice, 2001). As a result, a considerable number of tourist studies have been carried out on diverse cultural tourism festivals (Backman et al., 1995; Crompton & McKay, 1997; Park, 1998; Clawson & Knetch, 1996). Much of the previous literature empirically shows a close linkage between tourist satisfaction and behavioral intention at festivals. The main objective of this study is to investigate the effects of evaluation attributes of cultural tourism festivals on satisfaction and behavioral intention. To accomplish this objective, evaluation items for cultural tourism festivals were identified through a literature review and an empirical study. Using a varimax rotation with Kaiser normalization, the research obtained four factors from the 18 evaluation attributes of cultural tourism festivals. Some empirical studies have examined the relationship between behavioral intention and actual behavior. To understand the link between tourist satisfaction and behavioral intention, this study suggests five hypotheses and a hypothesized model. The analysis is based on primary data collected from visitors who participated in the 2006 Gwangju Kimchi Festival. In total, 700 self-administered questionnaires were distributed and 561 usable questionnaires were obtained. Respondents rated the 18 satisfaction items on a scale from 1 (strongly disagree) to 7 (strongly agree). 
Dimensionality and stability of the scale were evaluated by a factor analysis with varimax rotation. Four factors emerged with eigenvalues greater than 1, explaining 66.40% of the total variance, with Cronbach's alpha ranging from 0.774 to 0.876. The four factors were named: advertisement and guides, programs, food and souvenirs, and convenient facilities. To test and estimate the hypothesized model, a two-step approach with an initial measurement model and a subsequent structural model was used for structural equation modeling, conducted with the AMOS 4.0 analysis package. In estimating the model, the maximum likelihood procedure was used. The chi-square test, the most common model goodness-of-fit test, was applied; in line with the structural equation modeling literature, additional fit indexes were used to assess the suggested model: the goodness-of-fit index (GFI) and root mean square error of approximation (RMSEA) as absolute fit indexes, and the normed fit index (NFI) and non-normed fit index (NNFI) as incremental fit indexes. The results of t-tests and ANOVAs revealed significant differences (0.05 level); therefore H1 (tourist satisfaction levels differ by demographic traits) is supported. According to the multiple regression analysis and the AMOS results, H2 (tourist satisfaction positively influences revisit intention), H3 (tourist satisfaction positively influences word of mouth), H4 (evaluation attributes of cultural tourism festivals influence tourist satisfaction), and H5 (tourist satisfaction positively influences behavioral intention) are also supported. The conclusions of this study are as follows. First, there were differences in satisfaction levels in accordance with the demographic characteristics of visitors; not all visitors had the same degree of satisfaction with their cultural tourism festival experience. 
Therefore it is necessary to understand the satisfaction of tourists if the experiences provided are to meet their expectations, and in making festival plans, organizers should consider demographic variables when explaining and segmenting visitors to cultural tourism festivals. Second, satisfaction with the evaluation attributes of cultural tourism festivals had a significant direct impact on visitors' intention to revisit such festivals and on the word-of-mouth publicity they shared. The results indicate that visitor satisfaction is a significant antecedent of the intention to revisit, so festival organizers should strive to forge long-term relationships with visitors. It is also necessary to understand how the intention to revisit a festival changes over time and to identify the critical satisfaction factors. Third, it was confirmed that behavioral intention is enhanced by satisfaction. The strong link between satisfaction and the behavioral intentions of visitors is ensured by high-quality advertisement and guides, programs, food and souvenirs, and convenient facilities. Thus, examining revisit intention from a temporal viewpoint may be of great significance for both practical and theoretical reasons. Additionally, festival organizers should give special attention to visitor satisfaction, as satisfied visitors are more likely to return sooner. The findings of this research have several practical implications for festival managers. The promotion of cultural festivals should be based on an understanding of tourist satisfaction for the long-term success of tourism, and this study can help managers carry out this task in a more informed and strategic manner by examining the effects of demographic traits on the level of tourist satisfaction and on behavioral intention. In other words, differentiated marketing strategies should be stressed and executed by the relevant parties. 
The limitations of this study are as follows: the results cannot be generalized to other cultural tourism festivals because many different kinds of festivals were not explored. A future study should provide a comparative analysis of other festivals and of different visitor segments. Further efforts should also be directed toward developing more comprehensive temporal models that can explain the behavioral intentions of tourists.
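The factor-extraction step described above (varimax rotation with Kaiser normalization, retaining factors with eigenvalues greater than 1) can be sketched in a few lines of numpy. This is a generic illustration of the rotation algorithm, not the authors' code, and the 18x4 loading matrix is a random stand-in rather than the festival data.

```python
import numpy as np

def varimax(loadings, max_iter=100, tol=1e-6):
    """Orthogonal varimax rotation with Kaiser normalization: rows are
    normalized to unit communality, rotated to maximize the variance of
    squared loadings per factor, then scaled back."""
    h = np.sqrt((loadings ** 2).sum(axis=1))   # square roots of communalities
    L = loadings / h[:, None]                  # Kaiser normalization
    p, k = L.shape
    R = np.eye(k)
    crit_old = 0.0
    for _ in range(max_iter):
        Lr = L @ R
        # SVD step of the standard varimax criterion update
        u, s, vt = np.linalg.svd(
            L.T @ (Lr ** 3 - Lr @ np.diag((Lr ** 2).sum(axis=0)) / p))
        R = u @ vt                             # best orthogonal rotation
        crit = s.sum()
        if crit - crit_old < tol:
            break
        crit_old = crit
    return (L @ R) * h[:, None]                # undo the normalization

# Hypothetical 18-item x 4-factor loading matrix (random stand-in)
rng = np.random.default_rng(1)
A = rng.normal(size=(18, 4))
rotated = varimax(A)
```

Because the rotation is orthogonal, each item's communality (row sum of squared loadings) is preserved; only how the variance is distributed across the four factors changes.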


Development of Quantification Methods for the Myocardial Blood Flow Using Ensemble Independent Component Analysis for Dynamic $H_2^{15}O$ PET (동적 $H_2^{15}O$ PET에서 앙상블 독립성분분석법을 이용한 심근 혈류 정량화 방법 개발)

  • Lee, Byeong-Il;Lee, Jae-Sung;Lee, Dong-Soo;Kang, Won-Jun;Lee, Jong-Jin;Kim, Soo-Jin;Choi, Seung-Jin;Chung, June-Key;Lee, Myung-Chul
    • The Korean Journal of Nuclear Medicine
    • /
    • v.38 no.6
    • /
    • pp.486-491
    • /
    • 2004
  • Purpose: Factor analysis and independent component analysis (ICA) have been used for handling dynamic image sequences. The theoretical advantages of a newly suggested ICA method, ensemble ICA, led us to consider applying this method to the analysis of dynamic myocardial $H_2^{15}O$ PET data. In this study, we quantified patients' blood flow using the ensemble ICA method. Materials and Methods: Twenty subjects underwent $H_2^{15}O$ PET scans using an ECAT EXACT 47 scanner and myocardial perfusion SPECT using a Vertex scanner. After transmission scanning, dynamic emission scans were initiated simultaneously with the injection of $555{\sim}740$ MBq $H_2^{15}O$. Hidden independent components can be extracted from the observed mixed data (PET images) by means of ICA algorithms. Ensemble learning is a variational Bayesian method that provides an analytical approximation to the parameter posterior using a tractable distribution. The variational approximation forms a lower bound on the ensemble likelihood, and maximization of the lower bound is achieved by minimizing the Kullback-Leibler divergence between the true posterior and the variational posterior. In this study, the posterior pdf was approximated by a rectified Gaussian distribution to incorporate a non-negativity constraint, which is suitable for dynamic images in nuclear medicine. Blood flow was measured in nine regions: the apex, four areas in the mid wall, and four areas in the base wall. Myocardial perfusion SPECT scores and angiography results were compared with the regional blood flow. Results: Major cardiac components were separated successfully by the ensemble ICA method, and blood flow could be estimated in 15 of the 20 patients. Mean myocardial blood flow was $1.2{\pm}0.40$ ml/min/g at rest and $1.85{\pm}1.12$ ml/min/g in the stress state. Blood flow values obtained by an operator on two different occasions were highly correlated (r=0.99). 
In the myocardium component image, the image contrast between the left ventricle and the myocardium was 1:2.7 on average. Perfusion reserve was significantly different between regions with and without stenosis detected by coronary angiography (P<0.01). In the 66 segments with stenosis confirmed by angiography, the segments with a reversible perfusion decrease on perfusion SPECT showed lower perfusion reserve values on $H_2^{15}O$ PET. Conclusions: Myocardial blood flow could be estimated using an ICA method with ensemble learning. We suggest that ensemble ICA incorporating a non-negativity constraint is a feasible method for handling dynamic image sequences obtained by nuclear medicine techniques.
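The ensemble-learning step above hinges on minimizing the Kullback-Leibler divergence between the true posterior and a tractable variational posterior. As a minimal numerical illustration (univariate Gaussians rather than the rectified Gaussians used in the paper, and a grid search rather than the paper's analytic updates), the closed-form KL divergence and a crude variational fit:

```python
import numpy as np

def kl_gaussian(m1, s1, m2, s2):
    """KL( N(m1, s1^2) || N(m2, s2^2) ) in closed form."""
    return np.log(s2 / s1) + (s1 ** 2 + (m1 - m2) ** 2) / (2 * s2 ** 2) - 0.5

# Crude variational fit: pick the Gaussian q on a grid that minimizes
# KL(q || p) against a fixed "true" posterior p = N(1.0, 0.5^2)
grid = [(m, s) for m in np.linspace(-2, 2, 41) for s in np.linspace(0.1, 2, 39)]
best = min(grid, key=lambda q: kl_gaussian(q[0], q[1], 1.0, 0.5))
```

KL is zero exactly when the variational distribution matches the target and positive otherwise, which is why driving it down tightens the lower bound on the likelihood.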

Prediction of Salvaged Myocardium in Patients with Acute Myocardial Infarction after Primary Percutaneous Coronary Angioplasty using early Thallium-201 Redistribution Myocardial Perfusion Imaging (급성심근경색증의 일차적 관동맥성형술 후 조기 Tl-201 재분포영상을 이용한 구조심근 예측)

  • Choi, Joon-Young;Yang, You-Jung;Choi, Seung-Jin;Yeo, Jeong-Seok;Park, Seong-Wook;Song, Jae-Kwan;Moon, Dae-Hyuk
    • The Korean Journal of Nuclear Medicine
    • /
    • v.37 no.4
    • /
    • pp.219-228
    • /
    • 2003
  • Purpose: The amount of salvaged myocardium is an important prognostic factor in patients with acute myocardial infarction (MI). We investigated whether early Tl-201 SPECT imaging could be used to predict salvaged myocardium and functional recovery in acute MI after primary PTCA. Materials and Methods: In 36 patients with a first acute MI treated with primary PTCA, serial echocardiography and Tl-201 SPECT imaging ($5.8{\pm}2.1$ days after PTCA) were performed. Regional wall motion and perfusion were quantified on a 16-segment myocardial model with 5-point and 4-point scoring systems, respectively. Results: Wall motion was improved in 78 of the 212 dyssynergic segments on 1-month follow-up echocardiography and in 97 on 7-month follow-up echocardiography; these segments were regarded as salvaged myocardium. The areas under the receiver operating characteristic curves of the Tl-201 perfusion score for detecting salvaged myocardial segments were 0.79 for the 1-month follow-up and 0.83 for the 7-month follow-up. The sensitivity and specificity of Tl-201 redistribution images with an optimum cutoff of 40% of peak thallium activity for detecting salvaged myocardium were 84.6% and 55.2% for the 1-month follow-up, and 87.6% and 64.3% for the 7-month follow-up, respectively. There was a linear relationship between the percentage of peak thallium activity on early redistribution imaging and the likelihood of segmental functional improvement 7 months after reperfusion. Conclusion: Tl-201 myocardial perfusion SPECT imaging performed early, within 10 days after reperfusion, can be used to predict salvaged myocardium and functional recovery with high sensitivity during the 7 months following primary PTCA in patients with acute MI.
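The cutoff analysis above (sensitivity and specificity at 40% of peak thallium activity, plus areas under ROC curves) follows the standard recipe. A generic numpy sketch on synthetic scores, not the study's data; the rank-based AUC is the Mann-Whitney formulation:

```python
import numpy as np

def sens_spec(scores, labels, cutoff):
    """Sensitivity and specificity for 'score >= cutoff' predicting label 1
    (here, 1 would mean a salvaged segment)."""
    pred = scores >= cutoff
    pos, neg = labels == 1, labels == 0
    sensitivity = np.sum(pred & pos) / np.sum(pos)
    specificity = np.sum(~pred & neg) / np.sum(neg)
    return sensitivity, specificity

def auc(scores, labels):
    """Area under the ROC curve via ranks: the probability that a random
    positive scores above a random negative (ties count half)."""
    p, n = scores[labels == 1], scores[labels == 0]
    greater = (p[:, None] > n[None, :]).mean()
    ties = (p[:, None] == n[None, :]).mean()
    return greater + 0.5 * ties

# Synthetic '% of peak activity' scores for 10 hypothetical segments
scores = np.array([55, 70, 45, 80, 65, 30, 20, 35, 25, 38], dtype=float)
labels = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])
se, sp = sens_spec(scores, labels, cutoff=40)
```

Sweeping the cutoff trades sensitivity against specificity; the AUC summarizes discrimination independently of any single cutoff.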

A Study on Developing a VKOSPI Forecasting Model via GARCH Class Models for Intelligent Volatility Trading Systems (지능형 변동성트레이딩시스템개발을 위한 GARCH 모형을 통한 VKOSPI 예측모형 개발에 관한 연구)

  • Kim, Sun-Woong
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.2
    • /
    • pp.19-32
    • /
    • 2010
  • Volatility plays a central role in both academic and practical applications, especially in pricing financial derivative products and trading volatility strategies. This study presents a novel mechanism based on generalized autoregressive conditional heteroskedasticity (GARCH) models that can enhance the performance of intelligent volatility trading systems by predicting Korean stock market volatility more accurately. In particular, we embedded into our model the concept of volatility asymmetry documented widely in the literature. The newly developed Korean stock market volatility index of the KOSPI 200, the VKOSPI, is used as a volatility proxy. It is the price of a linear portfolio of KOSPI 200 index options and measures the effect of the expectations of dealers and option traders on stock market volatility over 30 calendar days. The KOSPI 200 index options market started in 1997 and has become the most actively traded market in the world; its trading volume of more than 10 million contracts a day is the highest of all stock index options markets. Therefore, analyzing the VKOSPI is of great importance in understanding the volatility inherent in option prices and can offer trading ideas for futures and options dealers. Using the VKOSPI as a volatility proxy avoids the statistical estimation problems associated with other measures of volatility, since the VKOSPI is the model-free expected volatility of market participants, calculated directly from transacted option prices. This study estimates symmetric and asymmetric GARCH models for the KOSPI 200 index from January 2003 to December 2006 by the maximum likelihood procedure. The asymmetric GARCH models include the GJR-GARCH model of Glosten, Jagannathan and Runkle, the exponential GARCH model of Nelson, and the power autoregressive conditional heteroskedasticity (ARCH) model of Ding, Granger and Engle; the symmetric model is the basic GARCH(1,1). 
Tomorrow's forecasted value and change direction of stock market volatility are obtained by recursive GARCH specifications from January 2007 to December 2009 and are compared with the VKOSPI. Empirical results indicate that negative unanticipated returns increase volatility more than positive return shocks of equal magnitude decrease it, indicating the existence of volatility asymmetry in the Korean stock market. The point value and change direction of tomorrow's VKOSPI are estimated and forecasted by the GARCH models. A volatility trading system is developed using the forecasted change direction of the VKOSPI: if tomorrow's VKOSPI is expected to rise, a long straddle or strangle position is established; a short straddle or strangle position is taken if the VKOSPI is expected to fall. Total profit is calculated as the cumulative sum of the VKOSPI percentage changes: if the forecasted direction is correct, the absolute value of the VKOSPI percentage change is added to the trading profit, and it is subtracted from the trading profit otherwise. For the in-sample period, the power ARCH model fits best on a statistical metric, the mean squared prediction error (MSPE), and the exponential GARCH model shows the highest mean correct prediction (MCP). The power ARCH model also fits best for the out-of-sample period and provides the highest probability of predicting the direction of tomorrow's VKOSPI change. Generally, the power ARCH model shows the best fit for the VKOSPI. All the GARCH models provide trading profits for the volatility trading system, and the exponential GARCH model shows the best performance, an annual profit of 197.56%, during the in-sample period. The GARCH models also produce trading profits during the out-of-sample period, except for the exponential GARCH model; there, the power ARCH model shows the largest annual trading profit of 38%. 
The volatility clustering and asymmetry found in this research reflect volatility non-linearity. This further suggests that combining the asymmetric GARCH models with artificial neural networks could significantly enhance the performance of the suggested volatility trading system, since artificial neural networks have been shown to model nonlinear relationships effectively.
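The maximum likelihood step for the symmetric GARCH(1,1) above can be sketched directly: write the conditional-variance recursion, form the Gaussian negative log-likelihood, and minimize it numerically. This is a generic illustration on simulated returns, not the VKOSPI data, and the starting values are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize

def garch11_nll(params, r):
    """Gaussian negative log-likelihood of a GARCH(1,1) process:
    sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}."""
    omega, alpha, beta = params
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
        return np.inf                      # positivity / stationarity constraints
    sigma2 = np.empty_like(r)
    sigma2[0] = r.var()                    # initialize at the sample variance
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return 0.5 * np.sum(np.log(2 * np.pi * sigma2) + r ** 2 / sigma2)

# Simulate returns with true parameters (omega, alpha, beta) = (0.1, 0.1, 0.8)
rng = np.random.default_rng(0)
n, (omega, alpha, beta) = 3000, (0.1, 0.1, 0.8)
r = np.empty(n)
s2 = omega / (1 - alpha - beta)            # unconditional variance
for t in range(n):
    r[t] = np.sqrt(s2) * rng.standard_normal()
    s2 = omega + alpha * r[t] ** 2 + beta * s2

# Maximum likelihood fit, as in the paper's estimation step
res = minimize(garch11_nll, x0=[0.05, 0.05, 0.85], args=(r,),
               method="Nelder-Mead")
```

The asymmetric variants (GJR, EGARCH, power ARCH) change only the variance recursion; the likelihood machinery stays the same.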

A Study of Guidelines for Genetic Counseling in Preimplantation Genetic Diagnosis (PGD) (착상전 유전진단을 위한 유전상담 현황과 지침개발을 위한 기초 연구)

  • Kim, Min-Jee;Lee, Hyoung-Song;Kang, Inn-Soo;Jeong, Seon-Yong;Kim, Hyon-J.
    • Journal of Genetic Medicine
    • /
    • v.7 no.2
    • /
    • pp.125-132
    • /
    • 2010
    • Purpose: Preimplantation genetic diagnosis (PGD), also known as embryo screening, is a pre-pregnancy technique used to identify genetic defects in embryos created through in vitro fertilization. PGD is considered a means of prenatal diagnosis of genetic abnormalities. PGD is used when one or both genetic parents have a known genetic abnormality; testing is performed on an embryo to determine whether it also carries the abnormality. The main advantage of PGD is the avoidance of selective pregnancy termination, as it imparts a high likelihood that the baby will be free of the disease under consideration. The application of PGD to genetic practice, reproductive medicine, and genetic counseling is becoming a key component of fertility practice because of the need to develop a custom PGD design for each couple. Materials and Methods: In this study, a survey on the content of genetic counseling in PGD was carried out via direct contact or e-mail with patients and specialists who had experienced PGD during the three months from February to April 2010. Results: A total of 91 persons responded to the survey: 60 patients (49 with a chromosomal disorder and 11 with a single-gene disorder) and 31 PGD specialists. Analysis of the survey results revealed that all respondents were well aware of the importance of genetic counseling in all steps of PGD, including planning, operation, and follow-up. The patient group responded that the possibility of unexpected results (51.7%), genetic risk assessment and recurrence risk (46.7%), reproduction options (46.7%), the procedure and limitations of PGD (43.3%), and information about PGD technology (35.0%) should be included in genetic counseling. Approximately 96.7% of specialists replied that a non-M.D. 
genetic counselor is necessary for effective and systematic genetic counseling in PGD, because it is difficult for physicians to offer satisfactory information to patients owing to a lack of counseling time and of specific knowledge of the disorders. Conclusions: The information from the survey provides important insight into the present overall situation of genetic counseling for PGD in Korea. The survey results demonstrated a general awareness that genetic counseling is essential for PGD, suggesting that appropriate genetic counseling may play an important role in the success of PGD. The establishment of genetic counseling guidelines for PGD may contribute to better planning and management strategies for PGD.

Effects of Nitrogen , Phosphorus and Potassium Application Rates on Oversown Hilly Pasture under Different Levels of Inclination II. Changes on the properties, chemical composition, uptake and recovery of mineral nutrients in mixed grass/clover sward (경사도별 3요소시용 수준이 겉뿌림 산지초지에 미치는 영향 II. 토양특성 , 목초의 무기양분함량 및 3요소 이용율의 변화)

  • 정연규;이종열
    • Journal of The Korean Society of Grassland and Forage Science
    • /
    • v.5 no.3
    • /
    • pp.200-206
    • /
    • 1985
  • This field experiment was undertaken to assess the effects of three levels of inclination ($10^{\circ},\;20^{\circ},\;and\;30^{\circ}$) and four rates of $N-P_2O_5-K_2O$ (0-0-0, 14-10-10, 28-25-25, and 42-40-40 kg/10a) on the establishment, yield and quality, and botanical composition of a mixed grass-clover sward. This second part is concerned with the soil chemical properties, concentrations and uptake of mineral nutrients, and percent recovery and efficiency of NPK. The results obtained after a two-year experiment are summarized as follows: 1. The pH, exchangeable Mg and Na, and base saturation in the surface soils were decreased by increasing the grade of inclination, whereas organic matter and available $P_2O_5$ tended to be increased; the changes in the Ca content and the equivalent ratio of $K/\sqrt{Ca+Mg}$ were not significant. The pH, exchangeable Ca and Mg, and base saturation were reduced by increasing the NPK rate, whereas available $P_2O_5$, exchangeable K, and the equivalent ratio of $K/\sqrt{Ca+Mg}$ tended to be increased. 2. The concentrations of mineral nutrients in grasses and weeds were not significantly affected by increasing the grade of slope in the hilly pasture, whereas the concentrations of N, K, and Mg in the legume were lowest on the steep slope, which seemed to be related to the low legume yield. The Mg concentrations of all forage species were below the critical level for good forage growth, suggesting a likelihood of grass tetany. 3. Increasing the NPK rate increased the N, K, and Na concentrations and decreased Mg and Ca in grasses. The P concentration was increased by P application, but there were no differences among the P rates applied. Increasing the NPK rate also slightly increased K and decreased Mg in the legume, but the contents of N, Ca, and Na were not affected. It did not affect the mineral contents of weeds, except for a slight increase in N. 
The mixed forages showed an increase in N and K contents, a decrease in Ca and Mg, and only slight changes in P and Na. 4. The percent recovery of N, P, and K by the mixed forages was greatly decreased by increasing the grade of inclination and the NPK rate; recovery was high in the order K > N > P. The efficiency of the NPK applications was likewise decreased. The efficiency of the NPK fertilizers absorbed was slightly decreased by the increased NPK rate but was not affected by the grade of inclination.


Spatial effect on the diffusion of discount stores (대형할인점 확산에 대한 공간적 영향)

  • Joo, Young-Jin;Kim, Mi-Ae
    • Journal of Distribution Research
    • /
    • v.15 no.4
    • /
    • pp.61-85
    • /
    • 2010
  • Introduction: Diffusion is the process by which an innovation is communicated through certain channels over time among the members of a social system (Rogers 1983). Bass (1969) suggested the Bass model describing the diffusion process. The Bass model assumes that potential adopters of an innovation are influenced by mass media and by word of mouth from communication with previous adopters. Various extensions of the Bass model have been proposed: some add a third factor affecting diffusion, while others develop multinational diffusion models stressing interactive effects on diffusion among several countries. We add a spatial factor to the Bass model as a third communication factor. In situations where the interaction between markets cannot be controlled, diffusion within a certain market may be influenced by diffusion in a contiguous market. The process by which a certain type of retail spreads through a particular market can be described by the retail life cycle, and the diffusion of retail follows the three phases of spatial diffusion: adoption of the innovation happens near the diffusion center first, spreads to the vicinity of the diffusing center, and is completed in peripheral areas in the saturation stage. We therefore expect the spatial effect to be important in describing the diffusion of domestic discount stores. Modeling: In this paper, we define a spatial diffusion model and apply it to the diffusion of discount stores. To define the spatial diffusion model, we expand the learning model (Kumar and Krishnan 2002) and separate the diffusion process in the diffusion center (market A) from the diffusion process in the vicinity of the diffusing center (market B). The proposed spatial diffusion model is shown in equations (1a) and (1b). 
Equation (1a) describes the diffusion process in the diffusion center and equation (1b) the process in the vicinity of the diffusing center. $$\array{{S_{i,t}=(p_i+q_i{\frac{Y_{i,t-1}}{m_i}})(m_i-Y_{i,t-1})\;i{\in}\{1,{\cdots},I\}\;(1a)}\\{S_{j,t}=(p_j+q_j{\frac{Y_{j,t-1}}{m_j}}+{\sum\limits_{i=1}^I}{\gamma}_{ij}{\frac{Y_{i,t-1}}{m_i}})(m_j-Y_{j,t-1})\;i{\in}\{1,{\cdots},I\},\;j{\in}\{I+1,{\cdots},I+J\}\;(1b)}}$$ We raise two research questions: (1) the proposed spatial diffusion model is more effective than the Bass model in describing the diffusion of discount stores; (2) the more similar the retail environment of the diffusing center is to that of the contiguous market, the larger the spatial effect of the diffusing center on diffusion in the vicinity of the contiguous market. To examine these questions, we first adopt the Bass model to estimate the diffusion of discount stores; next, the spatial diffusion model, in which a spatial factor is added to the Bass model, is used to estimate it; finally, by comparing the Bass model with the spatial diffusion model, we determine which model describes the diffusion of discount stores better. In addition, we investigate the relationship between the similarity of retail environments (conceptual distance) and the spatial factor impact with correlation analysis. Result and Implication: To examine the proposed spatial diffusion model, 347 domestic discount stores are used, and the nation is divided into five districts: Seoul-Gyeongin (SG), Busan-Gyeongnam (BG), Daegu-Gyeongbuk (DG), Gwangju-Jeonla (GJ), and Daejeon-Chungcheong (DC).

    In the result of the Bass model (I), the estimates of the innovation coefficient (p) and imitation coefficient (q) are 0.017 and 0.323, respectively, while the estimate of the market potential is 384. The result of the Bass model (II) for each district shows that the estimate of the innovation coefficient (p) in SG, 0.019, is the lowest among the five areas; this is because SG is the diffusion center. The estimate of the imitation coefficient (q) in BG, 0.353, is the highest. The imitation coefficient in the vicinity of the diffusing center, such as BG, is higher than that in the diffusing center because more information flows through various paths as diffusion progresses. In the result of the spatial diffusion model (IV), we can notice changes between the coefficients of the Bass model and those of the spatial diffusion model. Except for GJ, the estimates of the innovation and imitation coefficients in Model IV are lower than those in Model II; these changes are reflected in the spatial coefficient (${\gamma}$). From the spatial coefficient (${\gamma}$) we can infer that diffusion in the vicinity of the diffusing center is influenced by diffusion in the diffusing center. The difference between the Bass model (II) and the spatial diffusion model (IV) is statistically significant, with a ${\chi}^2$-distributed likelihood ratio statistic of 16.598 (p=0.0023), which implies that the spatial diffusion model is more effective than the Bass model in describing the diffusion of discount stores. 
So research question (1) is supported. In addition, correlation analysis revealed a statistically significant relationship between the similarity of retail environments and the spatial effect, so research question (2) is also supported.
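Equations (1a) and (1b) are straightforward to simulate. The sketch below uses the paper's Bass model (I) estimates for the center (p=0.017, q=0.323, m=384); the vicinity-market parameters and the spatial coefficient gamma are illustrative assumptions, not values from the paper.

```python
import numpy as np

def spatial_bass(p, q, gamma, m, T):
    """Discrete-time simulation of the spatial diffusion model (1a)/(1b):
    market 0 is the diffusion center; the other markets receive an extra
    word-of-mouth term gamma_j * Y_center / m_center from the center."""
    K = len(m)
    Y = np.zeros((T + 1, K))          # cumulative adopters per market
    for t in range(T):
        S = np.empty(K)
        # (1a): diffusion center
        S[0] = (p[0] + q[0] * Y[t, 0] / m[0]) * (m[0] - Y[t, 0])
        # (1b): vicinity markets, with cross-market influence from the center
        for j in range(1, K):
            S[j] = (p[j] + q[j] * Y[t, j] / m[j]
                    + gamma[j] * Y[t, 0] / m[0]) * (m[j] - Y[t, j])
        Y[t + 1] = Y[t] + S           # accumulate period adoptions
    return Y

# Center uses the paper's Bass (I) estimates; vicinity values are assumed
Y = spatial_bass(p=[0.017, 0.010], q=[0.323, 0.353],
                 gamma=[0.0, 0.10], m=[384.0, 200.0], T=60)
```

Setting gamma to zero recovers two independent Bass processes, which is exactly the nested comparison behind the likelihood ratio test reported above.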

  • The Prediction of Export Credit Guarantee Accident using Machine Learning (기계학습을 이용한 수출신용보증 사고예측)

    • Cho, Jaeyoung;Joo, Jihwan;Han, Ingoo
      • Journal of Intelligence and Information Systems
      • /
      • v.27 no.1
      • /
      • pp.83-102
      • /
      • 2021
    • The government recently announced various policies for developing the big-data and artificial intelligence fields, providing the public with a great opportunity through the disclosure of high-quality data held by public institutions. KSURE (Korea Trade Insurance Corporation) is a major public institution for financial policy in Korea, and the company is strongly committed to backing export companies with various systems. Nevertheless, there are still few realized business models based on big-data analyses. In this situation, this paper aims to develop a new business model for the ex-ante prediction of the likelihood of credit guarantee insurance accidents. We utilize internal data from KSURE, which supports export companies in Korea, apply machine learning models, and conduct a performance comparison among predictive models including logistic regression, Random Forest, XGBoost, LightGBM, and DNN (deep neural network). For decades, many researchers have tried to find better models to help predict bankruptcy, since ex-ante prediction is crucial for corporate managers, investors, creditors, and other stakeholders. The prediction of financial distress or bankruptcy originated with Smith (1930), Fitzpatrick (1932), and Merwin (1942). One of the most famous models is Altman's Z-score model (Altman, 1968), based on multiple discriminant analysis and still widely used in both research and practice; it utilizes five key financial ratios to predict the probability of bankruptcy in the next two years. Ohlson (1980) introduced the logit model to complement some limitations of previous models. Furthermore, Elmer and Borowski (1988) developed and examined a rule-based, automated system that conducts financial analysis of savings and loans. 
Since the 1980s, researchers in Korea have examined the prediction of financial distress or bankruptcy. Kim (1987) analyzed financial ratios and developed a prediction model, and Han et al. (1995, 1996, 1997, 2003, 2005, 2006) constructed prediction models using various techniques, including artificial neural networks. Yang (1996) introduced multiple discriminant analysis and the logit model, and Kim and Kim (2001) utilized artificial neural network techniques for the ex-ante prediction of insolvent enterprises. Since then, many scholars have tried to predict financial distress or bankruptcy more precisely with diverse models such as Random Forest or SVM. One major distinction of our research from previous work is that we focus on examining the predicted probability of default for each sample case, not only on the classification accuracy of each model over the entire sample. Most predictive models in this paper show a classification accuracy of about 70% on the entire sample: the LightGBM model shows the highest accuracy of 71.1%, and the logit model the lowest accuracy of 69%. However, these results are open to multiple interpretations. In the business context, more emphasis must be placed on minimizing type 2 errors, which cause more harmful operating losses for the guaranty company. Thus, we also compare classification accuracy by splitting the predicted probability of default into ten equal intervals. When we examine the classification accuracy for each interval, the logit model has the highest accuracy of 100% for the 0-10% interval of predicted default probability, but a relatively low accuracy of 61.5% for the 90-100% interval. 
On the other hand, Random Forest, XGBoost, LightGBM, and DNN show more desirable results: they reach a higher level of accuracy for both the 0~10% and 90~100% intervals of predicted default probability, though with lower accuracy around the 50% interval. Regarding the distribution of samples across predicted default probabilities, both the LightGBM and XGBoost models place a relatively large number of samples in the 0~10% and 90~100% intervals. Although the Random Forest model has an advantage in classification accuracy for a small number of cases, LightGBM or XGBoost could be more desirable models since they classify a large number of cases into the two extreme intervals of predicted default probability, even allowing for their relatively low classification accuracy. Considering the importance of Type II errors and total prediction accuracy, XGBoost and DNN show superior performance, followed by Random Forest and LightGBM, while logistic regression performs worst. Still, each predictive model has a comparative advantage under different evaluation standards; for instance, the Random Forest model shows almost 100% accuracy for samples expected to have a high probability of default. Collectively, we can construct a more comprehensive ensemble that combines multiple classification models and conducts majority voting to maximize overall performance.
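The decile-based accuracy comparison and the majority-voting ensemble described above can be sketched as follows. This is a minimal illustration with hypothetical predicted probabilities and model outputs, not the KSURE data or the paper's fitted models:

```python
from collections import Counter

def decile_accuracy(probs, labels):
    """Bucket predicted default probabilities into ten equal intervals
    (0~10%, 10~20%, ..., 90~100%) and report the classification
    accuracy within each non-empty interval."""
    bins = {i: [] for i in range(10)}
    for p, y in zip(probs, labels):
        idx = min(int(p * 10), 9)        # e.g. p=0.95 -> interval 9 (90~100%)
        pred = 1 if p >= 0.5 else 0      # conventional 0.5 threshold
        bins[idx].append(pred == y)
    return {i: sum(hits) / len(hits) for i, hits in bins.items() if hits}

def majority_vote(*model_preds):
    """Combine 0/1 predictions from several models by majority voting."""
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*model_preds)]

# Hypothetical 0/1 predictions from three models for five cases
logit = [0, 1, 1, 0, 1]
rf    = [0, 1, 0, 0, 1]
xgb   = [1, 1, 1, 0, 0]
ensemble = majority_vote(logit, rf, xgb)  # -> [0, 1, 1, 0, 1]
```

Examining `decile_accuracy` per interval, rather than a single overall accuracy, is what lets one see the pattern reported above: high accuracy at the 0~10% and 90~100% extremes and weaker accuracy near 50%.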

    The Effect of Price Promotional Information about Brand on Consumer's Quality Perception: Conditioning on Pretrial Brand (품패개격촉소신식대소비자질량인지적영향(品牌价格促销信息对消费者质量认知的影响))

    • Lee, Min-Hoon;Lim, Hang-Seop
      • Journal of Global Scholars of Marketing Science
      • /
      • v.19 no.3
      • /
      • pp.17-27
      • /
      • 2009
    • Price promotion typically reduces the price for a given quantity or increases the quantity available at the same price, thereby enhancing value and creating an economic incentive to purchase. It is often used to encourage product or service trial among nonusers. Thus, it is important to understand the effects of price promotions on the quality perceptions of consumers who have no prior experience with the promoted brand. If consumers associate a price promotion itself with inferior brand quality, the promotion may not achieve the sales increase the economic incentives might otherwise have produced. More specifically, an unfavorable quality perception triggered by the price promotion will undercut the economic and psychological incentives and reduce the likelihood of purchase. It is therefore important for marketers to understand how price promotional information about a brand affects consumers' unfavorable quality perceptions of the brand. Previous literature on the effects of price promotions on quality perception offers inconsistent explanations. Some studies focused on the unfavorable effect of price promotion on consumer perception, while others showed that price promotions did not create unfavorable perceptions of the brand. Prior research related these inconsistent results to the timing of the price promotion's exposure and of the quality evaluation relative to trial; whether the consumer has experienced the product's promotions in the past may also moderate the effects. A few studies considered differences among product categories as fundamental factors. The purpose of this research is to investigate the effect of price promotional information on consumers' unfavorable quality perceptions under different conditions. The author controlled the timing of the promotional exposure and varied past promotional patterns and information presenting patterns. 
Unlike previous research, the author examined the effects of price promotions restricted to the pretrial situation, controlling for the potentially moderating effect of prior personal experience with the brand. This manipulation helps resolve possible controversies on this issue and is also meaningful for practitioners. Price promotion is used not only to target existing consumers but also to encourage product or service trial among nonusers. If consumers associate a price promotion itself with inferior quality of an unused brand, the promotion may not achieve the sales increase the economic incentives might otherwise have produced. In addition, once the price promotion ends, consumers who purchased that brand are likely to display sharply decreased repurchasing behavior. Through a literature review, Hypothesis 1 was set as follows to investigate the moderating effect of past price promotion on consumers' quality perception: the influence of a price promotion of an unused brand on consumers' quality perception will be moderated by the brand's past price promotion activity. In other words, a price promotion by an unused brand that has not conducted price promotions in the past will have an unfavorable effect on consumers' quality perception. Hypothesis 2-1 was set as follows: when an unused brand undertakes a price promotion for the first time, the information presenting pattern of the price promotion will affect consumers' attributions for the cause of the price promotion. Hypothesis 2-2 was set as follows: the more consumers dispositionally attribute the cause of the price promotion, the more unfavorable their quality perception will be. 
In Test 1, the subjects were given a brief explanation of the product and the brand before being assigned to a $2{\times}2$ factorial design with four patterns of price promotion (presence or absence of past price promotion * presence or absence of current price promotion), together with a description of the price promotion pattern of each cell. The perceived quality of the imaginary brand WAVEX was then evaluated on a 7-point scale. A tennis racket was chosen because the product category needed to have had almost no past price promotions, in order to eliminate the influence of the average frequency of promotion on the value of price promotional information, as Raghubir and Corfman (1999) pointed out. Test 2 was carried out on students of the same management faculty as Test 1, again with the tennis racket as the product category. As in Test 1, subjects with average familiarity with the product category and low familiarity with the brand were selected. Each subject was assigned to one of two cells representing different information presenting patterns for the price promotion of WAVEX (the reason behind the price promotion provided vs. not provided). Subjects examined the promotional information before evaluating the perceived quality of the brand WAVEX on a 7-point scale. The effect of a price promotion for an unfamiliar pretrial brand on consumers' perceived quality proved to be moderated by the presence or absence of past price promotion. Consistency with past promotional behavior is an important variable that worsens the unfavorable effect on brand evaluations: if a price promotion for the brand has never been carried out before, the promotion may have more unfavorable effects on consumers' quality perception. 
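The $2{\times}2$ design above turns on an interaction: whether the drop in perceived quality caused by a current promotion is larger for brands with no promotion history. A minimal sketch of that difference-in-differences check, using hypothetical 7-point ratings rather than the study's data:

```python
def cell_mean(scores):
    """Average rating within one cell of the 2x2 design."""
    return sum(scores) / len(scores)

def interaction_contrast(no_past_no_cur, no_past_cur, past_no_cur, past_cur):
    """Difference-in-differences: the current-promotion effect for brands
    with no promotion history minus the effect for brands with a history.
    A negative value means the promotion hurts perceived quality more
    when the brand has never promoted before."""
    effect_without_history = cell_mean(no_past_cur) - cell_mean(no_past_no_cur)
    effect_with_history = cell_mean(past_cur) - cell_mean(past_no_cur)
    return effect_without_history - effect_with_history

# Hypothetical perceived-quality ratings (1 = low, 7 = high) per cell
contrast = interaction_contrast(
    no_past_no_cur=[6, 5, 6], no_past_cur=[4, 3, 4],  # large drop without history
    past_no_cur=[5, 5, 6],    past_cur=[5, 4, 5],     # smaller drop with history
)
```

In an actual analysis this contrast would be tested with a two-way ANOVA; the sketch only shows which cell comparison carries Hypothesis 1.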
Second, when the price promotion of an unfamiliar pretrial brand was executed for the first time, the information presenting method affected consumers' attributions for the cause of the firm's promotion, and the unfavorable effect on quality perception was stronger when consumers made a dispositional attribution rather than a situational one. Unlike previous studies, whose main focus was the presence or absence of favorable or unfavorable motivation arising from situational versus dispositional attribution, this study focused on the fact that a situational attribution can be induced, even where the consumer would otherwise attribute the price promotional behavior dispositionally, if the company provides a persuasive reason. Such an approach carries large academic significance in that it explains the anchoring and adjustment procedure by applying it to a non-mathematical problem, unlike previous studies that conventionally applied it to mathematical problems. In other words, there is a strong tendency to attribute others' behaviors dispositionally, in line with the fundamental attribution error, and applied to price promotions this implies that consumers are likely to attribute the company's price promotion behavior dispositionally. However, even under these circumstances, the company can adjust the consumer's anchoring to reduce the possibility of dispositional attribution. Furthermore, unlike the majority of previous research on the short- and long-term effects of price promotion, which considered only the effect of price promotions on consumers' purchasing behavior, this research measured the effect on perceived quality, one of many elements that affect consumers' purchasing behavior. These results carry useful implications for practitioners: a guideline for effectively providing promotional information for a new brand can be drawn from the outcomes of this research. 
If a brand is to avoid false implications such as inferior quality while implementing a price promotion strategy, it must provide clear and acceptable reasons for the promotion. This is especially important for a company with no history of price promotion. Inconsistent behavior can cause consumer distrust and anxiety, and is also one of the most important risk factors for endless price wars. Price promotions without prior notice can buy doubt from consumers, not market share.


    Diagnostic Approach to the Solitary Pulmonary Nodule : Reappraisal of the Traditional Clinical Parameters for Differentiating Malignant Nodule from Benign Nodule (고립성 폐결절에 대한 진단적 접근 : 악성결절과 양성결절의 감별 지표에 대한 재검토)

    • Kho, Won Jung;Kim, Cheol Hyeon;Jang, Seung Hun;Lee, Jae Ho;Yoo, Chul Gyu;Chung, Hee Soon;Kim, Young Whan;Han, Sung Koo;Shim, Young-Soo
      • Tuberculosis and Respiratory Diseases
      • /
      • v.43 no.4
      • /
      • pp.500-518
      • /
      • 1996
    • Background : The solitary pulmonary nodule(SPN) presents a diagnostic dilemma to the physician and the patient. Many clinical characteristics(i.e. age, smoking history, prior history of malignancy) and radiological characteristics(i.e. size, calcification, growth rate, several findings on computed tomography) have been proposed to help determine whether an SPN is benign or malignant. However, most of these diagnostic guidelines are based on data collected before computed tomography(CT) was introduced and before lung cancer was as common as it is today. Moreover, it is not well established whether these guidelines from western populations are applicable to Korean patients. Methods : We performed a retrospective analysis of the case records and radiographic findings of 114 patients presenting with SPN from Jan. 1994 to Feb. 1995 at Seoul National University Hospital, a tertiary referral hospital. Results : We observed the following ; (1) Out of 113 SPNs, the etiology was documented in 94 SPNs: 34 benign and 60 malignant SPNs, of which 49 were primary lung cancers, with adenocarcinoma the most common histologic type. (2) The average age of patients with benign and malignant SPNs was $49.7{\pm}12.0$ and $58.1{\pm}10.0$ years, respectively(p=0.0004), and malignant SPNs had a striking linear propensity to increase with age. (3) No significant difference in smoking history was noted between patients with benign SPNs($13.0{\pm}17.6$ pack-years) and those with malignant SPNs($18.6{\pm}25.1$ pack-years)(p=0.2108). (4) 9 out of 10 patients with a prior history of malignancy had malignant SPNs; 5 were new primary lung cancers unrelated to the prior malignancy. (5) The average size of benign SPNs($3.01{\pm}1.20cm$) and malignant SPNs($2.98{\pm}0.97cm$) was not significantly different(p=0.8937). (6) The volume doubling time could be calculated in 22 SPNs. 
9 SPNs had a volume doubling time longer than 400 days; of these, 6 were malignant SPNs. (7) The CT findings suggesting malignancy included a lobulated or spiculated border, air-bronchogram, pleural tail, and lymphadenopathy. In contrast, calcification, central low attenuation, a cavity with even wall thickness, a well-marginated border, and perinodular micronodules were more suggestive of a benign nodule. (8) The diagnostic yield of percutaneous needle aspiration and biopsy was 57.6%(19/33) for benign SPNs and 81.0%(47/58) for malignant SPNs. The diagnostic value of sputum analysis and bronchoscopic evaluation was relatively very low. (9) 42.3%(11/26) of SPNs of preoperatively undetermined etiology turned out to be malignant after surgical resection. Overall, 75.4%(46/61) of surgically resected SPNs were malignant. Conclusions : We conclude that the likelihood of a malignant SPN correlates with the age of the patient, a prior history of malignancy, and certain CT findings including a lobulated or spiculated border, air-bronchogram, pleural tail, and lymphadenopathy. However, the history of smoking, the size of the nodule, and the volume doubling time, which have previously been regarded as valuable clinical parameters, are not helpful in determining whether an SPN is benign or malignant. We suggest that an aggressive diagnostic approach, including surgical resection, is necessary in patients with SPNs.
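The abstract does not spell out how volume doubling time is computed; the standard approach (the Schwartz formula, assumed here rather than taken from the paper) derives it from two diameter measurements, treating the nodule as a sphere so that volume scales with the cube of diameter:

```python
import math

def volume_doubling_time(d1_cm, d2_cm, interval_days):
    """Schwartz formula: doubling time Td = t * ln(2) / ln(V2/V1),
    with V proportional to d**3 for a spherical nodule, so
    Td = t * ln(2) / (3 * ln(d2/d1))."""
    if d2_cm <= d1_cm:
        raise ValueError("nodule must have grown between measurements")
    return interval_days * math.log(2) / (3 * math.log(d2_cm / d1_cm))

# Hypothetical example: a nodule growing from 2.0 cm to 2.5 cm over 180 days
td = volume_doubling_time(2.0, 2.5, 180)   # roughly 186 days
```

A doubling time over 400 days has traditionally been read as suggesting benignity, which is exactly the rule of thumb the results above call into question: 6 of the 9 SPNs with doubling times above 400 days were malignant.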
