• Title/Summary/Keyword: Qualitative Models

A Study on Interactions of Competitive Promotions Between the New and Used Cars (신차와 중고차간 프로모션의 상호작용에 대한 연구)

  • Chang, Kwangpil
    • Asia Marketing Journal / v.14 no.1 / pp.83-98 / 2012
  • In a market where new and used cars compete with each other, we would run the risk of obtaining biased estimates of the cross elasticity between them if we focused on only new cars or only used cars. Unfortunately, most previous studies on the automobile industry have focused only on new car models, without taking into account the effect of used cars' pricing policy on new cars' market shares and vice versa, resulting in inadequate prediction of reactive pricing in response to competitors' rebates or price discounts. There are some exceptions, however. Purohit (1992) and Sullivan (1990) looked into both new and used car markets at the same time to examine the effect of new car model launches on used car prices. But their studies have some limitations in that they employed the average used car prices reported in the NADA Used Car Guide instead of actual transaction prices; some of their conflicting results may be due to this problem in the data. Park (1998) recognized this problem and used actual prices in his study. His work is notable in that he investigated the qualitative effect of new car model launches on the pricing policy of used cars in terms of reinforcement of brand equity. The current work also uses actual prices, like Park (1998), but explores the quantitative aspect of competitive price promotion between new and used cars of the same model. In this study, I develop a model that assumes that the cross elasticity between new and used cars of the same model is higher than that between new and used cars of different models. Specifically, I apply a nested logit model that assumes car model choice at the first stage and the choice between new and used cars at the second stage. This proposed model is compared to the IIA (Independence of Irrelevant Alternatives) model, which assumes that there is no decision hierarchy and that new and used cars of different models are all substitutable at the first stage. The data for this study are drawn from the Power Information Network (PIN), an affiliate of J.D. Power and Associates. PIN collects sales transaction data from a sample of dealerships in the major metropolitan areas in the U.S. These are retail transactions, i.e., sales or leases to final consumers, excluding fleet sales and including both new car and used car sales. Each observation in the PIN database contains the transaction date, the manufacturer, model year, make, model, trim and other car information, the transaction price, consumer rebates, the interest rate, term, amount financed (when the vehicle is financed or leased), etc. I used data for compact cars sold during the period January 2009 to June 2009. The new and used cars of the top nine selling models are included in the study: Mazda 3, Honda Civic, Chevrolet Cobalt, Toyota Corolla, Hyundai Elantra, Ford Focus, Volkswagen Jetta, Nissan Sentra, and Kia Spectra. These models accounted for 87% of category unit sales. Empirical application showed that the proposed nested logit model outperformed the IIA (Independence of Irrelevant Alternatives) model in both calibration and holdout samples. The other comparison model, which assumes the choice between new and used cars at the first stage and car model choice at the second stage, turned out to be mis-specified, since the dissimilarity parameter (i.e., the inclusive or category value parameter) was estimated to be greater than 1.
Post hoc analysis based on the estimated parameters was conducted employing a modified Lanczos iterative method. This method is intuitively appealing. For example, suppose a new car offers a certain amount of rebate and gains market share at first. In response to this rebate, the used car of the same model keeps decreasing its price until it regains the lost market share and restores the status quo. The new car then settles down to a lowered market share due to the used car's reaction. The method enables us to find the amount of price discount needed to maintain the status quo and the equilibrium market shares of the new and used cars. In the first simulation, I used the Jetta as a focal brand to see how its new and used cars set prices, rebates, or APR interactively, assuming that reacting cars respond to price promotion so as to maintain the status quo. The simulation results showed that the IIA model underestimates cross elasticities and consequently suggests a less aggressive used car price discount in response to a new car's rebate than the proposed nested logit model does. In the second simulation, I used the Elantra to reconfirm the result for the Jetta and came to the same conclusion. In the third simulation, I had the Corolla offer a $1,000 rebate to see what the best response would be for the Elantra's new and used cars. Interestingly, the Elantra's used car could maintain the status quo by offering a lower price discount ($160) than the new car ($205). Future research might explore the plausibility of the alternative nested logit model. For example, the NUB model, which assumes the choice between new and used cars at the first stage and brand choice at the second stage, remains a possibility even though it was rejected in the current study because of mis-specification (a dissimilarity parameter turned out to be higher than 1). The NUB model may have been rejected due to true mis-specification or due to the data structure generated by a typical car dealership. In a typical car dealership, both new and used cars of the same model are displayed. Because of this, the BNU model, which assumes brand choice at the first stage and the choice between new and used cars at the second stage, may have been favored in the current study, since customers first choose a dealership (brand) and then choose between new and used cars in this market environment. However, if there were dealerships that carried both new and used cars of various models, the NUB model might fit the data as well as the BNU model. Which model better describes the data is an empirical question. In addition, it would be interesting to test a probabilistic mixture of the BNU and NUB models on a new data set.
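
As a rough illustration of the decision structure described in this abstract, the sketch below computes choice probabilities for a two-stage nested logit with car models as nests and the new/used choice inside each nest. The utilities and dissimilarity parameter are hypothetical placeholders, not the paper's estimates; the model is consistent with utility maximization only when the dissimilarity parameter lies in (0, 1], which is why an estimate above 1 signals mis-specification.

```python
# Minimal nested logit sketch (hypothetical utilities, not the paper's estimates).
import numpy as np

def nested_logit_probs(utilities, lam):
    """Two-stage nested logit: nest = car model, alternatives = (new, used).
    lam is the dissimilarity (inclusive value) parameter; 0 < lam <= 1."""
    # Inclusive value of each nest: log-sum of scaled utilities.
    iv = {m: np.log(np.exp(np.asarray(v) / lam).sum()) for m, v in utilities.items()}
    denom = sum(np.exp(lam * x) for x in iv.values())
    probs = {}
    for m, v in utilities.items():
        p_nest = np.exp(lam * iv[m]) / denom        # first stage: P(model m)
        within = np.exp(np.asarray(v) / lam)        # second stage: P(new/used | m)
        probs[m] = p_nest * within / within.sum()
    return probs

# Hypothetical (new, used) utilities for two compact-car models.
utils = {"Jetta": [1.2, 0.9], "Elantra": [1.0, 1.1]}
print(nested_logit_probs(utils, lam=0.6))  # small lam => strong within-model substitution
```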

A Study on Public Interest-based Technology Valuation Models in Water Resources Field (수자원 분야 공익형 기술가치평가 시스템에 대한 연구)

  • Ryu, Seung-Mi; Sung, Tae-Eung
    • Journal of Intelligence and Information Systems / v.24 no.3 / pp.177-198 / 2018
  • Recently, as the character of water resources has shifted toward "public property", it has become necessary to establish and utilize a framework for measuring water resources as economic property and managing performance. To date, the evaluation of water technology has been carried out through feasibility studies or technology assessments based on net present value (NPV) or benefit-to-cost (B/C) ratios; however, no valuation model has yet been systematized for objectively assessing the economic value of technology-based businesses so that research outcomes can be diffused and fed back. Therefore, K-water (a government-supported public company in Korea) saw the need to establish a technology valuation framework suited to the technical characteristics of the water resources fields in its charge and to verify it with an applied case. The K-water valuation technology applied in this study, as a public interest good, can be used as a tool to measure and manage the value and achievements contributed to society. By calculating the value that the subject technology contributes to society as a whole as a public resource, we can use it as baseline information for publicizing the effects of the benefits and the necessity of cost input, and thereby secure the legitimacy of large-scale R&D investment given the characteristics of public technology. Hence, K-water, a public corporation in Korea that deals with the public good of water resources, will be able to establish a commercialization strategy for business operation and prepare a basis for calculating the performance of input R&D cost. In this study, K-water developed a web-based technology valuation model for public-interest water resources based on a technology evaluation system suited to the characteristics of technologies in water resources fields. In particular, by utilizing the evaluation methodology of the Institute of Advanced Industrial Science and Technology (AIST) in Japan to match expense items to expense accounts based on the related benefit items, we proposed the so-called "K-water proprietary model", which combines the cost-benefit approach with FCF (Free Cash Flow); we then built a pipeline into the K-water research performance management system and verified a practical case with a technology related to desalination. We analyze the embedded design logic and evaluation process of the web-based valuation system reflecting the characteristics of water resources technology, along with the reference information and database (D/B) logic for each model used to calculate public-interest-based and profit-based technology values in the integrated technology management system. We review the hybrid evaluation module, which quantifies the qualitative evaluation indices reflecting the unique characteristics of water resources, and the visualized user interface (UI) of the actual web-based evaluation, both of which are appended to the existing web-based technology valuation systems in other fields for calculating business value based on financial data. K-water's technology valuation model distinguishes between public-interest and profit-oriented water technology. First, the evaluation modules in the profit-oriented valuation model are designed around the profitability of the technology.
For example, the technology inventory K-water holds includes a number of profit-oriented technologies, such as water treatment membranes. On the other hand, the public-interest valuation model is designed to evaluate public-interest-oriented technologies such as dams, reflecting the characteristics of public benefits and costs. In order to examine the appropriateness of the cost-benefit-based public utility valuation model (i.e., the K-water-specific technology valuation model) presented in this study, we applied it to a practical case, calculating a benefit-to-cost analysis for a water resource technology with a 20-year lifetime. In future work, we will further verify the K-water public-utility valuation model against individual business models reflecting various business environments.
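
To make the cost-benefit and FCF logic concrete, here is a minimal sketch of the discounted-flow calculation such a valuation rests on: the public value of a technology is the discounted stream of annual benefits net of costs over its lifetime (20 years in the applied case above). All cash-flow figures and the discount rate are hypothetical placeholders, not K-water's actual inputs.

```python
# Minimal discounted benefit-cost sketch (hypothetical figures).
def present_value(flows, rate):
    """Sum of flow_t / (1 + rate)^t, with t starting at 1."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows, start=1))

years = 20
benefits = [120.0] * years   # hypothetical annual public benefit
costs = [80.0] * years       # hypothetical annual cost (R&D, operation)
rate = 0.045                 # hypothetical social discount rate

npv = present_value([b - c for b, c in zip(benefits, costs)], rate)
bc_ratio = present_value(benefits, rate) / present_value(costs, rate)
print(f"NPV = {npv:.1f}, B/C = {bc_ratio:.2f}")  # B/C > 1 suggests net public benefit
```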

A Methodology of Customer Churn Prediction based on Two-Dimensional Loyalty Segmentation (이차원 고객충성도 세그먼트 기반의 고객이탈예측 방법론)

  • Kim, Hyung Su; Hong, Seung Woo
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.111-126 / 2020
  • Most industries have recently become aware of the importance of customer lifetime value as they are exposed to competitive environments. As a result, preventing customer churn is becoming a more important business issue than acquiring new customers, because retaining existing customers is far more economical: the acquisition cost of a new customer is known to be five to six times the retention cost of an existing one. Companies that effectively prevent customer churn and improve retention rates are known to benefit not only from increased profitability but also from an improved brand image through higher customer satisfaction. Customer churn prediction, long conducted as a sub-area of CRM research, has recently become more important as a big-data-based performance marketing theme thanks to the development of business machine learning technology. Until now, research on customer churn prediction has been carried out actively in sectors such as mobile telecommunications, finance, distribution, and gaming, which are highly competitive and where churn management is urgent. These studies focused on improving the performance of the churn prediction model itself, such as simply comparing the performance of various models, exploring features that are effective in forecasting churn, or developing new ensemble techniques, and they were limited in terms of practical utilization because most treated the entire customer base as one group when developing a predictive model. As such, the main purpose of the existing related research was to improve the performance of the predictive model itself, and there was relatively little research on improving the overall customer churn prediction process. In fact, customers in a business exhibit different behavioral characteristics due to heterogeneous transaction patterns, and the resulting churn rates differ, so it is unreasonable to treat the entire customer base as a single group. Therefore, to carry out effective customer churn prediction in heterogeneous industries, it is desirable to segment customers according to classification criteria such as loyalty and to operate an appropriate churn prediction model for each segment. Of course, there are some studies in which customers are subdivided using clustering techniques and a churn prediction model is applied to each individual group. Although this process can produce better predictions than a single model for the entire customer population, there is still room for improvement, in that clustering is a mechanical, exploratory grouping technique that calculates distances from inputs and does not reflect the strategic intent of an entity, such as loyalty. This study proposes a segment-based customer churn prediction process (CCP/2DL: Customer Churn Prediction based on Two-Dimensional Loyalty segmentation) based on two-dimensional customer loyalty, on the assumption that successful customer churn management is better achieved through improvements to the overall process than through the performance of the model itself.
CCP/2DL is a churn prediction process that segments customers along two loyalty dimensions, quantitative and qualitative, conducts secondary grouping of the customer segments according to churn patterns, and then independently applies heterogeneous churn prediction models to each churn pattern group. Performance comparisons were carried out against the most commonly applied General churn prediction process and a Clustering-based churn prediction process to assess the relative merit of the proposed process. The General churn prediction process used in this study refers to treating all customers as a single group and applying the most commonly used machine learning churn prediction method, while the Clustering-based churn prediction process first uses clustering techniques to segment customers and then implements a churn prediction model for each group. In an empirical study conducted in cooperation with a global NGO, the proposed CCP/2DL outperformed the other methodologies in predicting churn. This churn prediction process is not only effective in predicting churn but can also serve as a strategic basis for obtaining a variety of customer insights and carrying out other related performance marketing activities.
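
The core idea of CCP/2DL, segmenting first along two loyalty dimensions and then fitting a separate churn model per group, can be sketched roughly as below. The loyalty scores, median-split thresholds, features, and classifier choice are all hypothetical illustrations; the paper additionally regroups the loyalty segments by churn pattern before modeling.

```python
# Rough sketch of two-dimensional loyalty segmentation followed by per-segment
# churn models (synthetic data; thresholds and classifier are illustrative).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 5))               # behavioral features (synthetic)
quant_loyalty = rng.uniform(size=n)       # e.g., transaction-based loyalty score
qual_loyalty = rng.uniform(size=n)        # e.g., attitudinal loyalty score
churn = rng.integers(0, 2, size=n)        # churn labels (synthetic)

# 2x2 segmentation by median splits on the two loyalty dimensions.
segment = (quant_loyalty > np.median(quant_loyalty)).astype(int) * 2 \
        + (qual_loyalty > np.median(qual_loyalty)).astype(int)

models = {}
for s in np.unique(segment):
    mask = segment == s
    models[s] = RandomForestClassifier(random_state=0).fit(X[mask], churn[mask])
    print(f"segment {s}: n={mask.sum()}, churn rate={churn[mask].mean():.2f}")
```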

A Study on the Resilience Process of Persons with Disabilities (중도장애인의 레질리언스(Resilience) 과정에 관한 연구)

  • Kim, Mi-Ok
    • Korean Journal of Social Welfare / v.60 no.2 / pp.99-129 / 2008
  • This study analyzed the resilience process of persons with disabilities using the grounded theory approach. The researcher conducted in-depth interviews with 8 persons with disabilities. In the data analysis, this study identified 393 concepts concerning the resilience process, and the concepts were categorized into 45 sub-categories and 18 primary categories. In the paradigm model of the resilience process, the causal conditions identified were 'unawareness of disability before becoming disabled', 'extreme pain', and 'repressing psychological pain', and the contingent conditions were 'dis-empowerment by staying at home', 'self-isolation due to difficulty in accepting the disability', and 'experience of frustration from social barriers and prejudice against persons with disabilities'. It was also identified that the resilience process could depend on the type and degree of disability, gender, and the length of time since becoming disabled. In spite of the causal and contingent conditions, the central way in which persons with disabilities could acquire resilience was identified as 'enhancement of the power of positive thinking'. The control conditions that accelerate or retard the central phenomenon were, externally, 'the awareness of not being alone through family, friends, neighborhood and the social system' and, internally, 'finding purpose in life through religion and help from other persons with disabilities'. The action/interaction sequences involved enhanced effort, self-searching, and active engagement, and as a result, persons with disabilities could find comfort in life, participate in society, and change society's perspective on disability. The core categories of the resilience process were a belief in affirmation and a self-initiated choice of life. In the process analysis, the stages developed as follows: 'pain', 'strangeness', 'reflection', 'daily life'. These stages were more continuous and causal than discrete and complete. Within this process, the types of resilience were divided into 'existence reflection', 'course development', 'implicit endeavor', and 'active execution'. This study details the paradigm model, process, and types, offering an in-depth understanding of the resilience process of persons with disabilities through grounded theory, and contributes to theory construction and to policy and clinical involvement in the study of persons with disabilities.

Ecological Health Assessments on Turbidwater in the Downstream After a Construction of Yongdam Dam (용담댐 건설후 하류부 하천 생태계의 탁수영향 평가)

  • Kim, Ja-Hyun; Seo, Jin-Won; Na, Young-Eun; An, Kwang-Guk
    • Korean Journal of Ecology and Environment / v.40 no.1 / pp.130-142 / 2007
  • This study examined the impacts of turbid water on the fish community downstream of Yongdam Dam during the period from June to October 2006. For the research, we selected six sampling sites in the field: two sites served as controls with no influence of turbid water from the dam, and the remaining four sites were stations for assessing potential turbidity effects. We evaluated integrative health conditions through the application of various models, including a necropsy-based fish health assessment model (FHA), the Index of Biological Integrity (IBI) using fish assemblages, and the Qualitative Habitat Evaluation Index (QHEI). Laboratory tests exposing fish to 400 NTU were performed to determine the impact of turbid water, using a scanning electron microscope (SEM). Results showed that fine solid particles clogged the gills in the treatments, while no particles were found in the control. These results indicate that when inorganic turbidity increases abruptly, fish may suffer mechanical abrasion or respiratory blockage. The stream health condition, based on the IBI values, ranged between 38 and 48 (average: 42), indicating an "excellent" or "good" condition according to the criteria of US EPA (1993). Meanwhile, the physical habitat condition, based on the QHEI, ranged from 97 to 187 (average: 154), indicating a "suboptimal" condition. These biological outcomes were compared with the chemical dataset: IBI values were more strongly correlated with the QHEI (r=0.526, p<0.05, n=18) than with chemical water quality as indicated by turbidity (r=0.260, p>0.05, n=18). Analysis of the FHA showed that individual health was in "excellent condition", while the QHEI showed no disturbances of habitat (especially bottom substrate and embeddedness), food web, or spawning grounds. Consequently, we concluded that the ecological health downstream of Yongdam Dam was not impacted by the turbid water.
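
The correlation comparison reported above (IBI vs. QHEI against IBI vs. turbidity) amounts to two Pearson correlations over site-level scores; a minimal sketch follows. The values below are illustrative placeholders, not the study's measurements.

```python
# Illustrative Pearson correlations between index scores (placeholder values).
from scipy.stats import pearsonr

ibi = [38, 40, 41, 43, 46, 48]          # hypothetical IBI scores
qhei = [100, 125, 140, 160, 175, 185]   # hypothetical QHEI scores
turbidity = [35, 30, 26, 22, 18, 15]    # hypothetical turbidity (NTU)

r_qhei, p_qhei = pearsonr(ibi, qhei)
r_turb, p_turb = pearsonr(ibi, turbidity)
print(f"IBI vs QHEI:      r = {r_qhei:.3f} (p = {p_qhei:.3f})")
print(f"IBI vs turbidity: r = {r_turb:.3f} (p = {p_turb:.3f})")
```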

A Study on Measures to Create Local Webtoon Ecosystem (지역웹툰 생태계 조성을 위한 방안 연구)

  • Choi, Sung-chun; Yoon, Ki-heon
    • Cartoon and Animation Studies / s.51 / pp.181-201 / 2018
  • The cartoon industry in Korea continued to decline through the 2000s, owing to the contraction of the printed comics market and the decreasing number of comic book rental stores, until it began to experience rapid qualitative change and quantitative growth with the emergence of webtoons. The market size of the webtoon industry, valued at 420 billion won in 2015, is expected to grow to 880.5 billion won by 2018. Notably, most cartoonists now use digital devices and produce their scripts as data, thereby overcoming the geographical, spatial, and physical limitations of content; as a result, a favorable environment for the creation of local ecosystems has emerged. While regional infrastructures of human resources are steadily growing, cartoon industries supported by government policy have performed well when combined with local creative infrastructure such as webtoon experience centers, webtoon campuses, and webtoon creation centers. Nevertheless, cartoon infrastructure remains substantially concentrated in the capital area, which produces an imbalanced industry structure: offline cartoon businesses in Seoul and Gyeonggi Province make up 87% of the total (excluding distribution), while online cartoon businesses located outside Seoul and Gyeonggi Province account for merely 7.5%. Studies and research on local webtoons are also inadequate; the existing studies usually focus on industrial and economic values, mentioning the word "local" only occasionally. Therefore, this study examined the present status of local webtoons, covering the current state of the local cartoon ecosystem, mid- and long-term government support, and future alternatives. The main challenges include expanding opportunities to enjoy cartoon culture, making cartoon infrastructure independent, and establishing regionally specialized cartoon cultures. In other words, for the cartoon ecosystem to settle in local areas, it is vital to utilize and link basic infrastructures, and it is necessary to pursue independence and autonomy beyond limited government support. Finally, webtoons should be recognized as a culture, which can set a new direction for the development of local webtoons. Desirable models suitable for each region, connecting webtoons with regional tourism, culture, and art industries, should be continuously researched; this will allow the webtoon industry to make a soft landing in the regions. Local webtoons, a growth engine for regions and core content of the fourth industrial revolution, are expected to provide momentum for the decentralization of power and the reindustrialization of regions.

A Study on Relationship between Physical Elements and Tennis/Golf Elbow

  • Choi, Jungmin; Park, Jungwoo; Kim, Hyunseung
    • Journal of the Ergonomics Society of Korea / v.36 no.3 / pp.183-196 / 2017
  • Objective: The purpose of this research was to assess the agreement between job physical risk factor analysis performed by ergonomists using ergonomic methods and physical examinations made by occupational physicians regarding the presence of musculoskeletal disorders of the upper extremities. Background: Ergonomics is the systematic application of principles concerned with the design of devices and working conditions for enhancing human capabilities and optimizing working and living conditions. Proper ergonomic design is necessary to prevent injuries and physical and emotional stress. The major types of ergonomic injuries and incidents are cumulative trauma disorders (CTDs), acute strains, sprains, and system failures. Minimizing excessive force and awkward postures can help to prevent such injuries. Method: Initial data were collected as part of a larger study by the University of Utah Ergonomics and Safety program field data collection teams and medical data collection teams from the Rocky Mountain Center for Occupational and Environmental Health (RMCOEH). Subjects included 173 male and female workers: 83 at Beehive Clothing (a clothing plant), 74 at Autoliv (a plant making air bags for vehicles), and 16 at Deseret Meat (a meat-processing plant). Posture and effort levels were analyzed using a software program developed at the University of Utah (Utah Ergonomic Analysis Tool). The Ergonomic Epicondylitis Model (EEM) was developed to assess the risk of epicondylitis from observable job physical factors. The model considers five job risk factors: (1) intensity of exertion, (2) forearm rotation, (3) wrist posture, (4) elbow compression, and (5) speed of work. Qualitative ratings of these physical factors were determined during video analysis. Personal variables were also investigated to study their relationship with epicondylitis. Logistic regression models were used to determine the association between risk factors and symptoms of epicondyle pain. Results: The results of this study indicate that gender, smoking status, and BMI do have an effect on the risk of epicondylitis, but there is no statistically significant relationship between the EEM and epicondylitis. Conclusion: This research studied the relationship between the Ergonomic Epicondylitis Model (EEM) and the occurrence of epicondylitis. The model was not predictive of epicondylitis. However, it is clear that epicondylitis was associated with individual risk factors such as smoking status, gender, and BMI. Based on these results, future research may identify further factors that increase the risk of epicondylitis. Application: Although this research used a combination of questionnaires, ergonomic job analysis, and medical job analysis to verify risk factors related to epicondylitis, there are limitations. The sample size was not very large, as only 173 subjects were available, and the study was conducted in only three facilities in Utah: a plant making air bags for vehicles, a meat-processing plant, and a clothing plant. If working conditions in other kinds of facilities were considered, the results might improve; future research should therefore include additional subjects in different kinds of facilities. Repetition and duration of a task were not considered as risk factors in this research. These two factors could be associated with epicondylitis, so it would be important to include them in future research.
Psychosocial data and workplace conditions (e.g., low temperature) were also noted during data collection and could be used to further study the prevalence of epicondylitis. Univariate analysis methods could also be applied to each variable of the EEM. This research was performed using multivariate analysis, which made it difficult to recognize the distinct effect of each variable. Basically, the difference between univariate and multivariate analysis is that univariate analysis deals with one predictor variable at a time, whereas multivariate analysis deals with multiple predictor variables combined in a predetermined manner. Univariate analysis could show how each variable is associated with epicondyle pain, which may allow more appropriate weighting factors to be determined and thereby improve the performance of the EEM.
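
As a rough sketch of the logistic regression step described in the Method section, the snippet below regresses an epicondylitis indicator on an EEM score and the personal factors found relevant (gender, smoking status, BMI). The data, feature layout, and the regularized scikit-learn estimator are assumptions for illustration, not the study's actual analysis.

```python
# Illustrative logistic regression of epicondylitis on EEM and personal factors.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: EEM score, male (0/1), smoker (0/1), BMI  (synthetic values).
X = np.array([
    [2.1, 1, 0, 24.5], [3.4, 0, 1, 29.1], [1.8, 1, 0, 22.8], [2.4, 0, 0, 31.0],
    [2.5, 0, 1, 25.2], [3.1, 1, 1, 28.4], [2.0, 1, 0, 23.3], [2.2, 0, 0, 24.0],
    [3.6, 0, 1, 30.2], [1.9, 1, 0, 22.5],
])
y = np.array([0, 1, 0, 1, 0, 1, 0, 0, 1, 0])  # epicondylitis symptoms (synthetic)

clf = LogisticRegression().fit(X, y)
print("coefficients:", clf.coef_[0])         # log-odds per unit change in each factor
print("odds ratios:", np.exp(clf.coef_[0]))  # easier to interpret clinically
```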

Biological Stream Health and Physico-chemical Characteristics in the Keum-Ho River Watershed (금호강 수계에서 생물학적 하천 건강도 및 이화학적 특성)

  • Kwon, Young-Soo; An, Kwang-Guk
    • Korean Journal of Ecology and Environment / v.39 no.2 s.116 / pp.145-156 / 2006
  • The objective of this study was to evaluate biological health conditions and physicochemical status using multi-metric models at five sites in the Keum-Ho River between August 2004 and June 2005. The research approach was based on the Qualitative Habitat Evaluation Index (QHEI), the Index of Biological Integrity (IBI) using fish assemblages, and long-term chemical data (1995~2004) obtained from the Ministry of Environment, Korea. For the biological health assessments, a regional model of the IBI for Korea (An, 2003) was applied. Mean IBI in the river was 30 and varied from 23 to 48 depending on the sampling site. The river health was judged to be in "fair condition" according to the stream health criteria of US EPA (1993) and Barbour et al. (1999). According to the analysis of the chemical water quality data, BOD, COD, conductivity, TP, TN, and TSS varied widely depending on the sampling sites, seasons, and years. The variability of some parameters, including BOD, COD, TP, TN, and conductivity, was greater in the downstream reach than in the upstream reach. This phenomenon was evidently linked to dilution by rain during the monsoon, indicating that precipitation is a very important factor in the chemical variation of water quality. Community analyses showed that the species diversity index was highest (H=0.78) at site 1, while the community dominance index was highest at site 3, where Opsariichthys uncirostris largely dominated. In contrast, the proportions of omnivorous and tolerant species were greater in the downstream reach than in the upstream reach. Overall, this study suggests that the aquatic ecosystem at some downstream sites may need restoration for better biological health.
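
For the community metrics mentioned above, a short sketch of how a species diversity index and a dominance index are computed from abundance counts may help. The Shannon form and log base here are assumptions (the abstract does not state the formula used), and the counts are hypothetical.

```python
# Illustrative diversity and dominance indices from species counts.
import numpy as np

def shannon_diversity(counts, base=np.e):
    """H = -sum p_i * log(p_i), computed over nonzero species proportions."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * (np.log(p) / np.log(base))).sum())

def dominance_index(counts):
    """McNaughton dominance: combined share of the two most abundant species."""
    p = np.sort(np.asarray(counts, dtype=float))[::-1]
    return float((p[0] + p[1]) / p.sum())

abundances = [120, 45, 30, 12, 8, 3]  # hypothetical fish counts at one site
print(f"H = {shannon_diversity(abundances, base=10):.2f}")
print(f"dominance = {dominance_index(abundances):.2f}")
```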

A Study on Forecasting Accuracy Improvement of Case Based Reasoning Approach Using Fuzzy Relation (퍼지 관계를 활용한 사례기반추론 예측 정확성 향상에 관한 연구)

  • Lee, In-Ho; Shin, Kyung-Shik
    • Journal of Intelligence and Information Systems / v.16 no.4 / pp.67-84 / 2010
  • In business, forecasting is the work of estimating what is expected to happen in the future in order to make managerial decisions and plans. Accurate forecasting is therefore very important for major managerial decision making and is the basis for various business strategies. But it is very difficult to make unbiased and consistent estimates because of the uncertainty and complexity of the future business environment. That is why we should use scientific forecasting models to support business decision making and make an effort to minimize the model's forecasting error, the difference between the observation and the estimate. Nevertheless, minimizing this error is not an easy task. Case-based reasoning is a problem-solving method that utilizes similar past cases to solve the current problem. To build successful case-based reasoning models, retrieving not only the most similar case but also the most relevant case is very important. To retrieve similar and relevant cases from past cases, the measurement of similarity between cases is a key factor, and it is especially difficult when the cases contain symbolic data. The purpose of this study is to improve the forecasting accuracy of the case-based reasoning approach using fuzzy relation and composition. In particular, two methods are adopted to measure the similarity between cases containing symbolic data: one derives the similarity matrix following binary logic (judging the sameness of two symbolic values), and the other derives the similarity matrix following fuzzy relation and composition. This study is conducted in the following order: data gathering and preprocessing, model building and analysis, validation analysis, and conclusion. First, in the data gathering and preprocessing stage, we collect a data set with categorical dependent variables. The data set is cross-sectional, and its independent variables include several qualitative variables expressed as symbolic data. The research data consist of financial ratios and the corresponding bond ratings of Korean companies. The ratings employed in this study cover all bonds rated by one of the bond rating agencies in Korea. Our total sample includes 1,816 companies whose commercial papers were rated in the period 1997~2000. Credit grades are defined as outputs and classified into 5 rating categories (A1, A2, A3, B, C) according to credit level. Second, in the model building and analysis stage, we derive the similarity matrices following binary logic and fuzzy composition to measure the similarity between cases containing symbolic data; the types of fuzzy composition used are max-min, max-product, and max-average. The analysis is then carried out by the case-based reasoning approach with the derived similarity matrix. Third, in the validation analysis, we verify the model through a McNemar test based on hit ratio. Finally, we draw conclusions. As a result, the similarity measuring method using fuzzy relation and composition shows better forecasting performance than the method using binary logic for measuring similarity between symbolic data, but the differences in forecasting performance among the types of fuzzy composition are not statistically significant. The contributions of this study are as follows.
We propose an alternative methodology in which fuzzy relation and fuzzy composition are applied to the measurement of similarity between two symbolic values, which is the most important factor in building a case-based reasoning model.
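
A brief sketch of one of the fuzzy composition types named above, max-min composition, may clarify how a similarity matrix over symbolic values is derived. The membership values are hypothetical; the study also compares max-product and max-average composition.

```python
# Max-min fuzzy composition: (R o S)[i, j] = max_k min(R[i, k], S[k, j]).
import numpy as np

def max_min_composition(R, S):
    n, m = R.shape[0], S.shape[1]
    T = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            T[i, j] = np.minimum(R[i, :], S[:, j]).max()
    return T

# Hypothetical fuzzy relations: cases x symbolic values (R) and a fuzzy
# similarity relation among the symbolic values themselves (S).
R = np.array([[0.9, 0.3, 0.1],
              [0.2, 0.8, 0.4]])
S = np.array([[1.0, 0.6, 0.2],
              [0.6, 1.0, 0.5],
              [0.2, 0.5, 1.0]])
print(max_min_composition(R, S))
```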

Macroeconomic Consequences of Pay-as-you-go Public Pension System (부과방식 공적연금의 거시경제적 영향)

  • Park, Chang-Gyun; Hur, Seok-Kyun
    • KDI Journal of Economic Policy / v.30 no.2 / pp.225-270 / 2008
  • We analyze the macroeconomic consequences of a pay-as-you-go (PAYGO) public pension system with a simple overlapping generations model. Contrary to the large body of existing literature offering quantitative results based on simulation studies, we take another route, adopting a highly simplified framework in search of qualitatively tractable analytical results. The main contribution of our results lies in providing a sound theoretical foundation that can be utilized in interpreting the various quantitative results offered by simulation studies of large-scale general equilibrium models. We present a simple overlapping generations model with a defined benefit (DB) PAYGO public pension system as a benchmark case and derive an analytical equilibrium solution using graphical illustration. We also discuss the modifications of the benchmark model required to encompass a defined contribution (DC) public pension system within the basic framework. Comparative statics analysis provides three important implications. First, the introduction and expansion of a PAYGO public pension, whether DB or DC, result in a lower level of capital accumulation and a higher expected rate of return on the risky asset. Second, the progress of population aging is accompanied by lower capital stock due to decreases in both the demand for and supply of the risky asset. Moreover, the risk premium on the risky asset increases (decreases) as the speed of population aging accelerates (decelerates), so the possibility of a so-called "great meltdown" of the asset market cannot be excluded, although the odds are not high. Third, a switch from DB PAYGO to DC PAYGO would most likely result in lower capital stock and a higher expected return on the risky asset, mainly because the young generation regards the DC PAYGO pension as another risky asset competing against the risky asset traded in the market. This theoretical prediction coincides with a firmly established proposition in the empirical literature: the currently dominant form of public pension system tends to crowd out private capital accumulation.
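
To illustrate the first comparative-statics result numerically, here is a minimal Diamond-style OLG sketch in which a PAYGO payroll tax lowers steady-state capital per worker. Log utility, Cobb-Douglas production, and the omission of the pension benefit's income effect on saving (which would depress capital further) are simplifying assumptions of this sketch, not the paper's exact model.

```python
# Minimal Diamond OLG sketch: higher PAYGO tax rate tau -> lower steady-state capital.
def steady_state_capital(tau, alpha=0.3, beta=0.96 ** 30, n=0.0, iters=500):
    k = 1.0
    for _ in range(iters):
        w = (1 - alpha) * k ** alpha             # competitive wage
        s = beta / (1 + beta) * (1 - tau) * w    # log-utility saving out of net wage
        k = s / (1 + n)                          # capital per worker next period
    return k

for tau in (0.0, 0.1, 0.2):
    print(f"tau = {tau:.1f}: steady-state k = {steady_state_capital(tau):.4f}")
```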
