• Title/Summary/Keyword: Ordinal Data


Integrated AHP and DEA method for technology evaluation and selection: application to clean technology (기술 평가 및 선정을 위한 AHP와 DEA 통합 활용 방법: 청정기술에의 적용)

  • Yu, Peng;Lee, Jang Hee
    • Knowledge Management Research
    • /
    • v.13 no.3
    • /
    • pp.55-77
    • /
    • 2012
  • Selecting a promising technology is becoming more and more difficult as the number and complexity of technologies increase. In this study, we propose a hybrid AHP/DEA-AR method and a hybrid AHP/DEA-AR-G method to evaluate the efficiency of technology alternatives based on ordinal rating data collected through a survey of technology experts in a given field, and to select the efficient alternative as the promising technology. The proposed methods normalize the rating data and use AHP to derive weights, improving the credibility of the analysis; then, to avoid the problems of basic DEA models, they use DEA-AR and DEA-AR-G to evaluate the efficiency of the technology alternatives. We applied the proposed methods to clean technology and compared them with the basic DEA models. The comparison shows that both proposed methods are effective in identifying the most efficient technology, and that the hybrid AHP/DEA-AR method is easier to use in the technology selection process.
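As a rough illustration of the DEA building block this approach starts from, the sketch below solves a basic input-oriented CCR efficiency model as a linear program on hypothetical rating data; the paper's AHP-derived weights and assurance-region (AR/AR-G) restrictions are not reproduced here.

```python
# Toy input-oriented CCR DEA model solved as a linear program.
# X: inputs (m x n), Y: outputs (s x n) for n technology alternatives.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0, 4.0, 5.0],    # hypothetical ordinal ratings used as inputs
              [3.0, 2.0, 5.0, 4.0]])
Y = np.array([[4.0, 3.0, 5.0, 2.0]])   # hypothetical output ratings

def ccr_efficiency(j0, X, Y):
    m, n = X.shape
    s = Y.shape[0]
    # Decision variables: [theta, lambda_1, ..., lambda_n]; minimize theta.
    c = np.zeros(n + 1); c[0] = 1.0
    # Input constraints:  X @ lam <= theta * x_j0
    A_in = np.hstack([-X[:, [j0]], X])
    # Output constraints: Y @ lam >= y_j0 (written as -Y @ lam <= -y_j0)
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(m), -Y[:, j0]])
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

scores = [ccr_efficiency(j, X, Y) for j in range(X.shape[1])]
print(np.round(scores, 3))   # efficiency score of each technology alternative
```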


A generalized logit model with mixed effects for categorical data (다가자료에 대한 혼합효과모형)

  • Choi, Jae-Sung
    • The Korean Journal of Applied Statistics
    • /
    • v.15 no.1
    • /
    • pp.129-137
    • /
    • 2002
  • This paper suggests a generalized logit model with mixed effects for analyzing frequency data in a multi-way contingency table. In this model, the nominal response variable is assumed to be polychotomous. When some factors are fixed but treated as ordinal and others are random, the paper shows how to use baseline-category logits to incorporate the mixed effects of those factors into the model. A numerical algorithm is used to estimate the model parameters from the marginal log-likelihood.
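A minimal sketch of the baseline-category logit formulation mentioned above, computing category probabilities from hypothetical fixed-effect coefficients; the random-effect part of the paper's mixed model and the marginal-likelihood estimation are not reproduced.

```python
import numpy as np

def baseline_category_probs(x, betas):
    """Probabilities of a polychotomous response under a baseline-category
    logit model: log(p_j / p_J) = x @ beta_j for j = 1..J-1, with the last
    category J as the baseline."""
    etas = np.array([x @ b for b in betas])      # linear predictors, length J-1
    expo = np.exp(np.append(etas, 0.0))          # baseline category has eta = 0
    return expo / expo.sum()

x = np.array([1.0, 2.0])                                  # intercept + one hypothetical covariate
betas = [np.array([0.5, -0.3]), np.array([-0.2, 0.4])]    # hypothetical fixed-effect coefficients
print(baseline_category_probs(x, betas))                  # probabilities of the 3 response categories
```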

A generalized logit model with mixed effects for categorical data (다가자료에 대한 혼합효과모형)

  • Choi, Jae-Sung
    • Proceedings of the Korean Data and Information Science Society Conference
    • /
    • 2001.10a
    • /
    • pp.25-33
    • /
    • 2001
  • This paper suggests a generalized logit model with mixed effects for analyzing frequency data in a multi-way contingency table. In this model, the nominal response variable is assumed to be polychotomous. When some factors are fixed but treated as ordinal and others are random, the paper shows how to use baseline-category logits to incorporate the mixed effects of those factors into the model. A numerical algorithm is used to estimate the model parameters from the marginal log-likelihood.


Bayesian Analysis of Korean Alcohol Consumption Data Using a Zero-Inflated Ordered Probit Model (영 과잉 순서적 프로빗 모형을 이용한 한국인의 음주자료에 대한 베이지안 분석)

  • Oh, Man-Suk;Oh, Hyun-Tak;Park, Se-Mi
    • The Korean Journal of Applied Statistics
    • /
    • v.25 no.2
    • /
    • pp.363-376
    • /
    • 2012
  • Excessive zeros are often observed in ordinal categorical response variables. An ordinary ordered probit model is not appropriate for zero-inflated data, especially when the zero observations arise from several different sources. In this paper, we apply a two-stage zero-inflated ordered probit (ZIOP) model that incorporates the zero-inflated nature of the data, propose a Bayesian analysis of the ZIOP model, and apply the method to alcohol consumption data collected by the National Bureau of Statistics, Korea. In the first stage of the ZIOP model, a probit model divides the non-drinkers into genuine non-drinkers, who do not participate in drinking due to personal beliefs or permanent health problems, and potential drinkers, who did not drink at the time of the survey but could become drinkers. In the second stage, an ordered probit model is applied to the drinker group, which consists of zero-consumption potential drinkers and positive-consumption drinkers. The analysis shows that about 30% of non-drinkers are genuine non-drinkers, so the Korean alcohol consumption data are indeed zero-inflated. A study of the marginal effect of each explanatory variable shows that certain explanatory variables affect genuine non-drinkers and potential drinkers in opposite directions, which may not be detected by an ordinary ordered probit model.
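A minimal sketch of the two-stage ZIOP category probabilities described above, using hypothetical coefficients and cutpoints; the Bayesian estimation itself is not shown.

```python
import numpy as np
from scipy.stats import norm

def ziop_probs(z, gamma, x, beta, cuts):
    """Zero-inflated ordered probit: stage 1 (probit) models participation,
    stage 2 (ordered probit) models consumption level among participants.
    Observed zeros mix genuine non-drinkers and zero-consumption drinkers."""
    p_part = norm.cdf(z @ gamma)                     # P(potential drinker)
    cuts = np.concatenate([[-np.inf], cuts, [np.inf]])
    eta = x @ beta
    cat = norm.cdf(cuts[1:] - eta) - norm.cdf(cuts[:-1] - eta)  # ordered-probit category probs
    probs = p_part * cat
    probs[0] += 1.0 - p_part                         # genuine non-drinkers add to category 0
    return probs

z = np.array([1.0, 0.5]); gamma = np.array([0.2, 0.6])   # hypothetical participation covariates
x = np.array([1.0, 0.5]); beta = np.array([0.1, 0.8])    # hypothetical consumption covariates
cuts = np.array([0.0, 1.0, 2.0])                          # hypothetical cutpoints -> 4 categories
print(ziop_probs(z, gamma, x, beta, cuts))                # category probabilities, sum to 1
```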

Effect of Liquidity, Profitability, Leverage, and Firm Size on Dividend Policy

  • PATTIRUHU, Jozef R.;PAAIS, Maartje
    • The Journal of Asian Finance, Economics and Business
    • /
    • v.7 no.10
    • /
    • pp.35-42
    • /
    • 2020
  • This study investigates the relationship between Current Ratio (CR), Return-on-Equity (ROE), Return-on-Assets (ROA), Debt-to-Equity Ratio (DER), and Firm Size (FS) and Dividend Policy (DP) in real estate and property companies listed on the Indonesia Stock Exchange in the period 2016-2019, looking at nine real estate companies in Indonesia. The research methodology uses an explanatory analysis approach and linear regression. Based on the eligibility and homogeneity of the data, nine sample companies were selected. The companies' financial statement data were derived from primary data obtained from the Indonesia Stock Exchange, covering the current ratio (CR), return-on-equity (ROE), return-on-assets (ROA), debt-to-equity ratio (DER), firm size, and dividend policy variables. The data analysis procedure first transforms the financial data from the original ratio scale into interval data and then into ordinal data. Validity and reliability testing is omitted because the data are primary. Finally, regression testing forms the hypothesis testing stage. The results show that CR, ROE, and firm size had no positive and significant effect on dividend policy, whereas DER and ROA have a positive and significant impact on dividend policy.
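A small illustrative sketch of the transformation step described above, binning ratio-scale financial data into ordinal categories (collapsing the intermediate interval step) before a linear regression; the variable values below are made up and are not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical financial ratios for a few firm-years (not the study's data).
df = pd.DataFrame({
    "CR":  [1.2, 0.8, 2.5, 1.9, 0.6, 3.1],
    "ROA": [0.05, 0.02, 0.11, 0.07, -0.01, 0.09],
    "DPR": [0.30, 0.10, 0.55, 0.40, 0.00, 0.60],   # dividend payout ratio (response)
})

# Transform ratio data to ordinal data via quantile binning (labels 1..3).
ordinal = df[["CR", "ROA"]].apply(
    lambda s: pd.qcut(s, q=3, labels=[1, 2, 3]).astype(int))

# Linear regression of dividend policy on the ordinal predictors.
X = sm.add_constant(ordinal)
model = sm.OLS(df["DPR"], X).fit()
print(model.params)
```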

A Simplified Model of the CIA based on Scaling Theory (척도이론에 근거한 CIA의 간편화 모형)

  • Jeon, Jeong-Cheol;Im, Dong-Jun;An, Gi-Hyeon;Gwon, Cheol-Sin
    • Proceedings of the Korean Operations and Management Science Society Conference
    • /
    • 2008.10a
    • /
    • pp.444-447
    • /
    • 2008
  • This study develops an improved version of the Cross Impact Analysis (CIA) model based on scaling theory. In developing the model, we applied scale transformation and regression techniques to the existing CIA model. The improved CIA model is composed of two sub-models: a 'model for impact value measurement' and a 'model for impact value conversion'. We applied to the CIA model a technique that measures data on an ordinal scale and then transforms them into interval-scale and ratio-scale data. The accuracy of forecasting and the usability of CIA applications have been improved.


Bayesian inference of the cumulative logistic principal component regression models

  • Kyung, Minjung
    • Communications for Statistical Applications and Methods
    • /
    • v.29 no.2
    • /
    • pp.203-223
    • /
    • 2022
  • We propose a Bayesian approach to the cumulative logistic regression model for ordinal responses, based on the orthogonal principal components obtained via singular value decomposition, to account for multicollinearity among predictors. The advantage of the suggested method is that it handles dimension reduction and parameter estimation simultaneously. To evaluate the performance of the proposed model we conduct a simulation study with a high-dimensional, highly correlated explanatory matrix. We also fit the suggested method to real data concerning sprout- and scab-damaged kernels of wheat and compare it to an EM-based proportional-odds logistic regression model. Compared to EM-based methods, the proposed model works better for highly correlated, high-dimensional data, providing parameter estimates and good predictions.
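A rough, non-Bayesian sketch of the model's core on synthetic data: orthogonal principal components from an SVD of the predictor matrix feed a cumulative (proportional-odds) logit fit; the Bayesian machinery of the paper is not reproduced.

```python
import numpy as np
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)

# Synthetic correlated predictors (a stand-in for the wheat-kernel data).
n, p = 200, 10
base = rng.normal(size=(n, 3))
X = base @ rng.normal(size=(3, p)) + 0.1 * rng.normal(size=(n, p))

# Orthogonal principal components via SVD of the centered predictor matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 3
Z = Xc @ Vt[:k].T                     # first k principal component scores

# Synthetic ordinal response with 3 levels driven by the first component.
latent = Z[:, 0] + rng.logistic(size=n)
y = np.digitize(latent, np.quantile(latent, [1 / 3, 2 / 3]))

# Cumulative (proportional-odds) logistic regression on the component scores.
fit = OrderedModel(y, Z, distr="logit").fit(method="bfgs", disp=False)
print(fit.params)
```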

Imprecise DEA Efficiency Assessments : Characterizations and Methods

  • Park, Kyung-Sam
    • Management Science and Financial Engineering
    • /
    • v.14 no.2
    • /
    • pp.67-87
    • /
    • 2008
  • Data envelopment analysis (DEA) has proven to be a useful tool for assessing the efficiency or productivity of organizations, which is of vital practical importance in managerial decision making. While DEA assumes exact input and output data, the development of imprecise DEA (IDEA) broadens the scope of applications to efficiency evaluations involving imprecise information, that is, the various forms of ordinal and bounded data that often occur in practice. The primary purpose of this article is to characterize the variable efficiency in IDEA. Since DEA describes a pair of primal and dual models, also called envelopment and multiplier models, we can consider two IDEA models: one incorporates imprecise data into the envelopment model and the other includes the same imprecise data in the multiplier model. The issues of rising importance are thus the relationship between the two models and how to solve them. The groundwork includes a duality study that makes it possible to characterize the efficiency solutions of the two models. This also explains why we consider the variable efficiency and its bounds in IDEA, as some published IDEA studies have done. We also present computational aspects of the efficiency bounds and how to interpret the efficiency solutions.

Analyzing empirical performance of correlation based feature selection with company credit rank score dataset - Emphasis on KOSPI manufacturing companies -

  • Nam, Youn Chang;Lee, Kun Chang
    • Journal of the Korea Society of Computer and Information
    • /
    • v.21 no.4
    • /
    • pp.63-71
    • /
    • 2016
  • This paper applies an efficient data mining method that improves the score calculation and construction of a credit ranking score system. The main idea is to accomplish these objectives by applying correlation-based feature selection, which can also be used to quickly verify the appropriateness of existing rank scores. This study selected 2,047 manufacturing companies on the KOSPI market during the period 2009 to 2013, each with a credit rank score assigned by the NICE information service agency. As the relevant financial variables, a total of 80 variables were collected from KIS-Value and DART (Data Analysis, Retrieval and Transfer System). If correlation-based feature selection can select the more important variables, the required information and cost are reduced significantly. The analysis shows that the proposed correlation-based feature selection method improves the selection and classification process of the credit rank system, increasing accuracy and credibility while decreasing the cost of building the system.
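A minimal sketch of the correlation-based feature selection merit heuristic on made-up data, with the usual best-first search reduced to greedy forward selection; the study's financial variables and credit rank scores are not reproduced.

```python
import numpy as np

def cfs_merit(X, y, subset):
    """CFS merit of a feature subset: rewards high feature-class correlation
    and penalizes high feature-feature inter-correlation."""
    k = len(subset)
    r_cf = np.mean([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in subset])
    if k == 1:
        r_ff = 0.0
    else:
        r_ff = np.mean([abs(np.corrcoef(X[:, i], X[:, j])[0, 1])
                        for a, i in enumerate(subset) for j in subset[a + 1:]])
    return k * r_cf / np.sqrt(k + k * (k - 1) * r_ff)

# Hypothetical standardized financial variables and a numeric credit rank score.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 6))
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=100)

# Greedy forward selection on the merit score.
selected, remaining = [], list(range(X.shape[1]))
while remaining:
    best = max(remaining, key=lambda j: cfs_merit(X, y, selected + [j]))
    if selected and cfs_merit(X, y, selected + [best]) <= cfs_merit(X, y, selected):
        break
    selected.append(best); remaining.remove(best)
print(selected)   # indices of the retained features
```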

Bayesian analysis of cumulative logit models using the Monte Carlo Gibbs sampling (몬테칼로깁스표본기법을 이용한 누적로짓 모형의 베이지안 분석)

  • Oh, Man-Suk
    • The Korean Journal of Applied Statistics
    • /
    • v.10 no.1
    • /
    • pp.151-161
    • /
    • 1997
  • An easy Monte Carlo Gibbs sampling approach is suggested for the Bayesian analysis of cumulative logit models for ordinal polytomous data. Because in the cumulative logit model the posterior conditional distributions of the parameters do not have forms convenient for random sample generation, appropriate latent variables are introduced into the model so that, in the augmented model, all the conditional distributions take very convenient forms for implementing the Gibbs sampler.
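A small illustration, under simplifying assumptions, of the latent-variable device the abstract refers to: given cutpoints and a linear predictor, the latent variable for an observation in category j is drawn from a logistic distribution truncated to the corresponding interval, which is what makes the Gibbs conditionals easy to sample; the full sampler for the regression coefficients and cutpoints is not reproduced.

```python
import numpy as np
from scipy.stats import logistic

rng = np.random.default_rng(2)

def draw_latent(eta, y, cuts):
    """Draw latent utilities z_i ~ Logistic(eta_i, 1) truncated to
    (cuts[y_i], cuts[y_i + 1]), via inverse-CDF sampling."""
    cuts = np.concatenate([[-np.inf], cuts, [np.inf]])
    lo = logistic.cdf(cuts[y] - eta)        # CDF at the lower cutpoint
    hi = logistic.cdf(cuts[y + 1] - eta)    # CDF at the upper cutpoint
    u = rng.uniform(lo, hi)                 # uniform draw on the truncated region
    return eta + logistic.ppf(u)

eta = np.array([-0.5, 0.2, 1.3])            # hypothetical linear predictors x_i' beta
y = np.array([0, 1, 2])                     # observed ordinal categories (3 levels)
cuts = np.array([0.0, 1.0])                 # hypothetical cutpoints
print(draw_latent(eta, y, cuts))            # one Gibbs draw of the latent variables
```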
