• Title/Summary/Keyword: Multiple-Criteria Decision Making


The Prediction of DEA based Efficiency Rating for Venture Business Using Multi-class SVM (다분류 SVM을 이용한 DEA기반 벤처기업 효율성등급 예측모형)

  • Park, Ji-Young;Hong, Tae-Ho
    • Asia pacific journal of information systems
    • /
    • v.19 no.2
    • /
    • pp.139-155
    • /
    • 2009
  • For the last few decades, many studies have tried to explore and unveil venture companies' success factors and unique features in order to identify the sources of such companies' competitive advantages over their rivals. Venture companies have tended to deliver high returns to investors, generally by making the best use of information technology, and for this reason many of them are keen on attracting investors' attention. Investors generally make their investment decisions by carefully examining the evaluation criteria of the alternatives. To them, credit rating information provided by international rating agencies such as Standard & Poor's, Moody's, and Fitch is a crucial source on pivotal concerns such as a company's stability, growth, and risk status. However, this type of information is generated only for companies issuing corporate bonds, not for venture companies. Therefore, this study proposes a method for evaluating venture businesses and presents recent empirical results using financial data of Korean venture companies listed on KOSDAQ in the Korea Exchange. In addition, this paper uses multi-class SVM to predict the DEA-based efficiency ratings of venture businesses derived from the proposed method. Our approach sheds light on ways to locate efficient companies that generate high levels of profit. Above all, in determining effective ways to evaluate a venture firm's efficiency, it is important to understand the major factors contributing to that efficiency. Therefore, this paper is built on the following two ideas for classifying which companies are more efficient venture companies: i) constructing a DEA-based multi-class rating for the sample companies and ii) developing a multi-class SVM-based efficiency prediction model for classifying all companies. First, Data Envelopment Analysis (DEA) is a non-parametric multiple input-output efficiency technique that measures the relative efficiency of decision making units (DMUs) using a linear programming based model. It is non-parametric because it requires no assumption about the shape or parameters of the underlying production function. DEA has already been widely applied for evaluating the relative efficiency of DMUs. Recently, a number of DEA-based studies have evaluated the efficiency of various types of companies, such as internet companies and venture companies, and DEA has also been applied to corporate credit ratings. In this study we used DEA to sort venture companies into efficiency-based ratings. The Support Vector Machine (SVM), on the other hand, is a popular technique for solving data classification problems. In this paper, we employed SVM to classify the efficiency ratings of IT venture companies according to the DEA results. The SVM method was first developed by Vapnik (1995). As one of many machine learning techniques, SVM is grounded in statistical learning theory and has shown good performance, especially in its generalization capacity in classification tasks, resulting in numerous applications in many areas of business. SVM is basically an algorithm that finds the maximum margin hyperplane, i.e., the hyperplane with the maximum separation between classes; the support vectors are the training points closest to this hyperplane. When the classes cannot be separated linearly, a kernel function can be used: for nonlinear class boundaries, the inputs are mapped from the original input space into a high-dimensional dot-product feature space. Many studies have applied SVM to bankruptcy prediction, financial time series forecasting, and credit rating estimation. In this study we employed SVM to develop a data mining-based efficiency prediction model, using the Gaussian radial basis function as the kernel. For multi-class SVM, we adopted the one-against-one approach based on binary classifiers as well as two all-together methods, proposed by Weston and Watkins (1999) and Crammer and Singer (2000), respectively. In this research, we used corporate information on 154 companies listed on the KOSDAQ market in the Korea Exchange, obtaining their financial information for 2005 from KIS (Korea Information Service, Inc.). Using these data, we constructed a multi-class rating from the DEA efficiency scores and built a data mining-based multi-class prediction model. Among the three multi-classification methods, the Weston and Watkins method achieved the best hit ratio on the test data set. In multi-classification problems such as efficiency ratings of venture businesses, it is very useful for investors to know the predicted class to within a one-class error when the exact class is difficult to identify in the actual market, so we also report accuracy within one class: the Weston and Watkins method showed 85.7% accuracy on our test samples. We conclude that the DEA-based multi-class approach for venture businesses generates more information than a binary classification problem, regardless of the efficiency level. We believe this model can help investors in decision making, as it provides a reliable tool for evaluating venture companies in the financial domain. For future research, we see a need to improve the variable selection process, the selection of kernel parameters, the generalization ability, and the sample size for each class.
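
The abstract describes a two-step pipeline: DEA efficiency scores obtained by linear programming, then a multi-class SVM with an RBF kernel trained on the resulting rating classes. The sketch below is an illustration of that pipeline, not the paper's code: the firm data are synthetic, the number of inputs/outputs and the four rating classes are assumptions, and scikit-learn's SVC implements the one-against-one scheme only (the Weston-Watkins and Crammer-Singer all-together formulations would need a dedicated solver).

```python
# Sketch: input-oriented CCR DEA scores via linprog, then RBF-kernel multi-class SVM.
import numpy as np
from scipy.optimize import linprog
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_dmu, n_in, n_out = 154, 3, 2                   # e.g., 154 firms, 3 inputs, 2 outputs
X_in = rng.uniform(1.0, 10.0, (n_dmu, n_in))     # inputs (e.g., assets, employees, costs)
Y_out = rng.uniform(1.0, 10.0, (n_dmu, n_out))   # outputs (e.g., sales, profit)

def ccr_efficiency(k):
    """Input-oriented CCR score of DMU k: minimize theta subject to a lambda-weighted
    combination of all DMUs using at most theta * inputs_k and at least outputs_k."""
    c = np.r_[1.0, np.zeros(n_dmu)]                  # minimize theta
    A_in = np.c_[-X_in[k], X_in.T]                   # sum_j lam_j * x_ij <= theta * x_ik
    A_out = np.c_[np.zeros(n_out), -Y_out.T]         # sum_j lam_j * y_rj >= y_rk
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(n_in), -Y_out[k]]
    bounds = [(None, None)] + [(0.0, None)] * n_dmu
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

theta = np.array([ccr_efficiency(k) for k in range(n_dmu)])

# Bin efficiency scores into four rating classes (quartiles) and predict the class
# from the raw financial variables with an RBF-kernel SVM (one-against-one internally).
ratings = np.digitize(theta, np.quantile(theta, [0.25, 0.5, 0.75]))
features = np.hstack([X_in, Y_out])
X_tr, X_te, y_tr, y_te = train_test_split(features, ratings, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_tr, y_tr)

pred = clf.predict(X_te)
print(f"hit ratio: {(pred == y_te).mean():.3f}, "
      f"within-1-class accuracy: {(np.abs(pred - y_te) <= 1).mean():.3f}")
```

The "within-1-class accuracy" line mirrors the abstract's reporting of accuracy within one class of the true rating.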

Review on Quantitative Measures of Robustness for Building Structures Against Disproportionate Collapse

  • Jiang, Jian;Zhang, Qijie;Li, Liulian;Chen, Wei;Ye, Jihong;Li, Guo-Qiang
    • International Journal of High-Rise Buildings
    • /
    • v.9 no.2
    • /
    • pp.127-154
    • /
    • 2020
  • Disproportionate collapse triggered by local structural failure may cause heavy casualties and economic losses and is one of the most critical types of civil engineering incident. It is generally recognized that ensuring the robustness of a structure, defined as its insensitivity to local failure, is the most accepted and effective way to arrest disproportionate collapse. To date, both the definition and the quantification of robustness remain controversial. This paper presents a detailed review of about 50 quantitative measures of robustness for building structures, classified into structural attribute-based and structural performance-based measures (deterministic and probabilistic). The definition of robustness is first described and distinguished from collapse resistance, vulnerability, and redundancy. The review shows that deterministic measures predominate in quantifying structural robustness by comparing the structural responses of an intact and a damaged structure. Attribute-based measures derived from structural topology and stiffness are applicable only to the elastic state of simple structural forms, while probabilistic measures are receiving growing interest because they account for uncertainties in abnormal events, local failure, the structural system, and failure-induced consequences, and can serve as decision-making tools. There is still a lack of a generalized quantification of robustness, which should be derived from the definition and design objectives, from the response of a structure to local damage, and from the associated consequences of collapse. Critical issues and recommendations for future design and research on the quantification of robustness are provided with respect to column removal scenarios, types of structures, regularity of structural layouts, collapse modes, numerical methods, multiple hazards, degrees of robustness, partial damage of components, and acceptable design criteria.
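
The deterministic measures mentioned above typically compare a response or capacity quantity of the intact structure with that of the same structure after a notional member removal. Purely as an illustration (the review covers about 50 such measures, and the capacity values below are hypothetical placeholders), the sketch shows two generic indices of this comparative form, a redundancy index and a residual strength ratio.

```python
# Illustrative sketch only, not taken from the reviewed paper: two generic
# deterministic robustness/redundancy indices that compare an intact structure
# with the same structure after a notional column removal.
from dataclasses import dataclass

@dataclass
class PushdownResult:
    collapse_load_intact: float    # e.g., ultimate load factor of the intact frame
    collapse_load_damaged: float   # same quantity after removing one column

def redundancy_index(r: PushdownResult) -> float:
    """R = L_intact / (L_intact - L_damaged): larger values mean less strength
    is lost when the member is removed (more robust)."""
    loss = r.collapse_load_intact - r.collapse_load_damaged
    return float("inf") if loss <= 0 else r.collapse_load_intact / loss

def residual_strength_ratio(r: PushdownResult) -> float:
    """RSR = L_damaged / L_intact in [0, 1]: fraction of capacity retained."""
    return r.collapse_load_damaged / r.collapse_load_intact

scenario = PushdownResult(collapse_load_intact=4.2, collapse_load_damaged=3.1)  # hypothetical
print(f"redundancy index: {redundancy_index(scenario):.2f}")
print(f"residual strength ratio: {residual_strength_ratio(scenario):.2f}")
```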

Self-Efficacy as a Predictor of Self-Care in Persons with Diabetes Mellitus: Meta-Analysis

  • Lee, Hyang-Yeon
    • Journal of Korean Academy of Nursing
    • /
    • v.29 no.5
    • /
    • pp.1087-1102
    • /
    • 1999
  • Diabetes mellitus, a universal and prevalent chronic disease, is projected to be one of the most formidable worldwide health problems of the 21st century. Those living with diabetes need self-care skills to manage a complex medical regimen. Self-efficacy, which refers to one's belief in one's capability to monitor and perform the daily activities required to manage diabetes, has been found to be related to self-care. The concept of self-efficacy comes from social cognitive theory, which maintains that cognitive mechanisms mediate the performance of behavior. The literature cites several research studies that show a strong relationship between self-efficacy and self-care behavior. Meta-analysis is a technique that enables systematic review and quantitative integration of the results from multiple primary studies relevant to a particular research question. Therefore, this study used meta-analysis to quantitatively integrate the results of independent research studies and obtain numerical estimates of the overall effect of self-efficacy on self-care behaviors in patients with diabetes. The research proceeded in three stages: 1) literature search and retrieval of studies in which self-efficacy was related to self-care, 2) coding, and 3) calculation of mean effect sizes and data analysis. Seventeen studies met the research criteria: a study population of adults with diabetes, measures of self-care, and measures of self-efficacy as a predictive variable. Effect sizes were computed with DSTAT, a statistical computer program specifically designed for meta-analysis. To determine the effect of self-efficacy on self-care practice, homogeneity tests were conducted. Pooled effect size estimates for the composite variables, metabolic control variables, and the components of self-efficacy and self-care indicated that the effect of the self-efficacy composite on the self-care composite was moderate to large. The weighted mean effect size for the self-efficacy and self-care composites was +.76, with a confidence interval from +.66 to +.86 and 1,545 subjects. Overall, the weighted mean effect sizes in this meta-analysis ranged from +.70 to +1.81, which indicates a large effect. However, since the reliabilities of the instruments in the primary studies were low or not stated, caution must be applied in unconditionally accepting these effect sizes. Meta-analysis is a useful tool for clarifying the status of knowledge development and guiding decision making about future research, and this study confirmed that there is a relationship between self-efficacy and self-care in patients with diabetes. It thus provides support for nurses to promote self-efficacy in their patients. While most of the studies included in this meta-analysis used social cognitive theory as a framework, some used Fishbein and Ajzen's attitude model as a model for active self-care. Future research is needed to more fully define the concept of self-care and to determine what makes patients feel competent in their self-care activities. The results of this study showed that self-efficacy can promote self-care. Future experimental research is needed to determine nursing interventions that will increase self-efficacy.
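
For readers unfamiliar with the mechanics behind a weighted mean effect size, its confidence interval, and the homogeneity test mentioned above, the following is a minimal fixed-effect sketch. It is not the paper's DSTAT analysis: the per-study effect sizes and sample sizes are hypothetical placeholders, and only the standard inverse-variance formulas are shown.

```python
# Minimal fixed-effect meta-analysis sketch: inverse-variance weighted mean effect
# size, 95% CI, and Cochran's Q homogeneity statistic on hypothetical study data.
import numpy as np
from scipy import stats

d = np.array([0.55, 0.80, 0.70, 0.95, 0.60])      # standardized mean differences (hypothetical)
n1 = np.array([40, 35, 50, 28, 60])               # group sizes per study (hypothetical)
n2 = np.array([38, 40, 45, 30, 55])

# Approximate sampling variance of Cohen's d for two independent groups.
var_d = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
w = 1.0 / var_d                                   # inverse-variance weights

d_bar = np.sum(w * d) / np.sum(w)                 # weighted mean effect size
se = np.sqrt(1.0 / np.sum(w))
ci_low, ci_high = d_bar - 1.96 * se, d_bar + 1.96 * se

Q = np.sum(w * (d - d_bar) ** 2)                  # Cochran's Q (homogeneity test)
p_Q = stats.chi2.sf(Q, df=len(d) - 1)

print(f"weighted mean d = {d_bar:+.2f}, 95% CI [{ci_low:+.2f}, {ci_high:+.2f}]")
print(f"homogeneity: Q = {Q:.2f}, p = {p_Q:.3f}")
```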

A study of Vertical Handover between LTE and Wireless LAN Systems using Adaptive Fuzzy Logic Control and Policy based Multiple Criteria Decision Making Method (LTE/WLAN 이종망 환경에서 퍼지제어와 정책적 다기준 의사결정법을 이용한 적응적 VHO 방안 연구)

  • Lee, In-Hwan;Kim, Tae-Sub;Cho, Sung-Ho
    • The KIPS Transactions:PartC
    • /
    • v.17C no.3
    • /
    • pp.271-280
    • /
    • 2010
  • In next-generation mobile communication systems, diverse wireless network technologies such as beyond-3G LTE, WiMAX/WiBro, and next-generation WLAN are being integrated into an All-IP core network. In such a beyond-3G environment built on heterogeneous wireless access technologies, vertical handover must be supported so that several radio networks can be used; however, unified management is required because each network is currently operated separately. To solve this problem, this study introduces Common Radio Resource Management (CRRM) based on a Generic Link Layer (GLL). We design the structure and functions needed to support vertical handover and propose a vertical handover algorithm between LTE and WLAN systems that combines policy-based decisions with a multiple criteria decision making (MCDM) method on top of the GLL. Finally, simulation results show improved performance in terms of data throughput, handover success rate, system service cost, and number of handover attempts.
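
To make the MCDM step concrete, here is a minimal sketch of one common scoring scheme for network selection, simple additive weighting (SAW) over normalized criteria. The criteria, weights, and measured values are hypothetical, and the paper's actual algorithm additionally uses adaptive fuzzy logic control and operator policies, which are not modeled here.

```python
# Hypothetical SAW-based network selection step for LTE/WLAN vertical handover.
from typing import Dict

# Per-network measurements: criterion -> value (illustrative placeholders).
candidates: Dict[str, Dict[str, float]] = {
    "LTE":  {"throughput_mbps": 30.0, "rss_dbm": -85.0, "cost": 0.8, "load": 0.6},
    "WLAN": {"throughput_mbps": 20.0, "rss_dbm": -60.0, "cost": 0.2, "load": 0.3},
}
weights = {"throughput_mbps": 0.4, "rss_dbm": 0.2, "cost": 0.2, "load": 0.2}
benefit = {"throughput_mbps": True, "rss_dbm": True, "cost": False, "load": False}

def saw_scores(cands, weights, benefit):
    """Min-max normalize each criterion across candidates, flip cost-type
    criteria, then return the weighted sum per candidate network."""
    scores = {name: 0.0 for name in cands}
    for crit, w in weights.items():
        vals = [c[crit] for c in cands.values()]
        lo, hi = min(vals), max(vals)
        for name, c in cands.items():
            norm = 0.5 if hi == lo else (c[crit] - lo) / (hi - lo)
            if not benefit[crit]:
                norm = 1.0 - norm          # smaller is better for cost-type criteria
            scores[name] += w * norm
    return scores

scores = saw_scores(candidates, weights, benefit)
print(scores, "-> handover target:", max(scores, key=scores.get))
```

In a full vertical handover scheme, the weights themselves would typically be adapted (e.g., by the fuzzy controller or policy rules) rather than fixed as they are in this sketch.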

A Route Search of Urban Traffic Network using Fuzzy Non-Additive Control (퍼지 비가법 제어를 이용한 도시 교통망의 경로 탐색)

  • 이상훈;김성환
    • Journal of Korean Society of Transportation
    • /
    • v.21 no.1
    • /
    • pp.103-113
    • /
    • 2003
  • This paper addresses alternative route search and preference route search for traffic routing, and proposes a fuzzy non-additive controller combined with the analytic hierarchy process (AHP). Unlike classical route search, the approach reflects the way human drivers think. Appraisal elements and route weights were extracted from opinions gathered from driving experts, and an example route model was used to examine practical utility. Model evaluation consisted of constructing attribute membership functions for the appraisal elements, setting the estimated values, defining weights with the AHP, expressing the weights non-additively through a $\lambda$-fuzzy measure, and aggregating with the Choquet fuzzy integral. As a result, the alternative route search supported real-time route search in highly variable traffic environments, and the preference route search reflected the route-choice disposition of individual drivers. This paper has five important implications. (1) The approach is similar to the driver's route selection decision process. (2) It can handle route appraisal criteria with multiple attributes. (3) It makes subjective judgment objective through a non-additive measure. (4) It provides dynamic route search for the alternative route search. (5) It can consider the characteristics of individual drivers in the preference route search.
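
A minimal sketch of the non-additive aggregation the abstract describes: a Sugeno $\lambda$-fuzzy measure is built from single-criterion densities (standing in for AHP-derived weights) and a discrete Choquet integral aggregates one route's appraisal scores. The criteria names, densities, and scores below are hypothetical, not values from the paper.

```python
# Sketch: Sugeno lambda-fuzzy measure from densities + discrete Choquet integral.
import numpy as np
from scipy.optimize import brentq

densities = {"travel_time": 0.45, "congestion": 0.30, "distance": 0.15, "signals": 0.20}
route_scores = {"travel_time": 0.7, "congestion": 0.5, "distance": 0.9, "signals": 0.6}
g = np.array(list(densities.values()))

def solve_lambda(g, eps=1e-9):
    """Solve prod(1 + lam*g_i) = 1 + lam for lam in (-1, inf), lam != 0."""
    if abs(g.sum() - 1.0) < eps:
        return 0.0                                 # additive case
    f = lambda lam: np.prod(1.0 + lam * g) - (1.0 + lam)
    # sum(g) < 1 -> root is positive; sum(g) > 1 -> root lies in (-1, 0)
    return brentq(f, eps, 1e6) if g.sum() < 1.0 else brentq(f, -1.0 + eps, -eps)

lam = solve_lambda(g)

def measure(subset, densities, lam):
    """Sugeno lambda-measure of a subset of criteria."""
    vals = np.array([densities[c] for c in subset])
    if len(vals) == 0:
        return 0.0
    return vals.sum() if lam == 0.0 else (np.prod(1.0 + lam * vals) - 1.0) / lam

def choquet(scores, densities, lam):
    """Discrete Choquet integral: sort criteria by score (descending) and accumulate
    each score times the marginal measure of the growing 'top' coalition."""
    order = sorted(scores, key=scores.get, reverse=True)
    total, prev = 0.0, 0.0
    for i in range(len(order)):
        g_top = measure(order[: i + 1], densities, lam)
        total += scores[order[i]] * (g_top - prev)
        prev = g_top
    return total

print(f"lambda = {lam:.3f}, Choquet score of route = {choquet(route_scores, densities, lam):.3f}")
```

Because the densities here sum to more than one, the solved $\lambda$ is negative, modeling redundancy (overlap) between criteria; densities summing to less than one would give a positive $\lambda$ and model synergy.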

Analysis and Evaluation of Frequent Pattern Mining Technique based on Landmark Window (랜드마크 윈도우 기반의 빈발 패턴 마이닝 기법의 분석 및 성능평가)

  • Pyun, Gwangbum;Yun, Unil
    • Journal of Internet Computing and Services
    • /
    • v.15 no.3
    • /
    • pp.101-107
    • /
    • 2014
  • With the development of online services, recent databases have shifted from static database structures to dynamic stream database structures. Data mining techniques have long been used as decision-making tools, for example in establishing marketing strategies and in DNA analysis, but the ability to analyze real-time data quickly is essential in areas of current interest such as sensor networks, robotics, and artificial intelligence. Landmark window-based frequent pattern mining, one of the stream mining approaches, performs mining operations on parts of the database or on individual transactions instead of on all the data. In this paper, we analyze and evaluate two well-known landmark window-based frequent pattern mining algorithms, Lossy counting and hMiner. When Lossy counting mines frequent patterns from a set of new transactions, it performs union operations between the previous and current mining results. hMiner, a state-of-the-art algorithm based on the landmark window model, conducts mining operations whenever a new transaction occurs. Since hMiner extracts frequent patterns as soon as a new transaction arrives, the latest mining results reflect real-time information; for this reason, such algorithms are also called online mining approaches. We evaluate and compare the performance of the earlier algorithm, Lossy counting, and the more recent one, hMiner. As the criteria of our performance analysis, we first consider total runtime and average processing time per transaction. In addition, to compare the efficiency of their storage structures, their maximum memory usage is evaluated. Lastly, we show how stably the two algorithms mine databases in which the number of items gradually increases. With respect to mining time and transaction processing, hMiner is faster than Lossy counting: because hMiner stores candidate frequent patterns in a hash structure, it can access them directly, whereas Lossy counting stores them in a lattice and must search multiple nodes to reach them. On the other hand, hMiner performs worse than Lossy counting in terms of maximum memory usage. hMiner must keep complete information for every candidate frequent pattern in its hash buckets, while Lossy counting reduces this information through the lattice method; since the lattice can share items that appear in multiple patterns, its memory usage is more efficient than hMiner's. However, hMiner shows better scalability than Lossy counting for the following reasons: as the number of items increases, fewer items are shared between patterns, so Lossy counting's memory efficiency weakens, and as the number of transactions grows, its pruning effect also degrades. From the experimental results, we conclude that landmark window-based frequent pattern mining algorithms are suitable for real-time systems, although they require a significant amount of memory. Hence, their data structures need to be made more efficient so that they can also be used in resource-constrained environments such as wireless sensor networks (WSNs).
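
To illustrate the landmark-window idea of mining a stream without storing all of it, here is a minimal sketch of the classic Lossy Counting algorithm in its simpler frequent-item form (the paper compares the itemset variant against hMiner). The error parameter, support threshold, and stream contents are hypothetical.

```python
# Sketch of Lossy Counting (frequent-item form): approximate counts with bounded
# undercount, pruning infrequent entries at the end of each bucket.
from math import ceil

def lossy_counting(stream, epsilon=0.01):
    """Return approximate counts such that every item with true frequency
    >= epsilon * N is retained, where N is the number of elements seen."""
    bucket_width = ceil(1.0 / epsilon)
    counts, deltas = {}, {}            # item -> count, item -> maximum undercount
    n = 0
    for item in stream:
        n += 1
        current_bucket = ceil(n / bucket_width)
        if item in counts:
            counts[item] += 1
        else:
            counts[item] = 1
            deltas[item] = current_bucket - 1
        if n % bucket_width == 0:      # end of bucket: prune low-count entries
            for key in list(counts):
                if counts[key] + deltas[key] <= current_bucket:
                    del counts[key], deltas[key]
    return counts, n

# Items whose estimated count clears (s - epsilon) * N are reported as frequent.
stream = ["a", "b", "a", "c", "a", "b", "d", "a", "b", "a"] * 100   # hypothetical stream
counts, n = lossy_counting(stream, epsilon=0.05)
s = 0.2                                # minimum support threshold
frequent = {k: v for k, v in counts.items() if v >= (s - 0.05) * n}
print(frequent)
```

The itemset version evaluated in the paper follows the same bucket-and-prune scheme but must enumerate candidate itemsets per transaction, which is where the lattice storage discussed above comes in.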