• Title/Summary/Keyword: Output Variable

Search Results: 1,176

Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1041-1043
    • /
    • 1993
  • Fuzzy logic based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve in a very natural way many problems that are very difficult to quantify at an analytical level. This paper shows a solution for treating membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify such an implementation, it is common [1,2,8,9,10,11] to limit the membership functions either to those having a triangular or trapezoidal shape, or to pre-defined shapes. These kinds of functions are able to cover a large spectrum of applications with a limited usage of memory, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power due to computation on the medium points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the value of the membership functions at such points [3,10,14,15]. Such a solution provides a satisfying computational speed, a very high precision of definition and gives users the opportunity to choose membership functions of any shape. However, a significant waste of memory can also be registered: it is indeed possible that, for each of the given fuzzy sets, many elements of the universe of discourse have a membership value equal to zero. It has also been noticed that in almost all cases the common points among fuzzy sets, i.e. points with non-null membership values, are very few. More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on such hypotheses. Moreover, we use a technique that, even though it does not restrict the shapes of the membership functions, strongly reduces the computational time for the membership values and optimizes the function memorization. Figure 1 represents a term set whose characteristics are common for fuzzy controllers and to which we will refer in the following. The above term set has a universe of discourse with 128 elements (so as to have a good resolution), 8 fuzzy sets that describe the term set, and 32 levels of discretization for the membership values. Clearly, the numbers of bits necessary for the given specifications are 5 for 32 truth levels, 3 for 8 membership functions and 7 for 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and it will be represented by the memory rows. The length of a word of memory is defined by Length = nfm * (dm(m) + dm(fm)), where nfm is the maximum number of non-null membership values for any element of the universe of discourse, dm(m) is the number of bits of a membership value, and dm(fm) is the number of bits of the word representing the index of the membership function. In our case, then, Length = 3 * (5 + 3) = 24. The memory dimension is therefore 128*24 bits. If we had chosen to memorize all values of the membership functions, we would have needed to memorize on each memory row the membership value of every fuzzy set, i.e. a fuzzy-set word of 8*5 bits.
Therefore, the dimension of the memory would have been 128*40 bits. Coherently with our hypothesis, in Fig. 1 each element of the universe of discourse has a non-null membership value on at most three fuzzy sets. Focusing on the elements 32, 64 and 96 of the universe of discourse, they will be memorized as follows. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the Program Memory (μCOD) is given as input to a comparator (combinatory net). If the index is equal to the bus value, then one of the non-null weights derives from the rule and is produced as output; otherwise the output is zero (Fig. 2). It is clear that the memory dimension of the antecedent is in this way reduced, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to the performance of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value on each element of the universe of discourse. From our studies in the field of fuzzy systems, we see that typically nfm is at most 3 and there are at most 16 membership functions. At any rate, such a value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization process of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of such parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have a very small influence on the increase of memory space; weight computations are done by a combinatorial network, and therefore the time performance of the system is equivalent to that of the vectorial method; the number of non-null membership values on any element of the universe of discourse is limited, but such a constraint is usually not very restrictive, since many controllers obtain a good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
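
A quick numeric check of the memory sizing quoted in the abstract (128-element universe, 8 fuzzy sets, 32 truth levels, at most 3 non-null memberships per element), sketched in Python; the function and variable names are illustrative, not taken from the paper.

```python
# Minimal sketch of the word-length and memory-size arithmetic in the abstract.
import math

def bits(n):
    """Number of bits needed to encode n distinct values."""
    return math.ceil(math.log2(n))

U = 128          # elements in the universe of discourse
n_sets = 8       # fuzzy sets in the term set
levels = 32      # discretization levels of the membership value
nfm = 3          # max number of non-null memberships per element

dm_m = bits(levels)     # 5 bits per membership value
dm_fm = bits(n_sets)    # 3 bits per fuzzy-set index

word_length = nfm * (dm_m + dm_fm)       # 3 * (5 + 3) = 24 bits per memory row
compact_memory = U * word_length         # 128 * 24 bits (proposed scheme)
vectorial_memory = U * n_sets * dm_m     # 128 * 40 bits (all values memorized)

print(word_length, compact_memory, vectorial_memory)   # 24 3072 5120
```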


Study on the Short-Term Hemodynamic Effects of Experimental Cardiomyoplasty in Heart Failure Model (심부전 모델에서 실험적 심근성형술의 단기 혈역학적 효과에 관한 연구)

  • Jeong, Yoon-Seop;Youm, Wook;Lee, Chang-Ha;Kim, Wook-Seong;Lee, Young-Tak;Kim, Won-Gon
    • Journal of Chest Surgery
    • /
    • v.32 no.3
    • /
    • pp.224-236
    • /
    • 1999
  • Background: To evaluate the short-term effect of dynamic cardiomyoplasty on circulatory function and detect the related factors that can affect it, experimental cardiomyoplasties were performed under the states of normal cardiac function and heart failure. Material and Method: A total of 10 mongrel dogs weighing 20 to 30 kg were divided arbitrarily into two groups. Five dogs of group A underwent cardiomyoplasty with latissimus dorsi (LD) muscle mobilization followed by a 2-week vascular delay and 6-week muscle training. Then, hemodynamic studies were conducted. In group B, doxorubicin was given to 5 dogs in an IV dose of 1 mg/kg once a week for 8 weeks to induce chronic heart failure, and simultaneous muscle training was given for preconditioning during this period. Then, cardiomyoplasties were performed and hemodynamic studies were conducted immediately after these cardiomyoplasties in group B. Result: In group A, under the state of normal cardiac function, only mean right atrial pressure significantly increased with the pacer on (p<0.05), and the left ventricular hemodynamic parameters did not change significantly. However, with the pacer on in group B, cardiac output (CO), rate of left ventricular pressure development (dp/dt), stroke volume (SV), and left ventricular stroke work (SW) increased by 16.7±7.2%, 9.3±3.2%, 16.8±8.6%, and 23.1±9.7%, respectively, whereas left ventricular end-diastolic pressure (LVEDP) and mean pulmonary capillary wedge pressure (mPCWP) decreased by 32.1±4.6% and 17.7±9.1%, respectively (p<0.05). In group A, imipramine was infused at a rate of 7.5 mg/kg/hour for 34±2.6 minutes to induce acute heart failure, which resulted in a reduction of cardiac output by 17.5±2.7% and of systolic left ventricular pressure by 15.8±2.5%, and an elevation of left ventricular end-diastolic pressure by 54.3±15.2% (p<0.05). With the pacer on under this state of acute heart failure, CO, dp/dt, SV, and SW increased by 4.5±1.8%, 3.1±1.1%, 5.7±3.6%, and 6.9±4.4%, respectively, whereas LVEDP decreased by 11.7±4.7% (p<0.05). Comparing CO, dp/dt, SV, SW and LVEDP, which changed significantly with the pacer on under both acute and chronic heart failure, the augmentation of these left ventricular hemodynamic parameters was significantly larger under the state of chronic heart failure (group B) than under acute heart failure (group A) (p<0.05). On gross inspection, variable degrees of adhesion and inflammation were present in all 5 dogs of group A, including 2 dogs that showed no muscle contraction. No adhesion or inflammation was present in the 5 dogs of group B, all of which showed vivid muscle contractions. Considering these differences in gross findings, along with the premise that the acute heart failure state was not statistically different from the chronic one in terms of left ventricular parameters (p>0.05), the larger augmentation effect seen in group B is presumed to be mainly attributable to the viability and contractility of the LD muscle. Conclusion: These results indicate that the positive circulatory augmentation effect of cardiomyoplasty is apparent only under the state of heart failure and that the preservation of muscle contractility is important to maximize this effect.


Development of Predictive Models for Rights Issues Using Financial Analysis Indices and Decision Tree Technique (경영분석지표와 의사결정나무기법을 이용한 유상증자 예측모형 개발)

  • Kim, Myeong-Kyun;Cho, Yoonho
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.4
    • /
    • pp.59-77
    • /
    • 2012
  • This study focuses on predicting which firms will increase capital by issuing new stocks in the near future. Many stakeholders, including banks, credit rating agencies and investors, perform a variety of analyses of firms' growth, profitability, stability, activity, productivity, etc., and regularly report the firms' financial analysis indices. In this paper, we develop predictive models for rights issues using these financial analysis indices and data mining techniques. This study approaches the building of the predictive models from the perspective of two different analyses. The first is the analysis period: we divide the analysis period into before and after the IMF financial crisis and examine whether there is a difference between the two periods. The second is the prediction time: in order to predict when firms will increase capital by issuing new stocks, the prediction time is categorized as one year, two years and three years later. Therefore, a total of six prediction models are developed and analyzed. In this paper, we employ the decision tree technique to build the prediction models for rights issues. The decision tree is the most widely used prediction method, which builds decision trees to label or categorize cases into a set of known classes. In contrast to neural networks, logistic regression and SVM, decision tree techniques are well suited for high-dimensional applications and have strong explanation capabilities. There are well-known decision tree induction algorithms such as CHAID, CART, QUEST, C5.0, etc. Among them, we use the C5.0 algorithm, which is the most recently developed and yields better performance than the other algorithms. We obtained data on rights issues and financial analysis from TS2000 of the Korea Listed Companies Association. A record of financial analysis data consists of 89 variables, which include 9 growth indices, 30 profitability indices, 23 stability indices, 6 activity indices and 8 productivity indices. For model building and testing, we used 10,925 financial analysis records from a total of 658 listed firms. PASW Modeler 13 was used to build the C5.0 decision trees for the six prediction models. A total of 84 variables among the financial analysis data are selected as the input variables of each model, and the rights issue status (issued or not issued) is defined as the output variable. To develop the prediction models using the C5.0 node (Node Options: Output type = Rule set, Use boosting = false, Cross-validate = false, Mode = Simple, Favor = Generality), we used 60% of the data for model building and 40% of the data for model testing. The results of the experimental analysis show that the prediction accuracies of the data after the IMF financial crisis (59.04% to 60.43%) are about 10 percent higher than those before the IMF financial crisis (68.78% to 71.41%). These results indicate that since the IMF financial crisis, the reliability of financial analysis indices has increased and firms' intentions regarding rights issues have become more obvious. The experimental results also show that the stability-related indices have a major impact on conducting a rights issue in the case of short-term prediction. On the other hand, the long-term prediction of conducting a rights issue is affected by financial analysis indices on profitability, stability, activity and productivity. All the prediction models include the industry code as one of the significant variables, which means that companies in different types of industries show different patterns of rights issues.
We conclude that it is desirable for stakeholders to take into account stability-related indices for short-term prediction and a wider variety of financial analysis indices for long-term prediction. The current study has several limitations. First, we need to compare the differences in accuracy obtained by using different data mining techniques such as neural networks, logistic regression and SVM. Second, we need to develop and evaluate new prediction models that include variables which research on capital structure theory has identified as relevant to rights issues.
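
The abstract describes a C5.0 tree trained on 84 financial indices with a 60/40 build/test split. Below is a minimal sketch of that setup using scikit-learn's DecisionTreeClassifier as a stand-in for C5.0 (which is proprietary and not available in scikit-learn); the file and column names are hypothetical placeholders for the TS2000 indices.

```python
# Sketch: rights-issue prediction with a decision tree and a 60/40 split.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

df = pd.read_csv("financial_indices.csv")       # hypothetical input file
X = df.drop(columns=["rights_issue"])           # the 84 financial analysis indices
y = df["rights_issue"]                          # output variable: issued / not issued

# 60% of the data for model building, 40% for testing, as in the study.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.6, random_state=0, stratify=y)

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```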

A Joint Application of DRASTIC and Numerical Groundwater Flow Model for The Assessment of Groundwater Vulnerability of Buyeo-Eup Area (DRASTIC 모델 및 지하수 수치모사 연계 적용에 의한 부여읍 일대의 지하수 오염 취약성 평가)

  • Lee, Hyun-Ju;Park, Eun-Gyu;Kim, Kang-Joo;Park, Ki-Hoon
    • Journal of Soil and Groundwater Environment
    • /
    • v.13 no.1
    • /
    • pp.77-91
    • /
    • 2008
  • In this study, we developed a technique for jointly applying DRASTIC, the most widely used tool for estimating groundwater vulnerability to aqueous-phase contaminants infiltrating from the surface, and a groundwater flow model to assess groundwater contamination potential. The developed technique was then applied to the Buyeo-eup area in Buyeo-gun, Chungcheongnam-do, Korea. The depth-to-water thematic input required by the DRASTIC model is known to be the one to which the output is most sensitive, while generally only a few observations at a few times are available. To overcome this practical shortcoming, both steady-state and transient groundwater level distributions were simulated using a finite difference numerical model, MODFLOW. In the application to the assessment of groundwater vulnerability, it was found that the vulnerability obtained from the numerical simulation of groundwater levels is much more practical than that obtained from cokriging methods. The advantages are, first, that the simulation results enable a practitioner to see temporally comprehensive vulnerabilities and, second, that the method considers a wide variety of data, such as field-observed hydrogeologic parameters as well as geographic relief. The depth to water generated through geostatistical methods in the conventional approach is unable to incorporate temporally variable data, that is, the seasonal variation of the recharge rate. As a result, we found that the vulnerabilities from the geostatistical method and from the steady-state groundwater flow simulation show similar patterns. By applying the transient simulation results to the DRASTIC model, we also found that the vulnerability shows sharp seasonal variation due to the change of groundwater recharge. The change of the vulnerability is found to be most pronounced between summer, with the highest recharge rate, and winter, with the lowest. Our research indicates that numerical modeling can be a useful tool for temporal as well as spatial interpolation of the depth to water when the number of observations is inadequate for vulnerability assessments through the conventional techniques.
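
For orientation, a back-of-the-envelope sketch of how a DRASTIC index could be recomputed per cell once the depth to water is taken from simulated heads rather than interpolated observations. The standard DRASTIC weights are used, but the rating bins and depth values below are illustrative assumptions, not data from the paper.

```python
# Sketch: per-cell DRASTIC index driven by a (simulated) depth to water.
import numpy as np

WEIGHTS = {"D": 5, "R": 4, "A": 3, "S": 2, "T": 1, "I": 5, "C": 3}  # standard DRASTIC weights

def depth_rating(depth_m):
    """Toy rating: shallower water tables are more vulnerable (ratings 10..1)."""
    bins = [1.5, 4.6, 9.1, 15.2, 22.9, 30.5]      # illustrative class limits (m)
    ratings = [10, 9, 7, 5, 3, 2, 1]
    return ratings[np.searchsorted(bins, depth_m)]

def drastic_index(depth_m, other_ratings):
    """other_ratings: ratings for R, A, S, T, I, C at this cell."""
    total = WEIGHTS["D"] * depth_rating(depth_m)
    for key, rating in other_ratings.items():
        total += WEIGHTS[key] * rating
    return total

# Depth to water (m) for one cell in two seasons, e.g. from MODFLOW heads.
for season, depth in [("summer (high recharge)", 2.0), ("winter (low recharge)", 6.5)]:
    idx = drastic_index(depth, {"R": 6, "A": 6, "S": 5, "T": 9, "I": 4, "C": 4})
    print(season, idx)
```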

A Study on Relationships Between Environment, Organizational Structure, and Organizational Effectiveness of Public Health Centers in Korea (보건소의 환경, 조직구조와 조직유효성과의 관계)

  • Yun, Soon-Nyoung
    • Research in Community and Public Health Nursing
    • /
    • v.6 no.1
    • /
    • pp.5-33
    • /
    • 1995
  • The objectives of the study are two-fold: one is to explore the relationship between environment, organizational structure, and organizational effectiveness of public health centers in Korea, and the other is to examine the validity of contingency theory for improving the organizational structure of public health care agencies, with special emphasis on public health nursing administration. Accordingly, the conceptual model of the study consisted of three concepts built up from contingency theory: environment, organizational structure, and organizational effectiveness. Data were collected during the period from the 1st of May through the 30th of June, 1990. From the total of 249 health centers in the country, one hundred and five centers were sampled non-proportionally according to their geopolitical distribution. Of these 105, 73 health centers responded to the mailed questionnaire. The health center was the unit of the study, and various statistical analysis techniques were used: reliability analysis (Cronbach's alpha) for the 4 measurement tools; the Shapiro-Wilk statistic for normality tests of the measured scores of the 6 variables; and ANOVA, Pearson correlation analysis, regression analysis, and canonical correlation analysis for tests of the relationships and differences between the variables. The results were as follows: 1. No significant differences between formalization, decision-making authority and environmental complexity were found (F=1.383, P=.24; F=.801, P=.37). 2. Negative relationships between formalization and decision-making authority were found for both urban and rural health centers (r=-.470, P=.002; r=-.348, P=.46). 3. No significant relationship between formalization and job satisfaction was found for either urban or rural health centers (r=-.242, P=.132; r=-.060, P=.739). 4. A significant positive relationship between decision-making authority and job satisfaction was found in urban health centers (r=.504, P=.0009), but no such relationship was observed in rural health centers. The regression coefficient between them was statistically significant (β=1.535, P=.0002), and the accuracy of the regression line was accepted (W=.975, P=.420). 5. No significant relationships among formalization and family planning services, maternal health services, and tuberculosis control services were found for either urban or rural health centers. 6. Among decision-making authority and family planning services, maternal health services, and tuberculosis control services, a significant positive relationship was found between decision-making authority and family planning services (r=.286, P=.73). 7. A significant difference was found in maternal health services by the type of health center (F=5.13, P=.026), but no difference was found in tuberculosis control services by the type of health center, formalization, or decision-making authority. 8. Significant positive relationships were found between family planning services and maternal health services, between family planning services and tuberculosis control services, and between maternal health services and tuberculosis control services (r=-.499, P=.001; r=.457, P=.004; r=.495, P=.002) in the case of urban health centers. In the case of rural health centers, the relationships between family planning services and tuberculosis control services, and between maternal health services and tuberculosis control services, were statistically significant (r=.534, P=.002; r=.389, P=.027), while no significant relationship was found between family planning and maternal health services. 9.
A significant positive canonical correlation was found between the group of independent variables consisting of formalization and decision-making authority and the group of dependent variables consisting of family planning services, maternal health services and tuberculosis control services (Rc=.455, P=.02). In the case of urban health centers, no significant canonical correlation was found between them, but a significant canonical correlation was found in rural health centers (Rc=.578, P=.069). 10. The relationship between job satisfaction and health care productivity was not found to be significant. Through these results, the assumed relationship between environment and organizational structure was not supported in health centers. Therefore, the relationship between organizational effectiveness and the congruence between environment and organizational structure that contingency theory proposes could not be tested. However, decision-making authority was found to be an important variable of organizational structure affecting family planning services and job satisfaction in urban health centers. Thus it is suggested that decentralized decision making among health professionals would be a valuable strategy for the improvement of organizational effectiveness in public health centers. It is also recommended that further studies testing contingency theory use variability and uncertainty, instead of complexity, to define the environment of public health centers.
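
A compact sketch of the kinds of tests reported above (Pearson correlation, simple regression, Shapiro-Wilk normality check) using SciPy; the data file and column names are hypothetical stand-ins for the health-center questionnaire scores.

```python
# Sketch: correlation, regression, and normality checks on survey scores.
import pandas as pd
from scipy import stats

df = pd.read_csv("health_centers.csv")            # hypothetical survey data
urban = df[df["type"] == "urban"]

# Decision-making authority vs. job satisfaction in urban health centers.
r, p = stats.pearsonr(urban["decision_authority"], urban["job_satisfaction"])
slope, intercept, r_val, p_val, se = stats.linregress(
    urban["decision_authority"], urban["job_satisfaction"])
w, p_norm = stats.shapiro(urban["job_satisfaction"])   # normality of the scores

print(f"r={r:.3f} (p={p:.3f}), beta={slope:.3f} (p={p_val:.3f}), W={w:.3f}")
```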


A Scalable and Modular Approach to Understanding of Real-time Software: An Architecture-based Software Understanding(ARSU) and the Software Re/reverse-engineering Environment(SRE) (실시간 소프트웨어의 조절적·단위적 이해 방법 : ARSU(Architecture-based Software Understanding)와 SRE(Software Re/reverse-engineering Environment))

  • Lee, Moon-Kun
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.12
    • /
    • pp.3159-3174
    • /
    • 1997
  • This paper reports research to develop a methodology and a tool for the understanding of very large and complex real-time software. The methodology and the tool, mostly developed by the author, are called the Architecture-based Real-time Software Understanding (ARSU) and the Software Re/reverse-engineering Environment (SRE), respectively. Due to size and complexity, it is commonly very hard to understand such software during the reengineering process. However, this research facilitates scalable re/reverse-engineering of such real-time software based on the architecture of the software in three-dimensional perspectives: structural, functional, and behavioral views. Firstly, the structural view reveals the overall architecture, the specification (outline) view, and the algorithm (detail) view of the software, based on a hierarchically organized parent-child relationship. The basic building block of the architecture is a software unit (SWU), generated by user-defined criteria. The architecture facilitates navigation of the software in a top-down or bottom-up way. It captures the specification and algorithm views at different levels of abstraction, and it also shows the functional and behavioral information at these levels. Secondly, the functional view includes graphs of data/control flow, input/output, definition/use, variable/reference, etc. Each feature of the view contains a different kind of functionality of the software. Thirdly, the behavioral view includes state diagrams, interleaved event lists, etc. This view shows the dynamic properties of the software at runtime. Besides these views, there are a number of other documents: capabilities, interfaces, comments, code, etc. One of the most powerful characteristics of this approach is the capability of abstracting and exploding this dimensional information in the architecture through navigation. These capabilities establish the foundation for scalable and modular understanding of the software. This approach allows engineers to extract reusable components from the software during the reengineering process.
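
A minimal data-structure sketch of the hierarchy the abstract describes: software units (SWUs) organized by parent-child relationships, each carrying structural, functional, and behavioral information and supporting top-down navigation. The field names are illustrative, not the paper's.

```python
# Sketch: an SWU tree with structural/functional/behavioral views and traversal.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SWU:
    name: str
    specification: str = ""                                      # outline view
    algorithm: str = ""                                          # detail view
    functional: Dict[str, list] = field(default_factory=dict)    # e.g. data/control flow
    behavioral: Dict[str, list] = field(default_factory=dict)    # e.g. state diagrams
    children: List["SWU"] = field(default_factory=list)

    def navigate(self, depth=0):
        """Top-down traversal of the architecture."""
        yield depth, self
        for child in self.children:
            yield from child.navigate(depth + 1)

root = SWU("controller", specification="real-time task scheduler")
root.children.append(SWU("dispatcher", functional={"input/output": ["tick", "task_id"]}))
for depth, unit in root.navigate():
    print("  " * depth + unit.name)
```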


A Study on the Age Distribution Factors of One Person Household in Seoul using Multiple Regression Analysis (다중회귀분석을 이용한 서울시 1인 가구의 연령별 분포요인에 관한 연구)

  • Lee, SunHee;Yoon, DongHyeun;Koh, JuneHwan
    • Spatial Information Research
    • /
    • v.23 no.3
    • /
    • pp.11-21
    • /
    • 2015
  • While the total population of Seoul has been in constant decline over the last few years, the number of households has increased due to the rising tendency toward smaller households. In 2010, small households in the metropolitan areas accounted for 44% of all households, and Statistics Korea has reported that the one-person household, which will make up more than 30% of all households, will be the most common type of household by 2020. Since the reasons for this rise differ by age, for example in preferred housing type or surrounding environment, this research proposes the hypothesis that age differences shape the spatial distribution of one-person households. Therefore, this research carries out a multiple regression analysis of the facilities that act as spatial distribution factors for one-person households, with independent variables obtained from the areas derived from the area ratio of each spatial unit following a network-based service area analysis. The spatial unit is the census output area of Seoul, and on this basis the relationship between the number of one-person households by age and the factors of their distribution is analyzed. Also, the spatial regions (downtown, northeast, southeast, northwest, southwest) are introduced as dummy variables and the results for each region are obtained. As a result, the spaces occupied by one-person households are found to vary by age: people in their 20s prefer housing near universities, those in their 30s prefer leased or monthly rental housing, those in their 40s prefer monthly rental housing, and those over 60 prefer housing with a floor area of less than 40 m². Likewise, one-person households prefer different housing environments according to age, and thus housing policies that take this into account should be proposed.
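
A short sketch of the regression setup described above: the count of one-person households in a census output area (for one age band) regressed on facility-coverage ratios, with the five spatial regions entered as dummy variables. The file and column names are hypothetical.

```python
# Sketch: multiple regression with region dummy variables, per age band.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("seoul_output_areas.csv")        # hypothetical census/facility data
region_dummies = pd.get_dummies(df["region"], prefix="region", drop_first=True)

X = pd.concat([df[["univ_area_ratio", "rental_area_ratio", "small_flat_ratio"]],
               region_dummies], axis=1).astype(float)
X = sm.add_constant(X)
y = df["one_person_households_20s"]               # repeated for each age band

model = sm.OLS(y, X).fit()
print(model.summary())
```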

A Case Study on the Exogenous Factors affecting Extra-large Egg Production in a Layer Farm in Korea (산란계 사육농장 특란 생산에 미치는 외부 요인 분석을 위한 사례 연구)

  • Lee, Hyun-Chang;Jang, Woo-Whan
    • Korean Journal of Poultry Science
    • /
    • v.41 no.2
    • /
    • pp.99-104
    • /
    • 2014
  • The objective of this study is to analyze the production of extra-large eggs and to assess the impacts of exogenous factors in feeding layer chickens. The main results of this study are as follows. First, on the basis of basic statistics, the feed ration, the maximum and minimum temperatures inside the house, and the age at first egg affect the production of extra-large eggs. Second, the standardized coefficients from the estimated regression model suggest that the amount of feed has the greatest impact on production, followed by the age at first egg. Third, using the elasticity of output and the volatility of production, the results suggest that, among the independent external factors, the largest source of volatility is the feed ration, followed by the age at first egg. In order to control the volatility in the extra-large egg production of farms, it is necessary to manage feeding efficiently on the basis of the feed ration, the age at first egg, and the maximum and minimum temperatures inside the farm. Taken together, the results demonstrate that, to increase extra-large egg production and the income of farmers at the same time, efforts should be concentrated on controlling the exogenous factors affecting extra-large egg production and on constructing a management system.
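
A brief sketch of how standardized coefficients and a simple elasticity at the means could be obtained for a regression like the one described above; the variable names follow the abstract, but the data file and computation details are hypothetical.

```python
# Sketch: standardized (beta) coefficients and elasticities from an OLS fit.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("layer_farm_daily.csv")          # hypothetical farm records
y = df["extra_large_eggs"]
X = df[["feed_ration", "max_temp", "min_temp", "age_at_first_egg"]]

fit = sm.OLS(y, sm.add_constant(X)).fit()

# Standardized coefficients: slope * sd(x) / sd(y).
beta = fit.params[X.columns] * X.std() / y.std()
# Elasticity at the means: slope * mean(x) / mean(y).
elasticity = fit.params[X.columns] * X.mean() / y.mean()

print(beta.sort_values(ascending=False))
print(elasticity)
```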

The Prediction of DEA based Efficiency Rating for Venture Business Using Multi-class SVM (다분류 SVM을 이용한 DEA기반 벤처기업 효율성등급 예측모형)

  • Park, Ji-Young;Hong, Tae-Ho
    • Asia pacific journal of information systems
    • /
    • v.19 no.2
    • /
    • pp.139-155
    • /
    • 2009
  • For the last few decades, many studies have tried to explore and unveil venture companies' success factors and unique features in order to identify the sources of such companies' competitive advantages over their rivals. Such venture companies have shown a tendency to give high returns for investors, generally by making the best use of information technology. For this reason, many venture companies are keen on attracting avid investors' attention. Investors generally make their investment decisions by carefully examining the evaluation criteria of the alternatives. To them, credit rating information provided by international rating agencies such as Standard and Poor's, Moody's and Fitch is a crucial source for such pivotal concerns as companies' stability, growth, and risk status. But this type of information is generated only for companies issuing corporate bonds, not for venture companies. Therefore, this study proposes a method for evaluating venture businesses by presenting our recent empirical results using financial data of Korean venture companies listed on KOSDAQ in the Korea Exchange. In addition, this paper uses multi-class SVM for the prediction of the DEA-based efficiency rating for venture businesses derived from our proposed method. Our approach sheds light on ways to locate efficient companies generating a high level of profits. Above all, in determining effective ways to evaluate a venture firm's efficiency, it is important to understand the major contributing factors of such efficiency. Therefore, this paper is constructed on the basis of the following two ideas for classifying which companies are more efficient venture companies: i) making a DEA-based multi-class rating for the sample companies, and ii) developing a multi-class SVM-based efficiency prediction model for classifying all companies. First, Data Envelopment Analysis (DEA) is a non-parametric multiple input-output efficiency technique that measures the relative efficiency of decision making units (DMUs) using a linear programming based model. It is non-parametric because it requires no assumption on the shape or parameters of the underlying production function. DEA has already been widely applied for evaluating the relative efficiency of DMUs. Recently, a number of DEA based studies have evaluated the efficiency of various types of companies, such as internet companies and venture companies. It has also been applied to corporate credit ratings. In this study we utilized DEA for sorting venture companies by efficiency-based ratings. The Support Vector Machine (SVM), on the other hand, is a popular technique for solving data classification problems. In this paper, we employed SVM to classify the efficiency ratings of IT venture companies according to the results of DEA. The SVM method was first developed by Vapnik (1995). As one of many machine learning techniques, SVM is based on statistical learning theory. Thus far, the method has shown good performance, especially in its generalization capacity in classification tasks, resulting in numerous applications in many areas of business. SVM is basically an algorithm that finds the maximum margin hyperplane, which gives the maximum separation between classes; the support vectors are the training points closest to the maximum margin hyperplane. If the classes cannot be separated linearly, a kernel function can be used.
In the case of nonlinear class boundaries, the inputs are transformed into a high-dimensional feature space: the original input space is mapped into a high-dimensional dot-product space. Many studies have applied SVM to the prediction of bankruptcy, the forecasting of financial time series, and the problem of estimating credit ratings. In this study we employed SVM for developing a data mining-based efficiency prediction model. We used the Gaussian radial basis function as the kernel function of the SVM. For multi-class SVM, we adopted the one-against-one approach among the binary classification methods, and two all-together methods proposed by Weston and Watkins (1999) and Crammer and Singer (2000), respectively. In this research, we used corporate information on 154 companies listed on the KOSDAQ market in the Korea Exchange. We obtained the companies' financial information for 2005 from KIS (Korea Information Service, Inc.). Using these data, we made a multi-class rating with DEA efficiency and built a data mining-based multi-class prediction model. Among the three multi-classification approaches, the hit ratio of the Weston and Watkins method was the best on the test data set. In multi-classification problems such as the efficiency rating of venture businesses, it is very useful for investors to know the class within an error of one class when it is difficult to determine the exact class in the actual market. So we also present accuracy results within one-class errors, and the Weston and Watkins method showed 85.7% accuracy on our test samples. We conclude that the DEA based multi-class approach for venture businesses generates more information than the binary classification problem, notwithstanding its efficiency level. We believe this model can help investors in decision making, as it provides a reliable tool to evaluate venture companies in the financial domain. For future research, we perceive the need to enhance such areas as the variable selection process, the parameter selection of the kernel function, generalization, and the sample size for multi-class classification.
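
A rough sketch of the final classification step, predicting DEA-based efficiency ratings with an RBF-kernel SVM in scikit-learn. SVC uses the one-against-one scheme internally; the Weston-Watkins and Crammer-Singer all-together formulations used in the paper are not reproduced here, and the feature names and data file are hypothetical.

```python
# Sketch: multi-class SVM on DEA-based efficiency ratings, with exact and
# within-one-class accuracy (ratings assumed to be ordered integer grades).
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

df = pd.read_csv("kosdaq_dea_ratings.csv")        # hypothetical: 154 firms, 2005 data
X = StandardScaler().fit_transform(df.drop(columns=["dea_rating"]))
y = df["dea_rating"]                              # multi-class efficiency rating

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)   # one-vs-one internally

pred = clf.predict(X_te)
exact = np.mean(pred == y_te)
within_one = np.mean(np.abs(pred.astype(int) - y_te.to_numpy().astype(int)) <= 1)
print(f"exact-class accuracy: {exact:.3f}, within-1-class accuracy: {within_one:.3f}")
```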

Evaluation of Methane Generation Rate Constant(k) by Estimating Greenhouse Gas Emission in Small Scale Landfill (소규모 매립지에 대한 메탄발생속도상수(k) 산출 및 온실가스 발생량 평가)

  • Lee, Wonjae;Kang, Byungwook;Cho, Byungyeol;Lee, Sangwoo;Yeon, Ikjun
    • Journal of the Korean GEO-environmental Society
    • /
    • v.15 no.5
    • /
    • pp.5-11
    • /
    • 2014
  • In this study, greenhouse gas emissions from small scale landfills (the H and Y landfills) were investigated to deduce site-specific methane generation rate constants (k). To achieve this purpose, data on the physical composition of the waste were collected and the amount of LFG emission was calculated using the FOD method suggested in the 2006 IPCC GL. The amount of LFG emission was also directly measured at the active landfill sites. By comparing the results, the methane generation rate constant (k), which is used as an input variable in the FOD method suggested in the 2006 IPCC GL, was deduced. From the results on the physical composition, the ranges of DOC per year in the H (1997~2011) and Y (1994~2011) landfill sites were 13.16%~23.79% (16.52±3.84%) and 7.24%~34.67% (14.56±7.30%), respectively. These DOC results differ from the value suggested in the 2006 IPCC GL (18%). The average values of the methane generation rate constant (k) for the two landfill sites were 0.0413 yr⁻¹ and 0.0117 yr⁻¹. These results show a big difference from the 2006 IPCC GL default value (k = 0.09). It was confirmed that calculating greenhouse gas emissions with the default value in the 2006 IPCC GL yields excessive estimates.
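
A condensed sketch of the IPCC 2006 first-order decay (FOD) bookkeeping that the abstract refers to, showing where the methane generation rate constant k enters; the waste amounts and parameter values below are illustrative assumptions, not the H or Y landfill data.

```python
# Sketch: first-order decay (FOD) methane generation for a yearly waste series.
import math

def fod_methane(waste_by_year, k, doc=0.18, doc_f=0.5, mcf=1.0, f=0.5):
    """Return CH4 generated per year (same mass unit as the waste input)."""
    ddocm_accumulated = 0.0
    ch4_per_year = {}
    for year in sorted(waste_by_year):
        # DDOCm decomposed this year from material accumulated in earlier years.
        decomposed = ddocm_accumulated * (1.0 - math.exp(-k))
        ddocm_accumulated *= math.exp(-k)
        # Fresh deposit, assumed to start decaying the following year.
        ddocm_accumulated += waste_by_year[year] * doc * doc_f * mcf
        ch4_per_year[year] = decomposed * f * 16.0 / 12.0
    return ch4_per_year

waste = {y: 10_000 for y in range(1997, 2012)}    # tonnes/year, illustrative only
print(fod_methane(waste, k=0.0413)[2011])         # site-derived k
print(fod_methane(waste, k=0.09)[2011])           # 2006 IPCC GL default k
```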