• Title/Summary/Keyword: fuzzy variables


Dynamic forecasts of bankruptcy with Recurrent Neural Network model (RNN(Recurrent Neural Network)을 이용한 기업부도예측모형에서 회계정보의 동적 변화 연구)

  • Kwon, Hyukkun;Lee, Dongkyu;Shin, Minsoo
    • Journal of Intelligence and Information Systems, v.23 no.3, pp.139-153, 2017
  • Corporate bankruptcy can cause great losses not only to stakeholders but also to many related sectors of society. Through successive economic crises, bankruptcies have increased and bankruptcy prediction models have become more and more important; corporate bankruptcy has therefore been regarded as one of the major topics of research in business management, and many industry studies are also in progress. Previous studies attempted various methodologies to improve prediction accuracy and to resolve the overfitting problem, such as Multivariate Discriminant Analysis (MDA) and the Generalized Linear Model (GLM), both based on statistics. More recently, researchers have used machine learning methodologies such as the Support Vector Machine (SVM) and the Artificial Neural Network (ANN), and fuzzy theory and genetic algorithms have also been applied. Through these changes, many bankruptcy models have been developed and performance has improved. In general, a company's financial and accounting information changes over time, as does the market situation, so there are many difficulties in predicting bankruptcy with information from only a single point in time. Even though traditional research has the problem of not taking this time effect into account, dynamic models have not been studied much. Ignoring the time effect yields biased results, so a static model may not be suitable for predicting bankruptcy; a dynamic model therefore has the potential to improve bankruptcy prediction. In this paper, we propose the Recurrent Neural Network (RNN), a deep learning methodology that learns time-series data and is known to perform well. For estimation of the bankruptcy prediction model and comparison of forecasting performance, we selected non-financial firms listed on the KOSPI, KOSDAQ and KONEX markets from 2010 to 2016.
To avoid the mistake of predicting bankruptcy with financial information that already reflects the deterioration of a company's financial condition, the financial information was collected with a lag of two years, and the default period was defined as January through December of the year. We defined bankruptcy as delisting due to sluggish earnings, and confirmed delistings on KIND, the corporate stock information website. We then selected variables from previous papers: the first set consists of the Z-score variables, which have become traditional in bankruptcy prediction, and the second is a dynamic variable set. For the first variable set we selected 240 normal companies and 226 bankrupt companies; for the second, 229 normal and 226 bankrupt companies. We created a model that reflects dynamic changes in time-series financial data, and by comparing the suggested model with existing bankruptcy prediction models, we found that it can help improve the accuracy of bankruptcy predictions. We used financial data from KIS Value (a financial database) and selected Multivariate Discriminant Analysis (MDA), the Generalized Linear Model known as logistic regression (GLM), the Support Vector Machine (SVM), and the Artificial Neural Network (ANN) as benchmarks. The experiment showed that the RNN outperformed the comparative models: its accuracy was high for both sets of variables, its Area Under the Curve (AUC) value was also high, and in the hit-ratio table the rate at which the RNN predicted a distressed company to go bankrupt was higher than that of the other models. A limitation of this paper is that an overfitting problem occurs during RNN learning, but we expect to be able to solve it by selecting more learning data and appropriate variables. From these results, it is expected that this research will contribute to the development of bankruptcy prediction by proposing a new dynamic model.
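The recurrence at the heart of such a model can be sketched in a few lines. Below is a minimal, illustrative Elman-style RNN forward pass over a sequence of yearly accounting-ratio vectors, ending in a sigmoid bankruptcy probability; the function name, weight shapes, and toy values are illustrative assumptions, not the authors' implementation:

```python
import math

def rnn_forward(sequence, W_xh, W_hh, W_hy, b_h, b_y):
    """Minimal Elman RNN: update the hidden state once per time step
    (one fiscal year), then map the final state to P(bankrupt)."""
    h = [0.0] * len(b_h)                      # initial hidden state
    for x in sequence:                        # one feature vector per year
        h = [math.tanh(sum(W_xh[i][j] * x[j] for j in range(len(x)))
                       + sum(W_hh[i][k] * h[k] for k in range(len(h)))
                       + b_h[i])
             for i in range(len(h))]
    logit = sum(W_hy[k] * h[k] for k in range(len(h))) + b_y
    return 1.0 / (1.0 + math.exp(-logit))     # sigmoid output

# Toy example: two features, two hidden units, two years of data.
p = rnn_forward([[0.2, -0.1], [0.5, 0.3]],
                W_xh=[[0.1, 0.2], [-0.3, 0.4]],
                W_hh=[[0.05, 0.0], [0.0, 0.05]],
                W_hy=[0.7, -0.2], b_h=[0.0, 0.0], b_y=0.0)
```

Because the hidden state carries information forward, the last year's prediction depends on the whole history, which is exactly the time effect the static benchmarks ignore.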

Study on the Determinants of Elderly Welfare Budget among 17 metropolises and provinces in Korea Using Fs/QCA (17개 시·도 노인복지예산 결정요인에 관한 연구: 퍼지셋 질적 비교분석을 중심으로)

  • Jang, Eunha;Hong, Seokho;Kim, Hunjin
    • 한국노년학 (Journal of the Korean Gerontological Society), v.41 no.1, pp.127-147, 2021
  • It is critical for local governments to secure stable financial resources and efficient financial management in order to promote elderly welfare. Using the Fuzzy Set Qualitative Comparative Analysis (fs/QCA) method, we empirically examined the conditions under which the 17 metropolises and provinces in Korea increase or decrease their budgets for elderly welfare. After reviewing previous studies, socio-economic variables (ratio of elderly people, ratio of elderly welfare recipients), a financial variable (financial independence ratio), and a political-administrative variable (number of regulations on elderly welfare) were included in the analyses as causal conditions of the elderly welfare budget per person. Fs/QCA yielded three combinations associated with the elderly welfare budget per person: first, a low ratio of elderly people, a high ratio of elderly welfare recipients, and a low number of regulations on elderly welfare; second, a low ratio of elderly welfare recipients, a low financial independence ratio, and a high number of regulations on elderly welfare; and lastly, a high ratio of elderly people, a high ratio of elderly welfare recipients, and a low financial independence ratio. Implications for elderly welfare are drawn from these results in light of socio-economic, financial, and political-administrative circumstances.
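Before any fs/QCA analysis, each raw variable (for example, the financial independence ratio) must be calibrated into a fuzzy-set membership score between 0 and 1. A minimal sketch of Ragin's direct calibration method, assuming the conventional log-odds anchors of -3, 0, and +3 at the three qualitative breakpoints, might look like this:

```python
import math

def calibrate(value, non_member, crossover, full_member):
    """Direct calibration: map a raw score to fuzzy-set membership.
    The anchors for full non-membership, maximum ambiguity, and full
    membership correspond to log-odds of -3, 0, and +3."""
    if value >= crossover:
        log_odds = 3.0 * (value - crossover) / (full_member - crossover)
    else:
        log_odds = 3.0 * (value - crossover) / (crossover - non_member)
    return 1.0 / (1.0 + math.exp(-log_odds))   # logistic transform
```

At the crossover the score is exactly 0.5; at the full-membership anchor it is about 0.95, matching the usual fs/QCA convention.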

Applications of Fuzzy Theory on The Location Decision of Logistics Facilities (퍼지이론을 이용한 물류단지 입지 및 규모결정에 관한 연구)

  • 이승재;정창무;이헌주
    • Journal of Korean Society of Transportation, v.18 no.1, pp.75-85, 2000
  • In existing optimization models, crisp data have been used in the objective or constraints to derive the optimal solution, and subjective factors are eliminated because complex and uncertain circumstances are treated as probabilistic ambiguity. In other words, the optimal solutions of existing models could be regarded as completely satisfying the objective function in the process of applying industrial engineering methods to minimize the risks of decision-making. As a result, decision-makers in location problems could not respond appropriately to variation in demand and other variables, and could not be offered a wide range of choices because of insufficient information. Under these circumstances, this study develops a model for the location and size decision problem of a logistics facility using fuzzy theory, with the intention of making the most reasonable decision, from a subjective point of view, under ambiguous circumstances, building on existing decision-making problems that must satisfy constraints to optimize an objective function under strictly given conditions. After establishing a general mixed integer programming (MIP) model, based on the results of existing studies, to decide location and size simultaneously, a fuzzy mixed integer programming (FMIP) model was developed using fuzzy theory. The general linear programming software LINDO 6.01 was used to simulate and evaluate the developed model with examples and to judge the appropriateness and adaptability of the FMIP model in the real world.
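One common way to soften a crisp MIP into a fuzzy one is Zimmermann's max-min approach: each fuzzy goal receives a linear membership function, and the chosen alternative maximizes the minimum satisfaction across all goals. The sketch below shows only that selection rule applied to precomputed candidate solutions; the function names and the tiny site example are hypothetical, not the paper's FMIP formulation:

```python
def satisfaction(value, worst, best):
    """Linear membership for a fuzzy goal: 0 at the worst acceptable
    value, 1 at the aspiration level (either direction)."""
    if best > worst:   # maximization-type goal
        mu = (value - worst) / (best - worst)
    else:              # minimization-type goal
        mu = (worst - value) / (worst - best)
    return max(0.0, min(1.0, mu))

def maxmin_choice(candidates, goals):
    """Pick the candidate maximizing the minimum satisfaction over all
    fuzzy goals (the max-min decision rule)."""
    def overall(c):
        return min(satisfaction(c[name], worst, best)
                   for name, (worst, best) in goals.items())
    return max(candidates, key=overall)

# Hypothetical candidate sites: (worst, best) anchors per goal.
candidates = [{"cost": 100.0, "coverage": 0.9},
              {"cost": 60.0, "coverage": 0.7}]
goals = {"cost": (120.0, 50.0), "coverage": (0.5, 1.0)}
best_site = maxmin_choice(candidates, goals)
```

Here the cheaper site wins: its weakest goal (coverage, satisfaction 0.4) is still better than the expensive site's weakest goal (cost, satisfaction about 0.29).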

  • PDF

The Analysis and Design of Advanced Neurofuzzy Polynomial Networks (고급 뉴로퍼지 다항식 네트워크의 해석과 설계)

  • Park, Byeong-Jun;O, Seong-Gwon
    • Journal of the Institute of Electronics Engineers of Korea CI, v.39 no.3, pp.18-31, 2002
  • In this study, we introduce the concept of advanced neurofuzzy polynomial networks (ANFPN), a hybrid modeling architecture combining neurofuzzy networks (NFN) and polynomial neural networks (PNN). These networks are highly nonlinear rule-based models. The development of the ANFPN draws on the technologies of Computational Intelligence (CI), namely fuzzy sets, neural networks, and genetic algorithms. The NFN forms the premise part of the rule-based structure of the ANFPN, while the consequence part is designed using the PNN. In the premise part, the NFN uses both simplified fuzzy inference and the error back-propagation learning rule; the parameters of the membership functions, the learning rates, and the momentum coefficients are adjusted by genetic optimization. As the consequence structure of the ANFPN, the PNN is a flexible network architecture whose structure (topology) is developed through learning: in particular, the number of layers and nodes of the PNN is not fixed in advance but is generated dynamically. We introduce two kinds of ANFPN architectures, a basic and a modified one, which differ in the number of input variables and the order of the polynomial in each layer of the PNN structure. Owing to the specific features of the two combined architectures, it is possible to capture the nonlinear characteristics of the process system and to obtain better output performance with superb predictive ability. The availability and feasibility of the ANFPN are discussed and illustrated with the aid of two representative numerical examples. The results show that the proposed ANFPN produces models with higher accuracy and predictive ability than previously presented methods.
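The simplified fuzzy inference used in the premise part computes a firing-strength-weighted average of constant (singleton) rule consequents. A minimal single-input sketch, with hypothetical triangular membership functions and rule values, might look like this:

```python
def triangular(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def simplified_fuzzy_inference(x, rules):
    """Simplified fuzzy inference: each rule pairs a premise membership
    function (a, b, c) with a singleton consequent y; the output is the
    firing-strength-weighted average of the consequents."""
    weights = [triangular(x, a, b, c) for (a, b, c), _ in rules]
    total = sum(weights)
    if total == 0.0:
        return 0.0
    return sum(w * y for w, (_, y) in zip(weights, rules)) / total

# Two hypothetical rules: "low -> 0.0" and "high -> 10.0".
rules = [((-1.0, 0.0, 1.0), 0.0), ((0.0, 1.0, 2.0), 10.0)]
```

At x = 0.5 both rules fire equally and the output interpolates to 5.0; at x = 1.0 only the second rule fires and the output is its singleton, 10.0.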

A Study on Water Level Control of PWR Steam Generator at Low Power Operation and Transient States (저출력 및 과도상태시 원전 증기발생기 수위제어에 관한 연구)

  • Na, Nan-Ju;Kwon, Kee-Choon;Bien, Zeungnam
    • Journal of the Korean Institute of Intelligent Systems, v.3 no.2, pp.18-35, 1993
  • The water level control system of the steam generator in a pressurized water reactor and its control problems are analyzed. In this work, a stable control strategy for low-power operation and transient states is studied. To solve the problem, a fuzzy logic control method is applied as the basic algorithm of the controller. The control algorithm is based on the operators' knowledge and experience of manual water level control at the compact nuclear simulator installed at the Korea Atomic Energy Research Institute. From the viewpoint of system realization, the control variables and rules are established with simpler tuning and the input-output relation in mind. The control strategy includes a dynamic tuning method and, during the pressure control mode of the steam generator, substitutes the bypass valve opening for the incorrectly measured signal at low flow rates as the fuzzy variable for flow rate. It also involves a switching algorithm between the control valves to suppress perturbation of the water level. The simulation results show that both fine control action for small level errors and quick response for large level errors can be obtained, and that the performance of the controller is improved.
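A single step of such a fuzzy level controller can be sketched as: fuzzify the level error with triangular membership functions, fire a small rule table, and defuzzify by a weighted average. The linguistic terms, ranges, and valve corrections below are illustrative assumptions, not the paper's actual rule base:

```python
def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Linguistic terms for the level error (m): negative, zero, positive.
ERROR_SETS = {"NEG": (-1.0, -0.5, 0.0), "ZERO": (-0.5, 0.0, 0.5),
              "POS": (0.0, 0.5, 1.0)}
# Singleton consequents: change in feedwater valve opening (%).
RULES = {"NEG": -10.0, "ZERO": 0.0, "POS": 10.0}

def valve_correction(level_error):
    """One fuzzy control step: fuzzify the error, fire the rules, and
    defuzzify with a weighted average of the singleton outputs."""
    w = {term: tri(level_error, *abc) for term, abc in ERROR_SETS.items()}
    total = sum(w.values())
    if total == 0.0:
        return 0.0
    return sum(w[t] * RULES[t] for t in RULES) / total
```

The overlapping membership functions give the interpolation behavior the abstract describes: small errors produce small, smooth corrections, while large errors saturate at the full valve action.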


Assessing the Impact of Climate Change on Water Resources: Waimea Plains, New Zealand Case Example

  • Zemansky, Gil;Hong, Yoon-Seeok Timothy;Rose, Jennifer;Song, Sung-Ho;Thomas, Joseph
    • Proceedings of the Korea Water Resources Association Conference, 2011.05a, pp.18-18, 2011
  • Climate change is impacting and will increasingly impact both the quantity and quality of the world's water resources in a variety of ways. In some areas warming climate results in increased rainfall, surface runoff, and groundwater recharge while in others there may be declines in all of these. Water quality is described by a number of variables. Some are directly impacted by climate change. Temperature is an obvious example. Notably, increased atmospheric concentrations of $CO_2$ triggering climate change increase the $CO_2$ dissolving into water. This has manifold consequences including decreased pH and increased alkalinity, with resultant increases in dissolved concentrations of the minerals in geologic materials contacted by such water. Climate change is also expected to increase the number and intensity of extreme climate events, with related hydrologic changes. A simple framework has been developed in New Zealand for assessing and predicting climate change impacts on water resources. Assessment is largely based on trend analysis of historic data using the non-parametric Mann-Kendall method. Trend analysis requires long-term, regular monitoring data for both climate and hydrologic variables. Data quality is of primary importance and data gaps must be avoided. Quantitative prediction of climate change impacts on the quantity of water resources can be accomplished by computer modelling. This requires the serial coupling of various models. For example, regional downscaling of results from a world-wide general circulation model (GCM) can be used to forecast temperatures and precipitation for various emissions scenarios in specific catchments. Mechanistic or artificial intelligence modelling can then be used with these inputs to simulate climate change impacts over time, such as changes in streamflow, groundwater-surface water interactions, and changes in groundwater levels. 
The Waimea Plains catchment in New Zealand was selected for a test application of these assessment and prediction methods. This catchment is predicted to undergo relatively minor impacts due to climate change. All available climate and hydrologic databases were obtained and analyzed. These included climate (temperature, precipitation, solar radiation and sunshine hours, evapotranspiration, humidity, and cloud cover) and hydrologic (streamflow and quality and groundwater levels and quality) records. Results varied but there were indications of atmospheric temperature increasing, rainfall decreasing, streamflow decreasing, and groundwater level decreasing trends. Artificial intelligence modelling was applied to predict water usage, rainfall recharge of groundwater, and upstream flow for two regionally downscaled climate change scenarios (A1B and A2). The AI methods used were multi-layer perceptron (MLP) with extended Kalman filtering (EKF), genetic programming (GP), and a dynamic neuro-fuzzy local modelling system (DNFLMS), respectively. These were then used as inputs to a mechanistic groundwater flow-surface water interaction model (MODFLOW). A DNFLMS was also used to simulate downstream flow and groundwater levels for comparison with MODFLOW outputs. MODFLOW and DNFLMS outputs were consistent. They indicated declines in streamflow on the order of 21 to 23% for MODFLOW and DNFLMS (A1B scenario), respectively, and 27% in both cases for the A2 scenario under severe drought conditions by 2058-2059, with little if any change in groundwater levels.
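The trend analysis described above rests on the non-parametric Mann-Kendall test. A minimal sketch of the S statistic and its standard normal score, assuming no tied values and n > 10 for the normal approximation (simplifications of my own, not the study's exact procedure), might look like this:

```python
import math

def mann_kendall(series):
    """Mann-Kendall trend test: S counts concordant minus discordant
    pairs; Z is the continuity-corrected standard normal score
    (no tie correction)."""
    n = len(series)
    s = sum((series[j] > series[i]) - (series[j] < series[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0   # variance of S, no ties
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z
```

A |Z| above 1.96 indicates a trend significant at the 5% level, which is the kind of evidence behind the temperature, rainfall, streamflow, and groundwater findings reported above.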


Short-term Forecasting of Power Demand based on AREA (AREA 활용 전력수요 단기 예측)

  • Kwon, S.H.;Oh, H.S.
    • Journal of Korean Society of Industrial and Systems Engineering, v.39 no.1, pp.25-30, 2016
  • It is critical to forecast the maximum daily and monthly power demand with as little error as possible for our industry and national economy. In general, long-term forecasting of power demand has been studied from the consumer's perspective and through econometric models in the form of a generalized linear model with predictors. Time series techniques are used for short-term forecasting with no predictors, since predictors must themselves be predicted prior to forecasting the response variables, and estimation errors in that process are inevitable. Previous research on short-term power demand forecasting has applied the seasonal exponential smoothing method, SARMA (Seasonal Auto Regressive Moving Average) with consideration of weekly patterns, the Neuro-Fuzzy model, the SVR (Support Vector Regression) model with predictors explored through machine learning, and the K-means clustering technique, among various approaches. In this paper, SARMA and an intervention model are fitted to forecast the maximum power load daily, weekly, and monthly using empirical data from 2011 through 2013. $ARIMA(2,\;1,\;2)(1,\;1,\;1)_7$ and $ARIMA(0,\;1,\;1)(1,\;1,\;0)_{12}$ are fitted to the daily and monthly power demand, respectively, but the weekly power demand is not fitted by AREA because of a unit root in the series. In the fitted intervention model, the factors of long holidays, summer, and winter are significant in the form of indicator functions. The SARMA model, with a MAPE (Mean Absolute Percentage Error) of 2.45%, and the intervention model, with a MAPE of 2.44%, are more efficient than the present seasonal exponential smoothing, with a MAPE of about 4%. Although a dynamic regression model with humidity, temperature, and seasonal dummies as predictors was also applied to forecast the daily power demand, it led to a higher MAPE of 3.5% because it carries the estimation error of the predictors.
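All the models above are compared by MAPE. For reference, a minimal sketch of the metric itself (assuming no zero actual values):

```python
def mape(actual, forecast):
    """Mean Absolute Percentage Error: the average absolute error as a
    percentage of the actual value."""
    return 100.0 * sum(abs((a - f) / a)
                       for a, f in zip(actual, forecast)) / len(actual)

# Toy check: errors of 2% on each of two days average to a MAPE of 2%.
example = mape([100.0, 200.0], [98.0, 204.0])
```

Because MAPE is scale-free, it lets the daily and monthly models (and the competing smoothing and regression approaches) be compared on one number.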

Evaluation of the parameters affecting the Schmidt rebound hammer reading using ANFIS method

  • Toghroli, Ali;Darvishmoghaddam, Ehsan;Zandi, Yousef;Parvan, Mahdi;Safa, Maryam;Abdullahi, Muazu Mohammed;Heydari, Abbas;Wakil, Karzan;Gebreel, Saad A.M.;Khorami, Majid
    • Computers and Concrete, v.21 no.5, pp.525-530, 2018
  • As a nondestructive testing method, the Schmidt rebound hammer is widely used for structural health monitoring. During application, a Schmidt hammer strikes the surface of a concrete mass; by the rebound principle, the reading depends on the surface hardness of the concrete, which is related to its strength. This study aims to identify the main variables affecting the Schmidt rebound hammer reading, and consequently the accuracy of structural health monitoring of concrete structures, using an adaptive neuro-fuzzy inference system (ANFIS). The ANFIS variable selection procedure was applied for this purpose. This procedure comprises methods that determine a subset of the full set of candidate factors that retains predictive capability; ANFIS was applied to perform a flexible search. The method was then used to determine how the five main factors of the concrete mix design (namely age, silica fume, fine aggregate, coarse aggregate, and water) influence the Schmidt rebound hammer reading and consequently the structural health monitoring accuracy. Results show that water is the most significant parameter for the Schmidt rebound hammer reading. The details of this study are discussed thoroughly.
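The variable selection step ranks candidate inputs by how well each alone predicts the rebound reading. As a loose stand-in for that idea, the sketch below ranks factors by the RMSE of a one-variable least-squares fit; this is a deliberate simplification for illustration, not the paper's ANFIS procedure:

```python
import math

def linfit_rmse(x, y):
    """RMSE of the best least-squares line y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
         if sxx else 0.0)
    b = my - a * mx
    return math.sqrt(sum((yi - (a * xi + b)) ** 2
                         for xi, yi in zip(x, y)) / n)

def rank_inputs(columns, target):
    """Rank candidate factors by single-input fit quality
    (lower RMSE = more influential on the reading)."""
    return sorted(columns, key=lambda name: linfit_rmse(columns[name],
                                                        target))

# Hypothetical data: readings track water content, not specimen age.
columns = {"water": [1.0, 2.0, 3.0, 4.0], "age": [3.0, 1.0, 4.0, 1.0]}
target = [2.1, 4.0, 6.2, 7.9]
ranking = rank_inputs(columns, target)
```

With these toy numbers, "water" comes out first, mirroring the paper's finding that water is the dominant factor.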

Defect Severity-based Ensemble Model using FCM (FCM을 적용한 결함심각도 기반 앙상블 모델)

  • Lee, Na-Young;Kwon, Ki-Tae
    • KIISE Transactions on Computing Practices, v.22 no.12, pp.681-686, 2016
  • Software defect prediction is an important factor in efficient project management and project success. The severity of a defect usually determines the degree to which the project is affected; however, existing studies focus only on the presence or absence of defects and not on their severity. In this study, we proposed an ensemble model using FCM based on defect severity. The defect severities of PC4, a NASA data set, were reclassified. To select the input columns that affect defect severity, we extracted the important defect factors of the data set using Random Forest (RF). We evaluated the performance of the model by changing the parameters in 10-fold cross-validation. The evaluation results were as follows. First, defect severities were reclassified from 58, 40, and 80 to 30, 20, and 128. Second, BRANCH_COUNT was an important input column for severity in terms of accuracy and node impurity. Third, a smaller number of trees combined with more variables led to good performance.
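The FCM step assigns each module a graded degree of membership in every severity cluster rather than a hard label. A minimal one-dimensional sketch of the Fuzzy C-Means membership update, with the common fuzzifier m = 2 (an assumption, not necessarily the paper's setting), might be:

```python
def fcm_memberships(points, centers, m=2.0):
    """One membership-update step of Fuzzy C-Means: the degree to which
    each 1-D point belongs to each cluster center (rows sum to 1)."""
    u = []
    for x in points:
        d = [abs(x - c) for c in centers]
        row = []
        for j, dj in enumerate(d):
            if dj == 0.0:                      # point sits on a center
                row = [1.0 if k == j else 0.0 for k in range(len(d))]
                break
            row.append(1.0 / sum((dj / dk) ** (2.0 / (m - 1.0))
                                 for dk in d if dk > 0.0))
        u.append(row)
    return u

# Toy severity scores clustered around two hypothetical centers.
u = fcm_memberships([0.0, 10.0, 5.0], [0.0, 10.0])
```

A point midway between centers gets a 0.5/0.5 split, which is exactly the soft assignment that lets the ensemble weigh borderline-severity modules.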

Power peaking factor prediction using ANFIS method

  • Ali, Nur Syazwani Mohd;Hamzah, Khaidzir;Idris, Faridah;Basri, Nor Afifah;Sarkawi, Muhammad Syahir;Sazali, Muhammad Arif;Rabir, Hairie;Minhat, Mohamad Sabri;Zainal, Jasman
    • Nuclear Engineering and Technology, v.54 no.2, pp.608-616, 2022
  • The power peaking factor (PPF) is an important parameter for safe and efficient reactor operation. There are several methods to calculate the PPF at TRIGA research reactors, such as the MCNP and TRIGLAV codes; however, these methods are time-consuming and require a high-specification computer system. To overcome these limitations, artificial intelligence has been introduced for parameter prediction. Previous studies applied neural network methods to predict the PPF, but publications using the ANFIS method are not yet well developed. In this paper, prediction of the PPF using ANFIS was conducted. Two input variables, control rod position and neutron flux, were collected, while the PPF calculated using the TRIGLAV code served as the output data. These input-output datasets were used for ANFIS model generation, training, and testing. In this study, four ANFIS models with two types of input space partitioning methods showed good predictive performance, with R2 values in the range of 96%-97%, revealing a strong relationship between the predicted and actual PPF values. The calculated RMSE was also near zero. From this statistical analysis, it is proven that ANFIS can predict the PPF accurately and can be used as an alternative method for developing a real-time monitoring system at TRIGA research reactors.
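The R2 and RMSE figures quoted above are standard goodness-of-fit measures. For reference, a minimal sketch of both:

```python
import math

def rmse(actual, predicted):
    """Root mean squared error between predicted and actual values."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted))
                     / len(actual))

def r_squared(actual, predicted):
    """Coefficient of determination: 1 minus residual over total sum
    of squares."""
    mean = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean) ** 2 for a in actual)
    return 1.0 - ss_res / ss_tot
```

An R2 near 1 together with an RMSE near zero, as reported for the four ANFIS models, means the predicted PPF values track the TRIGLAV reference closely.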