• Title/Summary/Keyword: neural network ensemble


Analysis for Flood Quantile Estimates at Ungauged Sites in Arid and Semi-arid Regions Based on Regional Frequency Analysis (지역빈도해석을 통한 건조지역의 미계측 지점 확률홍수량 추정을 위한 연구)

  • Jung, Kichul;Kang, Boosik
    • Proceedings of the Korea Water Resources Association Conference / 2017.05a / pp.51-51 / 2017
  • Regional frequency analysis is widely used to estimate flood quantiles at gauged sites with short records or at ungauged sites with no data. A prerequisite for regional frequency analysis is the delineation of hydrologically homogeneous regions among the collected watersheds; the records of all sites within each delineated region are then pooled in a frequency analysis to obtain reliable quantile estimates at the site of interest. Previous regional frequency analyses have focused mainly on non-arid regions, for flood disaster preparedness and water resources management. The main objective of this study is to estimate reliable flood quantiles for arid-region watersheds through regional frequency analysis, in support of water resources management in arid regions. To improve the accuracy of the quantile estimates, new geomorphic variables were introduced into the regional frequency analysis model, and homogeneous regions were defined by examining the shape of each collected watershed: for example, dendritic, fan-shaped, and trellis watersheds were distinguished, and a homogeneous region was formed for each shape type. Streamflow records from 105 watersheds in the arid regions of the United States were collected and used. A regional frequency analysis model combining an ensemble artificial neural network and canonical correlation analysis was built to estimate the quantiles. The resampling techniques of 10-fold cross-validation and the jackknife were used to evaluate the performance and accuracy of the proposed model, and the differences between estimated and observed values were analyzed using bias, relative bias (rBias), root mean square error (RMSE), and relative RMSE (rRMSE). The results confirm that model performance improved when the newly proposed geomorphic variables were used, and that quantile estimates also improved when homogeneous regions were delineated by watershed shape. By providing reliable flood quantiles for arid regions, the improved regional frequency analysis model will supply information essential to the design of hydraulic structures for effective water resources management in arid regions.
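As a rough illustration of the pipeline this abstract describes (canonical correlation analysis relating catchment descriptors to flood statistics, then an averaged ensemble of small neural networks), here is a minimal sketch in Python. All data, variable choices, and hyperparameters are invented for illustration; only the 105-basin count comes from the abstract.

```python
# Hypothetical sketch: ensemble of ANNs on CCA-projected catchment
# descriptors for flood-quantile estimation (synthetic data throughout).
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_basins = 105                       # the study uses 105 arid-region basins
X = rng.normal(size=(n_basins, 6))   # geomorphic descriptors (area, slope, ...)
Y = np.abs(rng.normal(size=(n_basins, 2)))  # at-site flood statistics

# Project descriptors into the canonical space shared with flood statistics.
cca = CCA(n_components=2).fit(X, Y)
X_c = cca.transform(X)

# Ensemble of small ANNs, each trained on a bootstrap resample; the
# ensemble prediction is the member average.
target = Y[:, 0]                     # stand-in for a flood quantile
members = []
for seed in range(10):
    idx = rng.integers(0, n_basins, n_basins)
    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000,
                       random_state=seed).fit(X_c[idx], target[idx])
    members.append(net)

q_hat = np.mean([m.predict(X_c) for m in members], axis=0)
rbias = np.mean((q_hat - target) / target)                  # rBias
rrmse = np.sqrt(np.mean(((q_hat - target) / target) ** 2))  # rRMSE
print(f"rBias={rbias:.3f}, rRMSE={rrmse:.3f}")
```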


Damaged cable detection with statistical analysis, clustering, and deep learning models

  • Son, Hyesook;Yoon, Chanyoung;Kim, Yejin;Jang, Yun;Tran, Linh Viet;Kim, Seung-Eock;Kim, Dong Joo;Park, Jongwoong
    • Smart Structures and Systems / v.29 no.1 / pp.17-28 / 2022
  • The cables of cable-stayed bridges are gradually degraded by weather conditions, vehicle loads, and material corrosion. The stay cable is a critical load-carrying component that directly affects the operational stability of a cable-stayed bridge, and damaged cables can lead to bridge collapse through loss of tension capacity. It is therefore necessary to develop structural health monitoring (SHM) techniques that accurately identify damaged cables. In this work, a combined identification method built from three techniques, statistical analysis, clustering, and neural network models, is proposed to detect damaged cables in a cable-stayed bridge. The dataset measured from the bridge was first preprocessed to remove outlier channels, and the theory and application of each technique for damage detection are then introduced. In general, the statistical approach extracts parameters representing damage within the time series, the clustering approach identifies outliers in the data signals as damaged members, and the deep learning approach exploits nonlinear dependencies in the SHM data to train the model. The performance of these approaches in classifying damaged cables was assessed, and the combined identification method was obtained by voting ensemble. Finally, the combined method was compared with an existing outlier detection algorithm, the support vector machine (SVM). The results demonstrate that the proposed method is robust and provides higher accuracy for damaged cable detection in the cable-stayed bridge.
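The voting-ensemble combination this abstract describes can be sketched as follows; the three detectors (statistical threshold, clustering outlier flag, neural network) and all data here are simplified stand-ins, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): majority voting over three
# damage detectors -- a statistical threshold, a clustering-based outlier
# flag, and a small neural-network classifier -- on synthetic cable features.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
healthy = rng.normal(0.0, 1.0, size=(180, 4))   # per-cable feature vectors
damaged = rng.normal(2.5, 1.0, size=(20, 4))
X = np.vstack([healthy, damaged])
y = np.r_[np.zeros(180), np.ones(20)]

# 1) Statistical detector: flag cables whose mean feature exceeds 2 sigma.
mu, sd = X.mean(), X.std()
stat_vote = (X.mean(axis=1) > mu + 2 * sd).astype(int)

# 2) Clustering detector: DBSCAN labels sparse points as noise (-1).
clus_vote = (DBSCAN(eps=1.2, min_samples=5).fit_predict(X) == -1).astype(int)

# 3) Learned detector: a small neural network trained on labeled data.
nn = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=1)
nn_vote = nn.fit(X, y).predict(X)

# Majority vote: a cable is declared damaged if at least 2 detectors agree.
combined = (stat_vote + clus_vote + nn_vote >= 2).astype(int)
print("accuracy:", (combined == y).mean())
```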

Assessment of compressive strength of high-performance concrete using soft computing approaches

  • Chukwuemeka Daniel;Jitendra Khatti;Kamaldeep Singh Grover
    • Computers and Concrete / v.33 no.1 / pp.55-75 / 2024
  • The present study introduces an optimum-performance soft computing model for predicting the compressive strength of high-performance concrete (HPC) by comparing, for the first time on a common database, models based on conventional (kernel-based, covariance function-based, and tree-based), advanced machine (least squares support vector machine, LSSVM, and minimax probability machine regressor, MPMR), and deep (artificial neural network, ANN) learning approaches. A compressive strength database containing the results of 1030 concrete samples was compiled from the literature and preprocessed; from the 1005 preprocessed data points, 803, 101, and 101 were selected at random for training, testing, and validation of the soft computing models. Thirteen performance metrics, including three newer ones, the a20-index, index of agreement, and index of scatter, were computed for each model. The performance comparison reveals that the SVM (kernel-based), ET (tree-based), MPMR (advanced), and ANN (deep) models achieve the highest performance in predicting the compressive strength of HPC. From the overall analysis of performance, accuracy, the Taylor plot, the accuracy metric, the regression error characteristic curve, the Anderson-Darling and Wilcoxon tests, uncertainty, and reliability, model CS4, based on the ensemble tree, is recognized as the optimum-performance model, with a correlation coefficient of 0.9352, a root mean square error of 5.76 MPa, and a mean absolute error of 4.1069 MPa. The study also reveals that multicollinearity affects the prediction accuracy of the Gaussian process regression, decision tree, multilinear regression, and adaptive boosting regressor models, a novel observation in compressive strength prediction of HPC. Cosine sensitivity analysis shows that the predicted compressive strength of HPC is most strongly affected by cement content, fine aggregate, coarse aggregate, and water content.
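A hedged sketch of the evaluation loop: fitting a few of the model families compared in the study and scoring them with RMSE and the a20-index. The a20-index is assumed here to follow its common definition (the share of samples whose observed/predicted ratio lies within 0.8-1.2); the data and hyperparameters are synthetic, not the paper's.

```python
# Sketch: compare kernel-, tree-, and network-based regressors on a
# synthetic strength-like dataset, scored with RMSE and the a20-index.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=1005, n_features=8, noise=10, random_state=2)
y = y - y.min() + 20.0                  # shift to a positive, MPa-like range
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=2)

def a20_index(y_true, y_pred):
    # Assumed common definition: fraction of observed/predicted ratios
    # falling within [0.8, 1.2]; requires positive values.
    ratio = y_true / y_pred
    return np.mean((ratio >= 0.8) & (ratio <= 1.2))

models = {
    "SVM (kernel)": SVR(C=100.0),
    "ET (tree ensemble)": ExtraTreesRegressor(random_state=2),
    "ANN (deep)": MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=3000,
                               random_state=2),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    rmse = np.sqrt(np.mean((pred - y_te) ** 2))
    print(f"{name}: RMSE={rmse:.2f} MPa, a20={a20_index(y_te, pred):.2f}")
```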

Improving the Accuracy of Document Classification by Learning Heterogeneity (이질성 학습을 통한 문서 분류의 정확성 향상 기법)

  • Wong, William Xiu Shun;Hyun, Yoonjin;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.24 no.3 / pp.21-44 / 2018
  • In recent years, the rapid development of internet technology and the popularization of smart devices have produced massive amounts of text data, distributed through media platforms such as the World Wide Web, Internet news feeds, microblogs, and social media. This enormous volume of easily obtained information, however, lacks organization, a problem that has drawn the interest of many researchers and created demand for techniques capable of classifying relevant information; hence, text classification was introduced. Text classification is a challenging task in modern data analysis that assigns a text document to one or more predefined categories or classes. Available techniques include K-Nearest Neighbor, the Naïve Bayes algorithm, Support Vector Machine, Decision Tree, and Artificial Neural Network, but when dealing with huge amounts of text data, model performance and accuracy become a challenge: depending on the vocabulary of the corpus and the features created for classification, the performance of a text classification model can vary. Most previous attempts propose a new algorithm or modify an existing one, a line of research that can be said to have reached its limits. In this study, rather than proposing a new algorithm or modifying an existing one, we focus on changing how the data are used. It is widely known that classifier performance depends on the quality of the training data from which the classifier is built, and real-world datasets usually contain noise that can affect the decisions of classifiers built from them. We consider that data from different domains, that is, heterogeneous data, may carry noise-like characteristics that can be exploited in the classification process. Machine learning algorithms build classifiers under the assumption that the characteristics of the training data and the target data are the same or very similar; for unstructured data such as text, however, the features are determined by the vocabulary of the documents, so if the viewpoints of the training data and the target data differ, their features may differ as well. We therefore attempt to improve classification accuracy by strengthening the robustness of the document classifier through artificially injecting noise into its construction. Because data from various sources are likely to be formatted differently, traditional machine learning algorithms, which are not designed to recognize different data representations at once and bring them under the same generalization, face difficulties; to utilize heterogeneous data in training the document classifier, we apply semi-supervised learning. Since unlabeled data may degrade the performance of the document classifier, we further propose the Rule Selection-Based Ensemble Semi-Supervised Learning Algorithm (RSESLA), which selects only the documents that contribute to improving the classifier's accuracy. RSESLA creates multiple views by manipulating the features using different types of classification models and different types of heterogeneous data, and the most confident classification rules are selected and applied for the final decision. Three types of real-world data sources were used: news, Twitter, and blogs.
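The core idea behind RSESLA, multiple views that vote and a confidence rule that admits only agreed-upon pseudo-labels, can be loosely sketched as below. This is an assumption-laden toy, not the paper's algorithm: unanimity among views stands in for the confidence-based rule selection, and the documents and labels are invented.

```python
# Loose sketch of the RSESLA idea (not the authors' implementation):
# several classifiers form different "views"; an unlabeled document is
# pseudo-labeled only when the views agree.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

labeled = ["stocks fell on rate fears", "striker scores winning goal",
           "bond yields climb again", "team clinches league title"]
y = np.array([0, 1, 0, 1])                 # 0 = finance, 1 = sports
unlabeled = ["markets rally after fed news", "goalkeeper saves late penalty"]

vec = TfidfVectorizer()
X_l = vec.fit_transform(labeled)
X_u = vec.transform(unlabeled)

views = [MultinomialNB(), LogisticRegression(max_iter=1000), LinearSVC()]
preds = np.array([v.fit(X_l, y).predict(X_u) for v in views])

# Keep only unanimous pseudo-labels -- a crude stand-in for the paper's
# confidence-based rule selection.
for i, doc in enumerate(unlabeled):
    votes = preds[:, i]
    if len(set(votes)) == 1:
        print(f"accept pseudo-label {votes[0]} for: {doc!r}")
    else:
        print(f"reject (views disagree) for: {doc!r}")
```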

The Prediction of Export Credit Guarantee Accident using Machine Learning (기계학습을 이용한 수출신용보증 사고예측)

  • Cho, Jaeyoung;Joo, Jihwan;Han, Ingoo
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.83-102 / 2021
  • The government recently announced various policies for developing the big-data and artificial intelligence fields, providing the public a great opportunity through the disclosure of high-quality data held by public institutions. KSURE (Korea Trade Insurance Corporation) is a major public institution for financial policy in Korea and is strongly committed to backing export companies through various systems; nevertheless, there are still few realized business models based on big-data analysis. This paper therefore aims to develop a new business model for the ex-ante prediction of the likelihood of credit guarantee insurance accidents. We utilize internal data from KSURE, which supports export companies in Korea, apply machine learning models, and compare the performance of predictive models including Logistic Regression, Random Forest, XGBoost, LightGBM, and DNN (Deep Neural Network). For decades, researchers have sought better models for predicting bankruptcy, since ex-ante prediction is crucial for corporate managers, investors, creditors, and other stakeholders. The prediction of financial distress or bankruptcy originated with Smith (1930), Fitzpatrick (1932), and Merwin (1942). One of the most famous models is Altman's Z-score model (Altman, 1968), based on multiple discriminant analysis, which uses five key financial ratios to predict the probability of bankruptcy within the next two years and is still widely used in both research and practice. Ohlson (1980) introduced the logit model to complement some limitations of the earlier models, and Elmer and Borowski (1988) developed and examined a rule-based, automated system for the financial analysis of savings and loans. Since the 1980s, researchers in Korea have also examined the prediction of financial distress or bankruptcy: Kim (1987) analyzed financial ratios and developed a prediction model; Han et al. (1995, 1996, 1997, 2003, 2005, 2006) constructed prediction models using various techniques, including artificial neural networks; Yang (1996) introduced multiple discriminant analysis and the logit model; and Kim and Kim (2001) applied artificial neural network techniques to the ex-ante prediction of insolvent enterprises. Since then, many scholars have tried to predict financial distress or bankruptcy more precisely with models such as Random Forest or SVM. One major distinction of our research from previous work is that we examine the predicted probability of default for each sample case, rather than only the classification accuracy of each model over the entire sample. Most of the predictive models in this paper classify about 70% of the entire sample correctly; specifically, the LightGBM model shows the highest accuracy of 71.1% and the logit model the lowest at 69%. These results, however, are open to multiple interpretations: in a business context, more emphasis must be placed on minimizing Type II error, which causes the more harmful operating losses for the guaranty company. We therefore also compare classification accuracy after splitting the predicted probability of default into ten equal intervals.
When we examine the classification accuracy for each interval, the logit model has the highest accuracy of 100% for the 0-10% interval of predicted default probability but a relatively low accuracy of 61.5% for the 90-100% interval. Random Forest, XGBoost, LightGBM, and DNN show more desirable results, with higher accuracy in both the 0-10% and 90-100% intervals but lower accuracy around the 50% interval. Regarding the distribution of samples across intervals, both the LightGBM and XGBoost models place relatively large numbers of samples in the 0-10% and 90-100% intervals. Although the Random Forest model has an advantage in classification accuracy over a small number of cases, LightGBM or XGBoost may be preferable because they classify a large number of cases into the two extreme intervals, even allowing for their somewhat lower classification accuracy. Considering the importance of Type II error and total prediction accuracy, XGBoost and DNN show superior performance, followed by Random Forest and LightGBM, while logistic regression performs worst. Each predictive model nevertheless has a comparative advantage under particular evaluation standards; for instance, the Random Forest model shows almost 100% accuracy for samples expected to have a high probability of default. Collectively, a more comprehensive ensemble that combines multiple machine learning classifiers and conducts majority voting could maximize overall performance.
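The interval analysis described above, splitting the predicted default probability into ten equal bins and reporting per-bin accuracy and sample counts, can be reproduced in outline as follows; the data are synthetic and the models are untuned stand-ins for those in the paper.

```python
# Minimal sketch of the decile-interval analysis on synthetic data: bin
# each model's predicted default probability into ten equal intervals and
# report per-bin sample counts and classification accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=3)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=3)

models = {"Logit": LogisticRegression(max_iter=1000),
          "RandomForest": RandomForestClassifier(random_state=3)}
for name, model in models.items():
    p = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    bin_idx = np.minimum((p * 10).astype(int), 9)   # ten equal intervals
    print(name)
    for b in range(10):
        mask = bin_idx == b
        if not mask.any():
            continue
        acc = ((p[mask] >= 0.5) == y_te[mask]).mean()
        print(f"  {b/10:.1f}-{(b+1)/10:.1f}: n={mask.sum():4d}, acc={acc:.2f}")
```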

A Correction of East Asian Summer Precipitation Simulated by PNU/CME CGCM Using Multiple Linear Regression (다중 선형 회귀를 이용한 PNU/CME CGCM의 동아시아 여름철 강수예측 보정 연구)

  • Hwang, Yoon-Jeong;Ahn, Joong-Bae
    • Journal of the Korean earth science society / v.28 no.2 / pp.214-226 / 2007
  • Because precipitation is influenced by various atmospheric variables, it is highly nonlinear. Although precipitation predicted by a dynamical model can be corrected with a nonlinear artificial neural network, that approach has limitations such as the choice of initial weights, local minima, and the number of neurons. In this paper, we correct simulated precipitation using multiple linear regression (MLR), which is simple and widely used. First, an ensemble hindcast for April through August of 1979-2005 is conducted with the PNU/CME Coupled General Circulation Model (CGCM) (Park and Ahn, 2004). MLR is applied to precipitation simulated by the PNU/CME CGCM for June (lead 2), July (lead 3), August (lead 4), and the seasonal mean JJA (June to August) over the Northeast Asian region including the Korean Peninsula (110°-145°E, 25-55°N). We build the MLR model from the linear relationship between observed precipitation and the PNU/CME CGCM hindcasts; the predictor variables selected from the CGCM include precipitation, 500 hPa vertical velocity, 200 hPa divergence, and surface air temperature, among others. After performing a leave-one-out cross-validation, the results are compared with those of the PNU/CME CGCM. The results, including Heidke skill scores, demonstrate that the MLR-corrected results provide better rainfall forecasts than the direct CGCM output.
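A minimal sketch of the correction step, assuming synthetic data throughout: observed precipitation is regressed on several CGCM-simulated predictors and the corrected series is evaluated with leave-one-out cross-validation, mirroring the procedure in the abstract.

```python
# Hedged sketch: MLR correction of simulated precipitation with
# leave-one-out cross-validation (all data synthetic; the predictor
# columns are stand-ins for the CGCM variables named in the abstract).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(4)
n_years = 27                                  # 1979-2005 hindcast period
# columns: simulated precip, 500 hPa vertical velocity, 200 hPa
# divergence, surface air temperature
X = rng.normal(size=(n_years, 4))
obs = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.5, size=n_years)

corrected = np.empty(n_years)
for train_idx, test_idx in LeaveOneOut().split(X):
    mlr = LinearRegression().fit(X[train_idx], obs[train_idx])
    corrected[test_idx] = mlr.predict(X[test_idx])

raw_rmse = np.sqrt(np.mean((X[:, 0] - obs) ** 2))   # direct CGCM precip
mlr_rmse = np.sqrt(np.mean((corrected - obs) ** 2))
print(f"direct RMSE={raw_rmse:.2f}, MLR-corrected RMSE={mlr_rmse:.2f}")
```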

A Korean Community-based Question Answering System Using Multiple Machine Learning Methods (다중 기계학습 방법을 이용한 한국어 커뮤니티 기반 질의-응답 시스템)

  • Kwon, Sunjae;Kim, Juae;Kang, Sangwoo;Seo, Jungyun
    • Journal of KIISE / v.43 no.10 / pp.1085-1093 / 2016
  • A community-based question answering system provides answers to questions from documents uploaded to web communities. To enhance question analysis, earlier methods developed specific rules suited to a target domain or applied machine learning to parts of the process; however, these methods incur excessive cost when expanding to new domains, or yield systems overfitted to a specific domain. This paper proposes a multiple machine learning method that automates the overall process by applying an appropriate machine learning technique at each stage of a community-based question answering system. The system is divided into a question analysis part and an answer selection part. The question analysis part consists of a question focus extractor, which identifies the focused phrases in questions using conditional random fields, and a question type classifier, which classifies question topics using a support vector machine. In the answer selection part, an artificial neural network trains the weights used by the similarity estimation models. In addition, there are many cases in which morphological analysis is unreliable for data uploaded to web communities, so we suggest minimizing the impact of morphological analysis by using character features in the question analysis stage. The proposed system outperforms the former system, achieving a Mean Average Precision of 0.765 and an R-Precision of 0.872.
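One stage of this pipeline, the question type classifier, can be illustrated with character n-gram features as the abstract suggests; the toy questions, labels, and classes here are invented, and the SVM is the only element taken from the paper.

```python
# Illustrative sketch of one stage only: a question-type classifier built
# on character n-gram features, which reduce reliance on morphological
# analysis for noisy community text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

questions = ["Who wrote this novel?", "Where is the station?",
             "Who is the director?", "Where can I park nearby?"]
labels = ["PERSON", "LOCATION", "PERSON", "LOCATION"]   # hypothetical types

# Character 2-3 grams sidestep tokenization/morphology errors that are
# common in noisy community text.
clf = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 3)),
    LinearSVC(),
).fit(questions, labels)

print(clf.predict(["Who painted this?", "Where do I transfer?"]))
```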

Very Short- and Long-Term Prediction Method for Solar Power (초 장단기 통합 태양광 발전량 예측 기법)

  • Mun Seop Yun;Se Ryung Lim;Han Seung Jang
    • The Journal of the Korea institute of electronic communication sciences / v.18 no.6 / pp.1143-1150 / 2023
  • The global climate crisis and the implementation of low-carbon policies have led to growing interest in renewable energy and in the related industries. Among renewables, solar power is attracting attention as a representative eco-friendly energy source that is not depleted and emits no pollutants or greenhouse gases, and the deployment of solar power facilities is accordingly increasing worldwide. However, solar power is strongly affected by environmental factors such as geography and weather, so accurate generation forecasts are important for stable operation and efficient management, and the exact amount of solar power is very hard to predict with statistical methods. In addition, conventional methods have focused on either short-term or long-term prediction alone, which makes it time-consuming to obtain separate prediction models for different forecast horizons. This study therefore utilizes the many-to-many structure of a recurrent neural network (RNN) to integrate short-term and long-term predictions of solar power generation, and compares various RNN-based very short- and long-term prediction methods in terms of MSE and R² values.
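A minimal sketch of a many-to-many RNN of the kind this abstract describes, assuming hourly inputs and one generation value predicted per time step; the shapes, layer sizes, and the LSTM cell choice are illustrative assumptions, not the paper's configuration.

```python
# Sketch of a many-to-many RNN for per-step solar generation forecasting
# (random data; sizes are illustrative).
import numpy as np
from tensorflow import keras

T, F = 24, 5                 # 24 hourly steps, 5 weather/irradiance features
X = np.random.rand(256, T, F).astype("float32")
y = np.random.rand(256, T, 1).astype("float32")   # per-step generation target

model = keras.Sequential([
    keras.layers.Input(shape=(T, F)),
    # return_sequences=True makes the layer emit an output at every step,
    # giving the many-to-many structure that serves all horizons at once.
    keras.layers.LSTM(32, return_sequences=True),
    keras.layers.TimeDistributed(keras.layers.Dense(1)),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.predict(X[:1]).shape)   # (1, 24, 1): one forecast per step
```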