• Title/Summary/Keyword: Ensemble model

Search results: 662

Ensemble Model Output Statistics를 이용한 평창지역 다중 모델 앙상블 결합 및 보정 (A Combination and Calibration of Multi-Model Ensemble of PyeongChang Area Using Ensemble Model Output Statistics)

  • 황유선;김찬수
    • 대기 / Vol. 28, No. 3 / pp.247-261 / 2018
  • The objective of this paper is to compare probabilistic temperature forecasts from different regional and global ensemble prediction systems over the PyeongChang area. A statistical post-processing method is used to combine and calibrate forecasts from the different numerical prediction systems, laying greater weight on the ensemble model that exhibits the best performance. Temperature observations were obtained from 30 stations in PyeongChang, and three different ensemble forecasts, derived from the European Centre for Medium-Range Weather Forecasts, the Ensemble Prediction System for Global, and the Limited Area Ensemble Prediction System, were obtained for the period 1 May 2014 to 18 March 2017. Prior to applying the post-processing methods, a reliability analysis was conducted to check the statistical consistency of the ensemble forecasts with the corresponding observations. Then, ensemble model output statistics and bias-correction methods were applied to each raw ensemble model and to the proposed weighted combination of ensembles. The results showed that the proposed methods provide better performance than the raw ensemble mean. In particular, the multi-model forecast based on ensemble model output statistics was superior to the bias-corrected forecast in terms of deterministic prediction.
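
A minimal sketch of the Gaussian ensemble model output statistics (EMOS) idea referred to above, under illustrative assumptions: the predictive mean is a weighted combination of the per-system ensemble means, the predictive variance is linear in the ensemble spread, and the coefficients are fit by minimum CRPS. The synthetic data and variable names are stand-ins, not the paper's data or exact formulation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)

# Synthetic stand-ins: per-case means of three ensemble systems and a pooled ensemble variance.
n = 500
truth = rng.normal(0.0, 3.0, n)                              # observed temperature anomalies
ens_means = truth[:, None] + rng.normal(0.5, 1.0, (n, 3))    # biased per-system ensemble means
ens_var = rng.uniform(0.5, 2.0, n)                           # ensemble variance per case

def crps_normal(y, mu, sigma):
    """CRPS of a Gaussian forecast N(mu, sigma^2) against observation y."""
    z = (y - mu) / sigma
    return sigma * (z * (2.0 * norm.cdf(z) - 1.0) + 2.0 * norm.pdf(z) - 1.0 / np.sqrt(np.pi))

def mean_crps(params):
    a, b1, b2, b3, c, d = params
    mu = a + ens_means @ np.array([b1, b2, b3])              # weighted combination of the systems
    sigma = np.sqrt(np.maximum(c + d * ens_var, 1e-6))       # spread scales with ensemble variance
    return crps_normal(truth, mu, sigma).mean()

# Minimum-CRPS estimation of the EMOS coefficients.
res = minimize(mean_crps, x0=[0.0, 0.4, 0.3, 0.3, 1.0, 0.5], method="Nelder-Mead")
print("fitted EMOS coefficients:", np.round(res.x, 3))
```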

사례 선택 기법을 활용한 앙상블 모형의 성능 개선 (Improving an Ensemble Model Using Instance Selection Method)

  • 민성환
    • 산업경영시스템학회지 / Vol. 39, No. 1 / pp.105-115 / 2016
  • Ensemble classification combines individually trained classifiers to yield more accurate predictions than individual models, and ensemble techniques are very useful for improving the generalization ability of classifiers. The random subspace ensemble technique is a simple but effective method for constructing ensemble classifiers; each classifier in the ensemble is trained on a randomly drawn subset of the features. The instance selection technique selects critical instances while removing irrelevant and noisy instances from the original dataset. Instance selection and random subspace methods are both well known in the field of data mining and have proven to be very effective in many applications. However, few studies have focused on integrating the two. Therefore, this study proposed a new hybrid ensemble model that integrates instance selection and random subspace techniques using genetic algorithms (GAs) to improve the performance of a random subspace ensemble model. GAs are used to select optimal (or near-optimal) instances, which are then used as input data for the random subspace ensemble model. The proposed model was applied to both Kaggle credit data and corporate credit data, and the results were compared with those of other models in terms of classification accuracy, diversity, and the average classification rates of the base classifiers in the ensemble. The experimental results demonstrated that the proposed model outperformed the other models, including the single model, the instance selection model, and the original random subspace ensemble model.
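
A hedged sketch of the general idea, not the paper's implementation: a simple genetic algorithm evolves a binary instance mask, and fitness is the validation accuracy of a random subspace ensemble trained only on the selected instances. The dataset, GA settings, and fitness definition are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X, y = make_classification(n_samples=600, n_features=20, random_state=1)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=1)

def fitness(mask):
    if mask.sum() < 20:                                   # guard against degenerate selections
        return 0.0
    # Random subspace ensemble: no bootstrap, each tree sees half of the features.
    ens = BaggingClassifier(n_estimators=20, bootstrap=False,
                            max_features=0.5, random_state=1)
    ens.fit(X_tr[mask.astype(bool)], y_tr[mask.astype(bool)])
    return ens.score(X_val, y_val)

pop = rng.integers(0, 2, size=(20, len(X_tr)))            # population of binary instance masks
for gen in range(10):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]               # keep the best half
    children = []
    for _ in range(10):
        p1, p2 = parents[rng.integers(10)], parents[rng.integers(10)]
        cut = rng.integers(1, len(X_tr))                  # one-point crossover
        child = np.concatenate([p1[:cut], p2[cut:]])
        flip = rng.random(len(child)) < 0.01              # bit-flip mutation
        child[flip] = 1 - child[flip]
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected instances:", int(best.sum()), "of", len(X_tr))
```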

유전자 알고리즘 기반 통합 앙상블 모형 (Genetic Algorithm based Hybrid Ensemble Model)

  • 민성환
    • Journal of Information Technology Applications and Management / Vol. 23, No. 1 / pp.45-59 / 2016
  • An ensemble classifier combines the outputs of multiple classifiers, and it is widely accepted that ensemble classifiers can improve prediction accuracy. Recently, ensemble techniques have been successfully applied to bankruptcy prediction. Bagging and random subspace are among the most popular ensemble techniques, and each has proved very effective in improving generalization ability. However, few studies have focused on integrating bagging and random subspace. In this study, we proposed a new hybrid ensemble model that integrates bagging and the random subspace method using a genetic algorithm to improve model performance. The proposed model is applied to bankruptcy prediction for Korean companies and compared with the other models in this study. The experimental results showed that the proposed model performs better than the other models, such as the single classifier, the original ensemble model, and the simple hybrid model.
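
A minimal sketch of the hybrid idea only: each base learner is trained on a bootstrap sample (bagging) and a random feature subset (random subspace). The genetic-algorithm-driven integration described in the paper is not reproduced here; the data and member count are assumptions for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
X, y = make_classification(n_samples=500, n_features=20, random_state=2)

members = []
for _ in range(25):
    rows = rng.integers(0, len(X), len(X))                  # bootstrap sample of instances
    cols = rng.choice(X.shape[1], size=10, replace=False)   # random subspace of features
    clf = DecisionTreeClassifier(random_state=0).fit(X[rows][:, cols], y[rows])
    members.append((clf, cols))

# Majority vote over the hybrid ensemble (labels are 0/1 here).
votes = np.array([clf.predict(X[:, cols]) for clf, cols in members])
pred = (votes.mean(axis=0) >= 0.5).astype(int)
print("training accuracy of the hybrid ensemble:", (pred == y).mean())
```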

랜덤화 배깅을 이용한 재무 부실화 예측 (Randomized Bagging for Bankruptcy Prediction)

  • 민성환
    • 한국IT서비스학회지 / Vol. 15, No. 1 / pp.153-166 / 2016
  • Ensemble classification combines individually trained classifiers in order to improve prediction accuracy over individual classifiers. Ensemble techniques have been shown to be very effective in improving the generalization ability of a classifier, but the base classifiers need to be as accurate and diverse as possible for an ensemble model to generalize well. Bagging is one of the most popular ensemble methods: different training data subsets are randomly drawn with replacement from the original training dataset, and the base classifiers are trained on these different bootstrap samples. In this study, we proposed a new bagging-variant ensemble model, Randomized Bagging (RBagging), to improve on the standard bagging ensemble model. The proposed model was applied to the bankruptcy prediction problem using a real data set, and the results were compared with those of the other models. The experimental results showed that the proposed model outperformed the standard bagging model.
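
The abstract details only standard bagging (bootstrap samples plus base classifiers), so the sketch below shows only that baseline on a synthetic, class-imbalanced bankruptcy-style task; the RBagging variant itself is not reproduced, and the data are an assumption.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score

# Imbalanced binary classification as a stand-in for a bankruptcy data set.
X, y = make_classification(n_samples=1000, n_features=15, weights=[0.8, 0.2], random_state=3)
bag = BaggingClassifier(n_estimators=50, random_state=3)   # bootstrap=True by default
print("5-fold CV accuracy of standard bagging:", round(cross_val_score(bag, X, y, cv=5).mean(), 3))
```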

앙상블 칼만 필터를 이용한 태풍 우쿵 (200610) 예측과 앙상블 민감도 분석 (Typhoon Wukong (200610) Prediction Based on The Ensemble Kalman Filter and Ensemble Sensitivity Analysis)

  • 박종임;김현미
    • 대기 / Vol. 20, No. 3 / pp.287-306 / 2010
  • An ensemble Kalman filter (EnKF) with the Weather Research and Forecasting (WRF) model is applied to Typhoon Wukong (200610) to investigate how the performance of ensemble forecasts depends on the experimental configuration of the EnKF. In addition, ensemble sensitivity analysis is applied to the forecast and analysis ensembles generated in the EnKF, to investigate its possible use as adaptive observation guidance. Various experimental configurations are tested by changing the model error treatment, ensemble size, assimilation time window, covariance relaxation, and covariance localization in the EnKF. First of all, experiments using a different physical parameterization scheme for each ensemble member show a smaller root mean square error than those using a single physics configuration for all forecast ensemble members, which implies that accounting for model error yields better forecasts. A larger ensemble is also more beneficial than a smaller one. For the assimilation time window, the experiment using a less frequent window shows better results than that using a more frequent window, which is associated with the availability of observational data in this study. Therefore, incorporating model error, a larger ensemble size, and a less frequent assimilation window into the EnKF is beneficial for better prediction of Typhoon Wukong (200610). Covariance relaxation and localization are relatively less beneficial to the forecasts than the factors mentioned above. The ensemble sensitivity analysis shows that sensitive regions for adaptive observations can be determined from the sensitivity of a forecast measure of interest to the initial ensembles. In addition, the sensitivities calculated by the ensemble sensitivity analysis can be explained by the dynamical relationships among wind, temperature, and pressure.
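
A minimal sketch of the ensemble sensitivity calculation described above: the sensitivity of a scalar forecast metric J (for example, typhoon central pressure) to each initial-state variable is estimated from the ensemble as cov(J, x_i) / var(x_i). The data here are synthetic stand-ins, not WRF output.

```python
import numpy as np

rng = np.random.default_rng(4)
n_members, n_state = 40, 100
X0 = rng.normal(size=(n_members, n_state))          # initial-time analysis ensemble
# Pretend the forecast metric depends strongly on a few initial variables plus noise.
J = 2.0 * X0[:, 10] - 1.5 * X0[:, 55] + 0.3 * rng.normal(size=n_members)

dJ = J - J.mean()
dX = X0 - X0.mean(axis=0)
sensitivity = (dX * dJ[:, None]).mean(axis=0) / dX.var(axis=0)   # cov(J, x_i) / var(x_i)

print("most sensitive initial variables:", np.argsort(np.abs(sensitivity))[-3:])
```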

Hierarchical Bayesian Model을 이용한 GCMs 의 최적 Multi-Model Ensemble 모형 구축 (Optimal Multi-Model Ensemble Model Development Using Hierarchical Bayesian Model Based)

  • 권현한;민영미
    • 한국수자원학회:학술대회논문집 / 한국수자원학회 2009년도 학술발표회 초록집 / pp.1147-1151 / 2009
  • In this study, we address the problem of producing probability forecasts of summer seasonal rainfall on the basis of hindcast experiments from an ensemble of GCMs (cwb, gcps, gdaps, metri, msc_gem, msc_gm2, msc_gm3, msc_sef and ncep). An advanced hierarchical Bayesian weighting scheme is developed and used to combine the seasonal hindcast ensembles of the nine GCMs. The hindcast period covers the 23 years from 1981 to 2003. The simplest approach for combining GCM forecasts is to weight each model equally; this approach is referred to as the pooled ensemble. This study proposes a more complex approach that weights the models spatially and seasonally based on past model performance for rainfall. The Bayesian approach to multi-model combination of GCMs determines the relative weights of each GCM with climatology as the prior, and the weights are chosen to maximize the likelihood score of the posterior probabilities. The individual GCM ensembles, simple poolings of three and six models, and the optimally combined multi-model ensemble are compared.
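
A hedged sketch of the weighting idea only: model weights are chosen to maximize a predictive log-likelihood over a hindcast period, starting from equal weights (the pooled ensemble). The hierarchical, spatial, and seasonal structure of the paper's scheme is not reproduced, and the data are synthetic stand-ins.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(5)
n_years, n_models = 23, 9
obs = rng.normal(size=n_years)                                   # standardized seasonal rainfall
fcst = obs[:, None] + rng.normal(0.0, np.linspace(0.5, 2.0, n_models), (n_years, n_models))

def neg_loglik(theta):
    w = np.exp(theta) / np.exp(theta).sum()                      # softmax keeps weights on the simplex
    mu = fcst @ w                                                # weighted multi-model mean
    resid_sd = np.std(obs - mu)                                  # plug-in predictive spread
    return -norm.logpdf(obs, mu, resid_sd + 1e-6).sum()

res = minimize(neg_loglik, x0=np.zeros(n_models), method="Nelder-Mead")
weights = np.exp(res.x) / np.exp(res.x).sum()
print("likelihood-maximizing model weights:", np.round(weights, 3))
```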


An ensemble learning based Bayesian model updating approach for structural damage identification

  • Guangwei Lin;Yi Zhang;Enjian Cai;Taisen Zhao;Zhaoyan Li
    • Smart Structures and Systems / Vol. 32, No. 1 / pp.61-81 / 2023
  • This study presents an ensemble learning based Bayesian model updating approach for structural damage diagnosis. In the developed framework, the structure is initially decomposed into a set of substructures. An autoregressive moving average with exogenous input (ARMAX) model is first established for structural damage localization based on the structural equation of motion. Wavelet packet decomposition is utilized to extract the damage-sensitive node energy in different frequency bands for constructing structural surrogate models. Four methods, namely the Kriging predictor (KRG), radial basis function neural network (RBFNN), support vector regression (SVR), and multivariate adaptive regression splines (MARS), are selected as candidate structural surrogate models. These models are then resampled by bootstrapping and combined into an ensemble model by probabilistic ensemble. Meanwhile, the maximum entropy principle is adopted to search for new design points for sample-space updating, yielding a more robust ensemble model. Through these iterations, a surrogate ensemble learning based model updating framework with high model construction efficiency and accuracy is obtained. The specifics of the method are discussed and investigated in a case study.
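
An illustrative sketch of the surrogate-ensemble step only: several candidate surrogates (a Gaussian process as a Kriging-style predictor, an SVR, and a small neural network; MARS is omitted since it is not in scikit-learn) are fit on bootstrap resamples and combined with weights based on their held-out error. The toy response surface and weighting rule are assumptions, not the paper's framework.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(6)
X = rng.uniform(-3, 3, (200, 2))                                           # e.g., stiffness parameters
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.05 * rng.normal(size=200)     # toy node-energy response

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=6)

candidates = [GaussianProcessRegressor(),
              SVR(C=10.0),
              MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=6)]
models, errors = [], []
for m in candidates:
    rows = rng.integers(0, len(X_tr), len(X_tr))        # bootstrap resample of the design points
    m.fit(X_tr[rows], y_tr[rows])
    models.append(m)
    errors.append(mean_squared_error(y_te, m.predict(X_te)))

weights = 1.0 / np.array(errors)                        # better surrogates get larger weights
weights /= weights.sum()
ensemble_pred = sum(w * m.predict(X_te) for w, m in zip(weights, models))
print("surrogate-ensemble RMSE:", round(float(np.sqrt(mean_squared_error(y_te, ensemble_pred))), 4))
```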

Leave-one-out Bayesian model averaging for probabilistic ensemble forecasting

  • Kim, Yongdai;Kim, Woosung;Ohn, Ilsang;Kim, Young-Oh
    • Communications for Statistical Applications and Methods / Vol. 24, No. 1 / pp.67-80 / 2017
  • Over the last few decades, ensemble forecasts based on global climate models have become an important part of climate forecasting due to their ability to reduce uncertainty in prediction. In ensemble forecasting, assessing the prediction uncertainty is as important as estimating the optimal weights, and this is achieved through a probabilistic forecast based on the predictive distribution of future climate. Bayesian model averaging has received much attention as a tool for probabilistic forecasting due to its simplicity and superior prediction. In this paper, we propose a new Bayesian model averaging method for probabilistic ensemble forecasting. The proposed method combines a deterministic ensemble forecast based on a multivariate regression approach with Bayesian model averaging. We demonstrate that the proposed method predicts better than the standard Bayesian model averaging approach by analyzing monthly average precipitation and temperature for ten cities in Korea.
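
A minimal sketch of the standard Bayesian model averaging baseline mentioned above: the predictive density is a weighted mixture of Gaussians centred on the member forecasts, with weights and a common variance estimated by EM. The leave-one-out refinement proposed in the paper is not reproduced, and the data are synthetic stand-ins.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
n_obs, n_models = 300, 4
y = rng.normal(size=n_obs)                                                  # observed monthly values
f = y[:, None] + rng.normal(0.0, [0.4, 0.8, 1.2, 2.0], (n_obs, n_models))   # member forecasts

w = np.full(n_models, 1.0 / n_models)                          # start from equal weights
sigma = 1.0
for _ in range(200):                                           # EM iterations
    dens = w * norm.pdf(y[:, None], loc=f, scale=sigma)        # (n_obs, n_models) component densities
    z = dens / dens.sum(axis=1, keepdims=True)                 # responsibilities (E-step)
    w = z.mean(axis=0)                                         # updated weights (M-step)
    sigma = np.sqrt((z * (y[:, None] - f) ** 2).sum() / n_obs) # common predictive spread

print("BMA weights:", np.round(w, 3), " common sigma:", round(float(sigma), 3))
```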

Performance Comparison Analysis of Artificial Intelligence Models for Estimating Remaining Capacity of Lithium-Ion Batteries

  • Kyu-Ha Kim;Byeong-Soo Jung;Sang-Hyun Lee
    • International Journal of Advanced Culture Technology / Vol. 11, No. 3 / pp.310-314 / 2023
  • The purpose of this study is to predict the remaining capacity of lithium-ion batteries and to evaluate the performance of five artificial intelligence models: linear regression, decision tree, random forest, neural network, and an ensemble model. Measured data from the CS2 lithium-ion battery, provided as Excel files, were used, and prediction accuracy was measured using evaluation indicators such as mean square error, mean absolute error, coefficient of determination, and root mean square error. The Root Mean Square Error (RMSE) was 0.045 for the linear regression model, 0.038 for the decision tree model, 0.034 for the random forest model, 0.032 for the neural network model, and 0.030 for the ensemble model. The ensemble model had the best prediction performance, with the neural network model taking second place; the decision tree and random forest models also performed quite well, while the linear regression model showed poor prediction performance compared to the other models. It was therefore concluded that ensemble models and neural network models are most suitable for predicting the remaining capacity of lithium-ion batteries and should be prioritized to improve the efficiency of battery management and energy systems.
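
A hedged reconstruction of the comparison workflow: the same five kinds of regressors are trained on a synthetic capacity-fade curve (a stand-in for the CS2 measurements, which are not reproduced here) and ranked by the evaluation indicators listed in the abstract. Model names and hyperparameters are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, VotingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(8)
cycles = rng.uniform(0, 800, 600)[:, None]                                        # charge cycle count
capacity = 1.1 * np.exp(-cycles[:, 0] / 1200.0) + 0.02 * rng.normal(size=600)     # fake fade curve

X_tr, X_te, y_tr, y_te = train_test_split(cycles, capacity, test_size=0.25, random_state=8)

models = {
    "linear_regression": LinearRegression(),
    "decision_tree": DecisionTreeRegressor(max_depth=6, random_state=8),
    "random_forest": RandomForestRegressor(n_estimators=200, random_state=8),
    "neural_network": make_pipeline(StandardScaler(),
                                    MLPRegressor(hidden_layer_sizes=(64, 64),
                                                 max_iter=3000, random_state=8)),
}
models["ensemble"] = VotingRegressor(list(models.items()))    # averages the four base regressors

for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    rmse = np.sqrt(mean_squared_error(y_te, pred))
    print(f"{name:18s} RMSE={rmse:.4f}  MAE={mean_absolute_error(y_te, pred):.4f}"
          f"  R2={r2_score(y_te, pred):.3f}")
```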

로렌쯔-95 모델을 이용한 앙상블 섭동 비교: 브레드벡터, 직교 브레드벡터와 앙상블 칼만 필터 (Comparison of Ensemble Perturbations using Lorenz-95 Model: Bred vectors, Orthogonal Bred vectors and Ensemble Transform Kalman Filter(ETKF))

  • 정관영;바커 데일;문선옥;전은희;이희상
    • 대기 / Vol. 17, No. 3 / pp.217-230 / 2007
  • Using the Lorenz-95 simple model, which can simulate many atmospheric characteristics, we compare the performance of ensemble strategies such as bred vectors, bred vectors rotated to be orthogonal to each bred member, and the Ensemble Transform Kalman Filter (ETKF). The performance metrics used are the RMSE of the ensemble mean, the ratio of the RMS error of the ensemble mean to the ensemble spread, rank histograms to see whether the ensemble members represent the true probability density function (pdf) well, and the distribution of eigenvalues of the forecast ensemble, which provides useful information on the independence of the members. The orthogonal bred vectors achieve considerable improvement over the bred vectors in all aspects of RMSE, spread, and independence of members. When the bred vectors are rotated for orthogonalization, the improvement in ensemble spread is almost double that in the RMS error of the ensemble mean compared with the non-rotated bred vectors on this simple model, a result consistent with a tentative test on the operational model at KMA. In conclusion, the ETKF is superior to the other two methods in terms of all the assessment measures we used for ensemble prediction, but we cannot decide which perturbation strategy is better with respect to the structure of the background error covariance. Further studies on the best perturbation method for hybrid variational data assimilation that accounts for the error of the day (EOTD) are needed.
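
A minimal sketch of two ingredients named above: the Lorenz-95/96 model integrated with a fourth-order Runge-Kutta scheme, and one bred-vector breeding cycle (perturb, run control and perturbed forecasts, rescale the difference). The breeding interval and amplitudes are assumptions; the orthogonalization and ETKF comparison from the paper are not reproduced.

```python
import numpy as np

N, F, dt = 40, 8.0, 0.05

def lorenz96(x):
    # dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F, with cyclic indices
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4_step(x):
    k1 = lorenz96(x)
    k2 = lorenz96(x + 0.5 * dt * k1)
    k3 = lorenz96(x + 0.5 * dt * k2)
    k4 = lorenz96(x + dt * k3)
    return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

rng = np.random.default_rng(9)
x = F + 0.01 * rng.normal(size=N)               # spin up from a slightly perturbed rest state
for _ in range(1000):
    x = rk4_step(x)

# One breeding cycle: evolve control and perturbed runs, then rescale the difference.
bred = 0.1 * rng.normal(size=N)
amp0 = np.linalg.norm(bred)
ctrl, pert = x.copy(), x + bred
for _ in range(8):                              # breeding interval of 8 steps (an assumption)
    ctrl, pert = rk4_step(ctrl), rk4_step(pert)
bred = (pert - ctrl) * amp0 / np.linalg.norm(pert - ctrl)   # rescaled bred vector
print("bred-vector amplitude after rescaling:", round(float(np.linalg.norm(bred)), 4))
```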