• Title/Summary/Keyword: Robust parameter set selection

Robust Cross Validation Score

  • Park, Dong-Ryeon
    • Communications for Statistical Applications and Methods / v.12 no.2 / pp.413-423 / 2005
  • Consider the problem of estimating the underlying regression function from a set of noisy data contaminated by a long-tailed error distribution. Several robust smoothing techniques exist, and they have turned out to be very useful for reducing the influence of outlying observations. However, no matter which robust smoother is used, a smoothing parameter must still be chosen, and relatively little attention has been paid to robust bandwidth selection methods. In this paper, we adopt ideas from robust location parameter estimation and propose robust cross-validation score functions.
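
As a rough illustration of the idea (not the paper's exact score functions), the sketch below replaces the usual mean-of-squared-residuals leave-one-out criterion with a median of absolute leave-one-out residuals for a Nadaraya-Watson smoother; the Gaussian kernel, the bandwidth grid, and the synthetic data are assumptions for demonstration only.

```python
import numpy as np

def nw_loo_fit(x, y, h):
    """Leave-one-out Nadaraya-Watson estimates with a Gaussian kernel."""
    n = len(x)
    fitted = np.empty(n)
    for i in range(n):
        xi = np.delete(x, i)
        yi = np.delete(y, i)
        w = np.exp(-0.5 * ((x[i] - xi) / h) ** 2)
        fitted[i] = np.sum(w * yi) / np.sum(w)
    return fitted

def robust_cv_score(x, y, h):
    """Median absolute leave-one-out residual: a robust stand-in for the
    usual mean-of-squared-residuals cross-validation criterion."""
    return np.median(np.abs(y - nw_loo_fit(x, y, h)))

def select_bandwidth(x, y, grid):
    """Choose the bandwidth minimizing the robust CV score over a grid."""
    return grid[int(np.argmin([robust_cv_score(x, y, h) for h in grid]))]

# Noisy sine with a few gross outliers standing in for long-tailed errors.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 2.0 * np.pi, 100))
y = np.sin(x) + 0.2 * rng.standard_normal(100)
y[rng.choice(100, size=5, replace=False)] += 5.0
print("selected bandwidth:", select_bandwidth(x, y, np.linspace(0.1, 1.0, 19)))
```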

Robust parameter set selection of unsteady flow model using Pareto optimums and minimax regret approach (파레토 최적화와 최소최대 후회도 방법을 이용한 부정류 계산모형의 안정적인 매개변수 추정)

  • Li, Li; Chung, Eun-Sung; Jun, Kyung Soo
    • Journal of Korea Water Resources Association / v.50 no.3 / pp.191-200 / 2017
  • A robust parameter set (ROPS) selection framework for an unsteady flow model was developed by combining Pareto optimums, obtained from model calibration against multi-site observations, with the minimax regret approach (MRA). The multi-site calibration problem, which is multi-objective, was solved with an aggregation approach that combines the weighted criteria of the different sites into a single measure and then performs a large number of individual optimization runs with different weight combinations to obtain Pareto solutions. A roughness parameter structure that describes the variation of Manning's n with discharge and sub-reach was proposed, and the related coefficients were optimized as model parameters. Applying the MRA as a decision criterion, the Pareto solutions were ranked by their regrets, and the solution with the lowest aggregated regret over both calibration and validation was selected as the single ROPS. It was found that the estimated variable roughness and the corresponding standardized RMSEs at the two gauging stations vary considerably depending on the combination of weights assigned to the two sites. This method can provide a robust parameter set for multi-site calibration problems in hydrologic and hydraulic models.
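
A minimal sketch of the minimax regret ranking step, assuming an already-computed table of standardized RMSEs for a handful of Pareto parameter sets; all numbers are illustrative, not from the paper.

```python
import numpy as np

# Illustrative standardized RMSEs for four Pareto parameter sets:
# columns = (site 1 calibration, site 2 calibration, site 1 validation, site 2 validation).
criteria = np.array([
    [0.42, 0.55, 0.48, 0.60],
    [0.50, 0.47, 0.52, 0.49],
    [0.38, 0.62, 0.45, 0.66],
    [0.46, 0.50, 0.47, 0.53],
])

# Regret of each solution under each criterion: how far it falls short of
# the best (smallest) value any Pareto solution achieves for that criterion.
regret = criteria - criteria.min(axis=0)

# Minimax regret: keep each solution's worst-case regret across criteria,
# then pick the solution whose worst case is smallest.
worst_case = regret.max(axis=1)
rops = int(np.argmin(worst_case))
print("worst-case regrets:", np.round(worst_case, 3))
print("selected robust parameter set (ROPS): solution", rops)
```

The abstract speaks of the lowest aggregated regret over calibration and validation; summing regrets per solution (`regret.sum(axis=1)`) instead of taking the worst case is the obvious alternative aggregation.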

A novel smart criterion of grey-prediction control for practical applications

  • Z.Y. Chen; Ruei-yuan Wang; Yahui Meng; Timothy Chen
    • Smart Structures and Systems / v.31 no.1 / pp.69-78 / 2023
  • The purpose of this paper is to develop a scalable grey predictive controller for systems with unavoidable random delays. Grey prediction is proposed to solve problems caused by incorrect parameter selection and to eliminate the effects of dynamic coupling between degrees of freedom (DOFs) in nonlinear systems. To address the stability problem, this study develops an improved grey-predictive adaptive fuzzy controller, which not only solves the implementation problem by determining the stability of the system but also applies the Linear Matrix Inequality (LMI) approach to calculate the fuzzy controller parameters. Fuzzy logic controllers manipulate robotic systems to improve their control performance. Stability is proved using the Lyapunov stability theorem. The authors compare different controllers and show that the proposed predictive controller can significantly reduce the vibration of offshore platforms while keeping the required control force within an ideally small range. This paper presents a robust fuzzy control design that uses a model-based approach to overcome the effects of modeling errors. To guarantee the asymptotic stability of large nonlinear systems with multiple lags, the stability criterion is derived from the direct Lyapunov method. Based on this criterion and a distributed control system, a set of model-based fuzzy controllers is synthesized to stabilize large-scale nonlinear systems with multiple delays.
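
The abstract does not give the controller's equations; as background, the sketch below shows the standard GM(1,1) grey prediction model that grey-prediction control typically builds on, applied to a hypothetical measurement history (the data and use case are assumptions, not from the paper).

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """Fit a GM(1,1) grey model to a short positive sequence and forecast ahead."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.cumsum(x0)                          # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])               # background values
    B = np.column_stack([-z1, np.ones(n - 1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # developing and grey input coefficients

    def x1_hat(k):                              # fitted AGO sequence at (0-based) time k
        return (x0[0] - b / a) * np.exp(-a * k) + b / a

    k = np.arange(n, n + steps)
    return x1_hat(k) - x1_hat(k - 1)            # inverse AGO recovers the original scale

# Hypothetical history of a monitored response (e.g., platform displacement samples).
print("one-step grey forecast:", gm11_forecast([2.1, 2.4, 2.9, 3.3, 3.9], steps=1))
```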

Simultaneous Optimization of KNN Ensemble Model for Bankruptcy Prediction (부도예측을 위한 KNN 앙상블 모형의 동시 최적화)

  • Min, Sung-Hwan
    • Journal of Intelligence and Information Systems / v.22 no.1 / pp.139-157 / 2016
  • Bankruptcy involves considerable costs, so it can have significant effects on a country's economy. Thus, bankruptcy prediction is an important issue. Over the past several decades, many researchers have addressed topics associated with bankruptcy prediction. Early research on bankruptcy prediction employed conventional statistical methods such as univariate analysis, discriminant analysis, multiple regression, and logistic regression. Later on, many studies began utilizing artificial intelligence techniques such as inductive learning, neural networks, and case-based reasoning. Currently, ensemble models are being utilized to enhance the accuracy of bankruptcy prediction. Ensemble classification involves combining multiple classifiers to obtain more accurate predictions than those obtained using individual models. Ensemble learning techniques are known to be very useful for improving the generalization ability of the classifier. Base classifiers in the ensemble must be as accurate and diverse as possible in order to enhance the generalization ability of an ensemble model. Commonly used methods for constructing ensemble classifiers include bagging, boosting, and random subspace. The random subspace method selects a random feature subset for each classifier from the original feature space to diversify the base classifiers of an ensemble. Each ensemble member is trained on a randomly chosen feature subspace from the original feature set, and predictions from each ensemble member are combined by an aggregation method. The k-nearest neighbors (KNN) classifier is robust with respect to variations in the dataset but is very sensitive to changes in the feature space. For this reason, KNN is a good classifier for the random subspace method. The KNN random subspace ensemble model has been shown to be very effective for improving an individual KNN model. The k parameter of the KNN base classifiers and the feature subsets selected for the base classifiers play an important role in determining the performance of the KNN ensemble model. However, few studies have focused on optimizing the k parameter and feature subsets of base classifiers in the ensemble. This study proposed a new ensemble method that improves upon the performance of the KNN ensemble model by optimizing both the k parameters and feature subsets of the base classifiers. A genetic algorithm was used to optimize the KNN ensemble model and improve the prediction accuracy of the ensemble model. The proposed model was applied to a bankruptcy prediction problem by using a real dataset from Korean companies. The research data included 1800 externally non-audited firms that filed for bankruptcy (900 cases) or non-bankruptcy (900 cases). Initially, the dataset consisted of 134 financial ratios. Prior to the experiments, 75 financial ratios were selected based on an independent sample t-test of each financial ratio as an input variable and bankruptcy or non-bankruptcy as an output variable. Of these, 24 financial ratios were selected by using a logistic regression backward feature selection method. The complete dataset was separated into two parts: training and validation. The training dataset was further divided into two portions: one for training the model and the other for avoiding overfitting. The prediction accuracy against the latter portion was used to determine the fitness value in order to avoid overfitting. The validation dataset was used to evaluate the effectiveness of the final model.
A 10-fold cross-validation was implemented to compare the performances of the proposed model and other models. To evaluate the effectiveness of the proposed model, the classification accuracy of the proposed model was compared with that of other models. The Q-statistic values and average classification accuracies of base classifiers were investigated. The experimental results showed that the proposed model outperformed other models, such as the single model and random subspace ensemble model.
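
A minimal sketch of the random subspace KNN ensemble described above, using scikit-learn's KNeighborsClassifier on synthetic data in place of the Korean financial-ratio dataset; the genetic-algorithm search over each member's k and feature subset, which is the paper's actual contribution, is omitted here and both are simply fixed.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def fit_knn_subspace_ensemble(X, y, n_members=10, subspace_size=8, k=5, seed=0):
    """Random subspace KNN ensemble: each member is trained on a random feature
    subset.  (The paper tunes each member's k and subset with a genetic
    algorithm; here both are fixed for brevity.)"""
    rng = np.random.default_rng(seed)
    members = []
    for _ in range(n_members):
        feats = rng.choice(X.shape[1], size=subspace_size, replace=False)
        members.append((feats, KNeighborsClassifier(n_neighbors=k).fit(X[:, feats], y)))
    return members

def predict_majority(members, X):
    """Aggregate member predictions by majority vote (binary labels 0/1)."""
    votes = np.stack([clf.predict(X[:, feats]) for feats, clf in members])
    return (votes.mean(axis=0) >= 0.5).astype(int)

# Synthetic stand-in for the 24 selected financial ratios and bankruptcy labels.
rng = np.random.default_rng(1)
X = rng.standard_normal((1800, 24))
y = (X[:, :3].sum(axis=1) + 0.5 * rng.standard_normal(1800) > 0).astype(int)
ensemble = fit_knn_subspace_ensemble(X[:1200], y[:1200])
accuracy = float((predict_majority(ensemble, X[1200:]) == y[1200:]).mean())
print("holdout accuracy:", round(accuracy, 3))
```

Per the abstract, the GA's fitness would be the prediction accuracy on the portion of the training data reserved to avoid overfitting, with the final chromosome evaluated once on the separate validation set.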