Title/Summary/Keyword: Ensemble Scheme


Application of frequency domain analysis for generation of seismic floor response spectra

  • Ghosh, A.K.
    • Structural Engineering and Mechanics, v.10 no.1, pp.17-26, 2000
  • This paper presents a case study with a multi-degree-of-freedom (MDOF) system in which the Floor Response Spectra (FRS) have been derived from a large ensemble of ground motion accelerograms. The FRS are evaluated via the frequency response function, which is calculated numerically. The advantage of this scheme over a repetitive time-history analysis of the entire structure for each accelerogram of the set is highlighted. The procedure permits generation of FRS with a specified probability of exceedance.
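
A minimal illustrative sketch (not the paper's code) of the frequency-domain route to one floor response spectrum ordinate: the FFT of the floor acceleration is multiplied by the frequency response function of a single-degree-of-freedom oscillator and the peak absolute acceleration is taken. The record length, damping ratio, and synthetic accelerogram are placeholder assumptions; applying the function to each member of an accelerogram ensemble and taking a percentile of the resulting ordinates gives an FRS with a specified probability of exceedance.

```python
import numpy as np

def frs_ordinate(floor_acc, dt, f0, zeta=0.05):
    """Peak absolute acceleration of an SDOF oscillator (natural frequency f0 in Hz,
    damping ratio zeta) excited by a floor acceleration record, computed via FFT."""
    n = len(floor_acc)
    w = 2.0 * np.pi * np.fft.rfftfreq(n, d=dt)      # angular frequency axis [rad/s]
    w0 = 2.0 * np.pi * f0
    # frequency response function from base acceleration to absolute oscillator acceleration
    H = (w0**2 + 2j * zeta * w0 * w) / (w0**2 - w**2 + 2j * zeta * w0 * w)
    response = np.fft.irfft(H * np.fft.rfft(floor_acc), n=n)
    return np.max(np.abs(response))

# Placeholder accelerogram; in practice, loop over the ensemble of records and take,
# e.g., a chosen percentile of each ordinate for the desired non-exceedance level.
dt, acc = 0.01, 0.1 * np.random.randn(4096)
spectrum = [frs_ordinate(acc, dt, f0) for f0 in np.linspace(0.5, 30.0, 60)]
```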

Real-time prediction on the slurry concentration of cutter suction dredgers using an ensemble learning algorithm

  • Han, Shuai;Li, Mingchao;Li, Heng;Tian, Huijing;Qin, Liang;Li, Jinfeng
    • International conference on construction engineering and project management, 2020.12a, pp.463-481, 2020
  • Cutter suction dredgers (CSDs) are widely used in various dredging constructions such as channel excavation, wharf construction, and reef construction. During CSD construction, the main operation is to control the swing speed of the cutter to keep the slurry concentration in a proper range. However, the slurry concentration cannot be monitored in real time, i.e., there is a "time-lag effect" in the log of slurry concentration, making it difficult for operators to make optimal control decisions. Concerning this issue, a solution scheme that uses real-time monitored indicators to predict the current slurry concentration is proposed in this research. The characteristics of the CSD monitoring data are first studied, and a set of preprocessing methods is presented. We then put forward the concept of the "index class" to select the important indices. Finally, an ensemble learning algorithm is set up to fit the relationship between the slurry concentration and the indices of the index classes. In the experiment, log data over seven days of a practical dredging construction are collected. For comparison, the Deep Neural Network (DNN), Long Short-Term Memory (LSTM), Support Vector Machine (SVM), Random Forest (RF), Gradient Boosting Decision Tree (GBDT), and Bayesian Ridge algorithms are evaluated. The results show that our method has the best performance, with an R2 of 0.886 and a mean square error (MSE) of 5.538. This research provides an effective way to predict the slurry concentration of CSDs in real time and can help to improve the stationarity and production efficiency of dredging construction.
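
As a hedged sketch of the kind of ensemble regressor described above (not the authors' implementation), the snippet below averages a random forest and a gradient-boosted tree over a placeholder feature matrix of monitored indicators; the features, split, and data are assumptions.

```python
import numpy as np
from sklearn.ensemble import (RandomForestRegressor, GradientBoostingRegressor,
                              VotingRegressor)
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

# Placeholder data: rows are time steps, columns are real-time monitored indicators
# (e.g. swing speed, vacuum, flow rate); y is the logged slurry concentration.
rng = np.random.default_rng(0)
X = rng.random((2000, 10))
y = X @ rng.random(10) + 0.1 * rng.standard_normal(2000)

# keep the temporal order: train on the past, test on the most recent fifth
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, shuffle=False)

ensemble = VotingRegressor([
    ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
    ("gbdt", GradientBoostingRegressor(random_state=0)),
])
ensemble.fit(X_tr, y_tr)
pred = ensemble.predict(X_te)
print("R2 =", r2_score(y_te, pred), "MSE =", mean_squared_error(y_te, pred))
```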


Assimilation of Satellite-Based Soil Moisture (SMAP) in KMA GloSea6: The Results of the First Preliminary Experiment

  • Ji, Hee-Sook;Hwang, Seung-On;Lee, Johan;Hyun, Yu-Kyung;Ryu, Young;Boo, Kyung-On
    • Atmosphere, v.32 no.4, pp.395-409, 2022
  • A new soil moisture initialization scheme is applied to the Korea Meteorological Administration (KMA) Global Seasonal forecasting system version 6 (GloSea6). It is designed to ingest the microwave soil moisture retrievals from the Soil Moisture Active Passive (SMAP) radiometer using the Local Ensemble Transform Kalman Filter (LETKF). In this technical note, we describe the procedure of the newly adopted initialization scheme, the change of soil moisture states by assimilation, and the differences in forecast skill for surface temperature and precipitation in GloSea6 simulations from two preliminary experiments. Based on a 4-year analysis experiment, the soil moisture from the land-surface model of the current operational GloSea6 is generally found to be drier than the SMAP observations. LETKF data assimilation shows a tendency toward wetter soil globally, especially in arid areas such as deserts and the Tibetan Plateau. It also increases the soil moisture analysis increments at most soil levels compared with the current operation. A second experiment, a GloSea6 forecast applying the new initialization system to the 2020 summer heat wave case, shows that the memory of soil moisture anomalies obtained by the new initialization system persists throughout the entire three-month forecast period. However, averaged forecast improvements are not substantial and are mixed over Eurasia during the forecast period: forecast skill for precipitation improved slightly, but that for surface air temperature degraded somewhat. Our preliminary results suggest that additional elaborate developments in the soil moisture initialization are still required to improve overall forecast skill.
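
For reference, a minimal ensemble-space analysis step of the (L)ETKF in the Hunt et al. (2007) formulation, without the localization, inflation, or observation operators used in GloSea6; all dimensions and inputs below are placeholders, and this is not the KMA implementation.

```python
import numpy as np

def letkf_analysis(Xb, Yb, y_obs, R):
    """One (global) ETKF analysis step. Xb: (n, k) background state ensemble,
    Yb: (p, k) ensemble mapped to observation space, y_obs: (p,) observations,
    R: (p, p) observation error covariance."""
    n, k = Xb.shape
    xb_mean = Xb.mean(axis=1, keepdims=True)
    yb_mean = Yb.mean(axis=1, keepdims=True)
    Xp = Xb - xb_mean                                   # background perturbations
    Yp = Yb - yb_mean                                   # obs-space perturbations
    C = Yp.T @ np.linalg.inv(R)                         # (k, p)
    Pa = np.linalg.inv((k - 1) * np.eye(k) + C @ Yp)    # analysis cov in ensemble space
    w_mean = Pa @ C @ (y_obs - yb_mean.ravel())         # mean update weights
    # symmetric square root of (k-1)*Pa gives the perturbation weights
    evals, evecs = np.linalg.eigh((k - 1) * Pa)
    W = evecs @ np.diag(np.sqrt(np.maximum(evals, 0.0))) @ evecs.T
    return xb_mean + Xp @ (W + w_mean[:, None])         # analysis ensemble

# Tiny example: 3 state variables, 5 members, 2 observations of the first two variables
rng = np.random.default_rng(0)
Xb = rng.standard_normal((3, 5))
Xa = letkf_analysis(Xb, Xb[:2, :], y_obs=np.array([0.5, -0.2]), R=0.1 * np.eye(2))
```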

Multiscale approach to predict the effective elastic behavior of nanoparticle-reinforced polymer composites

  • Kim, B.R.;Pyo, S.H.;Lemaire, G.;Lee, H.K.
    • Interaction and multiscale mechanics, v.4 no.3, pp.173-185, 2011
  • A multiscale modeling scheme that addresses the influence of the nanoparticle size in nanocomposites consisting of nano-sized spherical particles embedded in a polymer matrix is presented. A micromechanics-based constitutive model for nanoparticle-reinforced polymer composites is derived by incorporating the Eshelby tensor considering the interface effects (Duan et al. 2005a) into the ensemble-volume average method (Ju and Chen 1994). A numerical investigation is carried out to validate the proposed micromechanics-based constitutive model, and a parametric study on the interface moduli is conducted to investigate the effect of interface moduli on the overall behavior of the composites. In addition, molecular dynamics (MD) simulations are performed to determine the mechanical properties of the nanoparticles and polymer. Finally, the overall elastic moduli of the nanoparticle-reinforced polymer composites are estimated using the proposed multiscale approach combining the ensemble-volume average method and the MD simulation. The predictive capability of the proposed multiscale approach has been demonstrated through the multiscale numerical simulations.
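
As a rough baseline for the kind of micromechanics estimate mentioned above, the sketch below evaluates the classical Mori-Tanaka effective moduli for spherical particles in an isotropic matrix; the size-dependent interface modification via the Duan et al. Eshelby tensor, which is the paper's contribution, is not reproduced, and the material constants are illustrative assumptions.

```python
def mori_tanaka_spherical(Km, Gm, Kp, Gp, f):
    """Effective bulk/shear moduli of an isotropic matrix (Km, Gm) containing a
    volume fraction f of spherical particles (Kp, Gp): classical Mori-Tanaka
    estimate without interface (size) effects."""
    K_star = Km + 4.0 * Gm / 3.0
    xi_m = Gm * (9.0 * Km + 8.0 * Gm) / (6.0 * (Km + 2.0 * Gm))
    K_eff = Km + f * (Kp - Km) / (1.0 + (1.0 - f) * (Kp - Km) / K_star)
    G_eff = Gm + f * (Gp - Gm) / (1.0 + (1.0 - f) * (Gp - Gm) / (Gm + xi_m))
    return K_eff, G_eff

# Illustrative constants (GPa): stiff silica-like particles in an epoxy-like matrix
print(mori_tanaka_spherical(Km=4.0, Gm=1.2, Kp=36.0, Gp=31.0, f=0.05))
```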

Robust Digital Watermarking for High-definition Video using Steerable Pyramid Transform, Two Dimensional Fast Fourier Transform and Ensemble Position-based Error Correcting

  • Jin, Xun;Kim, JongWeon
    • KSII Transactions on Internet and Information Systems (TIIS), v.12 no.7, pp.3438-3454, 2018
  • In this paper, we propose a robust blind watermarking scheme for high-definition video. In the embedding process, the luminance component of each frame is transformed by the 2-dimensional fast Fourier transform (2D FFT). A secret key is used to generate a matrix of random numbers for the security of the watermark information. The matrix is transformed by the inverse steerable pyramid transform (SPT). We embed the watermark into the low- and mid-frequency 2D FFT coefficients using the transformed matrix. In the extraction process, the 2D FFT coefficients of each frame and the transformed matrix are each transformed by the SPT to produce two oriented sub-bands. We extract the watermark from each frame by cross-correlating the two oriented sub-bands. If a video is degraded by attacks, the watermarks of its frames contain errors. Thus, we use an ensemble position-based error correcting algorithm to estimate the errors and correct them. The experimental results show that the proposed watermarking algorithm is imperceptible and robust against various attacks. After embedding 64 bits of watermark into each frame, the average peak signal-to-noise ratio between original and embedded frames is 45.7 dB.
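
A heavily simplified sketch of additive spread-spectrum embedding in the 2D FFT domain with correlation-based blind detection; the steerable pyramid shaping, the low/mid-frequency masking, and the ensemble position-based error correction of the paper are omitted, and the frame, key, strength, and threshold values are assumptions.

```python
import numpy as np

def watermark_embed(luma, key, strength=3.0):
    """Add a key-derived pseudo-random pattern to the frame in the 2D FFT domain.
    (The paper additionally shapes the pattern with an inverse steerable pyramid
    transform and restricts it to low/mid frequencies; both steps are omitted.)"""
    rng = np.random.default_rng(key)
    pattern = rng.standard_normal(luma.shape)            # secret, key-derived pattern
    F = np.fft.fft2(luma) + strength * np.fft.fft2(pattern)
    return np.real(np.fft.ifft2(F))

def watermark_detect(luma_wm, key, threshold=0.02):
    """Blind detection: correlate the frame with the regenerated key pattern; the
    host content is approximately uncorrelated with the pattern."""
    rng = np.random.default_rng(key)
    pattern = rng.standard_normal(luma_wm.shape)
    z = (luma_wm - luma_wm.mean()).ravel()
    c = float(pattern.ravel() @ z) / (np.linalg.norm(z) * np.linalg.norm(pattern))
    return c, c > threshold                               # correlation and decision

frame = np.random.rand(720, 1280) * 255.0                 # placeholder luminance plane
print(watermark_detect(watermark_embed(frame, key=42), key=42))
```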

A Study on reduction of out-of-band radiation in T-DMB

  • Bang, Keukjoon
    • Journal of the Institute of Electronics and Information Engineers, v.54 no.2, pp.146-150, 2017
  • In OFDM systems, the out-of-band radiation of an ensemble interferes with adjacent ensembles and degrades receiving performance, so a scheme to reduce out-of-band radiation is very important. In this paper, a time-domain windowing method to reduce out-of-band radiation is considered for DAB systems. We apply the considered method to T-DMB (DAB mode-1) and AT-DMB systems and obtain about a 3-dB reduction in out-of-band radiation. We also show that the considered method does not degrade BER performance.
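
A minimal sketch of time-domain (raised-cosine edge) windowing of one OFDM symbol; successive windowed symbols would be overlap-added over the tapered region. The FFT size, guard length, carrier count, and roll-off length below are placeholders rather than exact T-DMB parameters.

```python
import numpy as np

def windowed_ofdm_symbol(data_syms, n_fft=2048, n_cp=504, n_roll=64):
    """One OFDM symbol with cyclic prefix/suffix and raised-cosine edge tapering.
    Tapering the symbol edges (and overlap-adding adjacent symbols over n_roll
    samples) lowers the spectral sidelobes, i.e. the out-of-band radiation."""
    x = np.fft.ifft(data_syms, n_fft)                            # time-domain symbol
    ext = np.concatenate([x[-(n_cp + n_roll):], x, x[:n_roll]])  # CP + cyclic suffix
    t = np.arange(1, n_roll + 1) / (n_roll + 1)
    rise = 0.5 * (1.0 - np.cos(np.pi * t))                       # 0 -> 1 ramp
    win = np.ones(len(ext))
    win[:n_roll], win[-n_roll:] = rise, rise[::-1]
    return ext * win

# e.g. QPSK symbols on 1536 active carriers (placeholder carrier mapping)
sym = windowed_ofdm_symbol(np.random.choice([1+1j, 1-1j, -1+1j, -1-1j], 1536))
```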

A Comparative Analysis of Ensemble Learning-Based Classification Models for Explainable Term Deposit Subscription Forecasting

  • Shin, Zian;Moon, Jihoon;Rho, Seungmin
    • The Journal of Society for e-Business Studies, v.26 no.3, pp.97-117, 2021
  • Predicting term deposit subscriptions is one of the representative financial marketing tasks in banks, and banks can build a prediction model using various customer information. In order to improve the classification accuracy for term deposit subscriptions, many studies have been conducted based on machine learning techniques. However, even if these models achieve satisfactory performance, utilizing them is not an easy task in industry when their decision-making process is not adequately explained. To address this issue, this paper proposes an explainable scheme for term deposit subscription forecasting. For this, we first construct several classification models using decision tree-based ensemble learning methods, which yield excellent performance on tabular data, such as random forest, gradient boosting machine (GBM), extreme gradient boosting (XGB), and light gradient boosting machine (LightGBM). We then analyze their classification performance in depth through 10-fold cross-validation. After that, we provide the rationale for interpreting the influence of customer information and the decision-making process by applying Shapley additive explanations (SHAP), an explainable artificial intelligence technique, to the best classification model. To verify the practicality and validity of our scheme, experiments were conducted with the bank marketing dataset provided by Kaggle; we applied SHAP to the GBM and LightGBM models, respectively, according to different dataset configurations and then performed analysis and visualization for explainable term deposit subscriptions.
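
A compact sketch of this kind of workflow using scikit-learn, LightGBM, and the shap package, assuming a hypothetical bank.csv with a binary "deposit" column; the actual Kaggle file layout and the paper's model configurations may differ.

```python
import pandas as pd
import lightgbm as lgb
import shap
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Placeholder file and column names standing in for the Kaggle bank-marketing data.
df = pd.read_csv("bank.csv")
X = pd.get_dummies(df.drop(columns=["deposit"]))       # one-hot encode categoricals
y = (df["deposit"] == "yes").astype(int)

models = {
    "RF": RandomForestClassifier(n_estimators=300, random_state=0),
    "GBM": GradientBoostingClassifier(random_state=0),
    "LightGBM": lgb.LGBMClassifier(random_state=0),
}
for name, model in models.items():                      # 10-fold cross-validation
    scores = cross_val_score(model, X, y, cv=10, scoring="roc_auc")
    print(f"{name}: AUC = {scores.mean():.3f} +/- {scores.std():.3f}")

best = models["LightGBM"].fit(X, y)                     # explain one fitted model
shap_values = shap.TreeExplainer(best).shap_values(X)
shap.summary_plot(shap_values, X)                       # global feature influence
```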

Bankruptcy prediction using an improved bagging ensemble

  • Min, Sung-Hwan
    • Journal of Intelligence and Information Systems, v.20 no.4, pp.121-139, 2014
  • Predicting corporate failure has been an important topic in accounting and finance. The costs associated with bankruptcy are high, so the accuracy of bankruptcy prediction is greatly important for financial institutions. Many researchers have dealt with bankruptcy prediction over the past three decades. The current research attempts to use ensemble models to improve the performance of bankruptcy prediction. Ensemble classification combines individually trained classifiers in order to obtain more accurate predictions than individual models, and ensemble techniques have been shown to be very useful for improving the generalization ability of a classifier. Bagging is the most commonly used method for constructing ensemble classifiers. In bagging, different training data subsets are randomly drawn with replacement from the original training dataset, and base classifiers are trained on the different bootstrap samples. Instance selection selects critical instances while deleting irrelevant and harmful instances from the original set. Instance selection and bagging are quite well known in data mining; however, few studies have dealt with their integration. This study proposes an improved bagging ensemble based on instance selection using genetic algorithms (GA) for improving the performance of SVM. GA is an efficient optimization procedure based on the theory of natural selection and evolution. GA uses the idea of survival of the fittest by progressively accepting better solutions to the problem, and it searches by maintaining a population of solutions from which better solutions are created rather than by making incremental changes to a single solution. The initial solution population is generated randomly and evolves into the next generation by genetic operators such as selection, crossover and mutation, and the solutions, coded as strings, are evaluated by the fitness function. The proposed model consists of two phases: GA-based instance selection and instance-based bagging. In the first phase, GA is used to select the optimal instance subset, which is used as input data of the bagging model. In this study, the chromosome is encoded as a binary string representing the instance subset. The population size was set to 100, the maximum number of generations to 150, and the crossover and mutation rates to 0.7 and 0.1, respectively. The prediction accuracy of the model is used as the fitness function of the GA: an SVM model is trained on the training data set using the selected instance subset, and its prediction accuracy over the test data set is used as the fitness value in order to avoid overfitting. In the second phase, we used the optimal instance subset selected in the first phase as input data of the bagging model, with SVM as the base classifier and majority voting as the combining method. This study applies the proposed model to the bankruptcy prediction problem using a real data set from Korean companies. The research data contain 1,832 externally non-audited firms that filed for bankruptcy (916 cases) or non-bankruptcy (916 cases). Financial ratios categorized as stability, profitability, growth, activity and cash flow were investigated through literature review and basic statistical methods, and 8 financial ratios were selected as the final input variables. We separated the whole data set into training, test and validation subsets. We compared the proposed model with several comparative models, including the simple individual SVM model, the simple bagging model and the instance-selection-based SVM model. McNemar tests were used to examine whether the proposed model significantly outperforms the other models. The experimental results show that the proposed model outperforms the other models.
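
A simplified sketch of the two-phase idea (GA-based instance selection followed by bagged SVMs) on synthetic stand-in data; the GA below uses truncation selection, one-point crossover, and bit-flip mutation rather than the paper's exact operators and rates, and the dataset, split, and hyper-parameters are placeholders (the full 100 x 150 loop is slow, so shrink it for a quick run).

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification

# Placeholder data standing in for the 1,832-firm, 8-ratio bankruptcy set.
X, y = make_classification(n_samples=1832, n_features=8, random_state=0)
X_tr, X_tmp, y_tr, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_te, y_val, y_te = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

def fitness(mask):
    """Validation accuracy of an SVM trained only on the selected instances."""
    if mask.sum() < 10:
        return 0.0
    return SVC().fit(X_tr[mask], y_tr[mask]).score(X_val, y_val)

rng = np.random.default_rng(0)
pop = rng.random((100, len(X_tr))) < 0.5                 # population of binary masks
for _ in range(150):                                     # generations
    fit = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(fit)][-50:]                 # keep the 50 fittest
    cut = rng.integers(1, pop.shape[1], size=50)
    kids = np.array([np.concatenate([parents[i][:c], parents[(i + 1) % 50][c:]])
                     for i, c in enumerate(cut)])        # one-point crossover
    kids ^= rng.random(kids.shape) < 0.1                 # bit-flip mutation
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(m) for m in pop])]
bag = BaggingClassifier(SVC(), n_estimators=10, random_state=0)
bag.fit(X_tr[best], y_tr[best])                          # bagged SVMs on selected set
print("test accuracy:", bag.score(X_te, y_te))           # majority vote via predict()
```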

Impact of Cumulus Parameterization Schemes with Different Horizontal Grid Sizes on Prediction of Heavy Rainfall

  • Lee, Jae-Bok;Lee, Dong-Kyou
    • Atmosphere, v.21 no.4, pp.391-404, 2011
  • This study investigates the impact of the cumulus parameterization scheme (CPS) with different horizontal grid sizes on the simulation of a local heavy rainfall case over the Korean Peninsula. The Weather Research and Forecasting (WRF)-based real-time forecast system of the Joint Center for High-impact Weather and Climate Research (JHWC) is used. Three CPSs are used for sensitivity experiments: the BMJ (Betts-Miller-Janjic), GD (Grell-Devenyi ensemble), and KF (Kain-Fritsch) schemes. The heavy rainfall case selected in this study is characterized by a low-level jet and low-level transport of warm and moist air. In the 27-km simulations (DM1), simulated precipitation is overestimated in the experiment with the BMJ scheme and underestimated with the GD scheme. The experiment with the KF scheme shows well-developed precipitation cells in the southern and central regions of the Korean Peninsula, similar to the observations. All schemes show wet and cold biases in the lower troposphere. The simulated rainfall at 27-km horizontal resolution influences the rainfall forecast at 9-km resolution, so the statements for the 27-km resolution also apply to the 9-km resolution. In the CPS sensitivity experiments for DM3 (3-km resolution), the experiment with the BMJ scheme shows better heavy rainfall forecasts than the other experiments. The experiments with a CPS at 3-km horizontal resolution improve rainfall forecasts compared to the experiments without a CPS, especially in rainfall distribution. The experiments with a CPS show a lower LCL (Lifted Condensation Level) than those without a CPS at the maximum rainfall point, and weaker vertical velocity is simulated in the experiments with a CPS. This means that the CPS suppresses convective instability and mainly influences convective rainfall. Consequently, the heavy rainfall simulation with the BMJ CPS is better than with the other CPSs, and even at 3-km horizontal resolution a CPS should be applied to control convective instability. This conclusion can be generalized by conducting more experiments for a variety of cases over the Korean Peninsula.
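
If one wanted to script such CPS sensitivity runs, a namelist toggle along the following lines is conceivable; it relies on the third-party f90nml package, and the cu_physics codes shown (1 = Kain-Fritsch, 2 = Betts-Miller-Janjic, 93 = Grell-Devenyi ensemble in recent WRF releases), like the file names, are assumptions to verify against the WRF version actually used.

```python
import f90nml  # third-party Fortran-namelist reader/writer

# Read the WRF namelist and switch the cumulus scheme per nested domain
# (27-km, 9-km, 3-km); 0 disables the CPS (explicit convection only).
nml = f90nml.read("namelist.input")                  # path is a placeholder
nml["physics"]["cu_physics"] = [2, 2, 2]             # BMJ on all three domains
nml.write("namelist.input.bmj", force=True)

nml["physics"]["cu_physics"] = [2, 2, 0]             # no CPS on the 3-km domain
nml.write("namelist.input.bmj_nocps_d03", force=True)
```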

Sensor Fault Detection Scheme based on Deep Learning and Support Vector Machine (딥 러닝 및 서포트 벡터 머신기반 센서 고장 검출 기법)

  • Yang, Jae-Wan;Lee, Young-Doo;Koo, In-Soo
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.18 no.2, pp.185-195, 2018
  • As machines have become automated across industries in recent years, it is of paramount importance to manage and maintain the automated machines. When a fault occurs in sensors attached to a machine, the machine may malfunction and, further, serious damage may be caused in the process line. To prevent this situation, sensor faults should be monitored, diagnosed and classified in a proper way. In this paper, we propose a sensor fault detection scheme based on SVM and CNN to detect and classify typical sensor faults such as erratic, drift, hard-over, spike, and stuck faults. Time-domain statistical features are utilized for learning and testing in the proposed scheme, and a genetic algorithm is utilized to select the subset of optimal features. To classify multiple sensor faults, a multi-layer SVM is utilized, and an ensemble technique is used for the CNN. As a result, the SVM that utilizes a subset of features selected by the genetic algorithm provides better performance than the SVM that utilizes all the features. However, the performance of the CNN is superior to that of the SVM.
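
A small sketch of the feature-extraction and multi-class SVM part of such a scheme on synthetic windows; the exact feature set, the GA-based feature-subset selection, and the CNN ensemble branch of the paper are not reproduced, and all data and labels below are placeholders.

```python
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def time_domain_features(window):
    """A few common time-domain statistics of one sensor window."""
    return np.array([window.mean(), window.std(), skew(window), kurtosis(window),
                     np.sqrt(np.mean(window**2)),            # RMS
                     window.max() - window.min()])            # peak-to-peak

# Synthetic windows labelled 0..5 (normal, erratic, drift, hard-over, spike, stuck);
# real labels would come from annotated sensor records.
rng = np.random.default_rng(0)
windows = rng.standard_normal((600, 256))
labels = rng.integers(0, 6, size=600)

X = np.array([time_domain_features(w) for w in windows])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", decision_function_shape="ovr"))
clf.fit(X_tr, y_tr)                                   # one-vs-rest multi-class SVM
print("accuracy:", clf.score(X_te, y_te))
```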