• Title/Summary/Keyword: Mean Squared Error, MSE


Observed Data Oriented Bispectral Estimation of Stationary Non-Gaussian Random Signals - Automatic Determination of Smoothing Bandwidth of Bispectral Windows

  • Sasaki, K.;Shirakata, T.
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2003.10a
    • /
    • pp.502-507
    • /
    • 2003
  • Toward the development of practical methods for observed-data-oriented bispectral estimation, an automatic means for determining the smoothing bandwidth of bispectral windows is proposed that can also provide an associated optimum bispectral estimate of stationary non-Gaussian signals, systematically, using only an observed time series of finite length. For conventional non-parametric bispectral estimation, the MSE (mean squared error) of the normalized estimate is reviewed under a certain mixing condition and sufficient data length, mainly from the viewpoint of the inverse relation between its bias and variance with respect to the smoothing bandwidth. Based on this fundamental relation, a systematic method not only for determining the bandwidth but also for obtaining the optimum bispectral estimate is presented by newly introducing an MSE evaluation index computed only from the observed time series of finite length. The effectiveness and fundamental features of the proposed method are illustrated by basic numerical experiments.
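The inverse relation between bias and variance that the abstract exploits can be illustrated with a toy moving-average smoother (a deliberately simplified stand-in for bispectral windowing, not the paper's method): widening the smoothing window lowers variance but raises bias, so the empirical MSE is minimized at an intermediate bandwidth.

```python
import random
import statistics

random.seed(0)

def true_signal(t):
    # a smooth convex signal; its curvature is what creates smoothing bias
    return (t - 50) ** 2 / 500.0

def smoothed_estimate(noisy, center, half_width):
    # moving-average smoother: wider window -> less variance, more bias
    window = noisy[center - half_width : center + half_width + 1]
    return sum(window) / len(window)

def empirical_mse(half_width, trials=2000, noise_sd=1.0, center=50):
    # Monte Carlo estimate of MSE at the center point
    truth = true_signal(center)
    errors = []
    for _ in range(trials):
        noisy = [true_signal(t) + random.gauss(0.0, noise_sd) for t in range(101)]
        errors.append(smoothed_estimate(noisy, center, half_width) - truth)
    bias = statistics.fmean(errors)
    var = statistics.pvariance(errors)
    return bias * bias + var  # MSE = bias^2 + variance

# too narrow (variance-dominated), moderate, too wide (bias-dominated)
for hw in (1, 5, 20, 40):
    print(hw, round(empirical_mse(hw), 4))
```

The printed MSE traces the familiar U-shape over bandwidth; the paper's contribution is choosing that minimizing bandwidth automatically from a single observed record.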


Supervised learning and frequency domain averaging-based adaptive channel estimation scheme for filterbank multicarrier with offset quadrature amplitude modulation

  • Singh, Vibhutesh Kumar;Upadhyay, Nidhi;Flanagan, Mark;Cardiff, Barry
    • ETRI Journal
    • /
    • v.43 no.6
    • /
    • pp.966-977
    • /
    • 2021
  • Filterbank multicarrier with offset quadrature amplitude modulation (FBMC-OQAM) is an attractive alternative to the orthogonal frequency division multiplexing (OFDM) modulation technique. In comparison with OFDM, the FBMC-OQAM signal has better spectral confinement, higher spectral efficiency, and greater tolerance to synchronization errors, primarily due to per-subcarrier filtering using a frequency-time localized prototype filter. However, the filtering process introduces intrinsic interference among the symbols and complicates channel estimation (CE). An efficient way to improve CE in FBMC-OQAM is a technique known as windowed frequency domain averaging (FDA); however, it requires a priori knowledge of the window length parameter, which is set based on the channel's frequency selectivity (FS). As the channel's FS is neither fixed nor known a priori, we propose a k-nearest-neighbor-based machine learning algorithm to classify the FS and decide on the FDA's window length. A comparative theoretical analysis of the mean squared error (MSE) is performed to prove the proposed CE scheme's effectiveness, validated through extensive simulations. The adaptive CE scheme is shown to yield a reduction in CE-MSE and improved bit error rates compared with popular preamble-based CE schemes for FBMC-OQAM, without a priori knowledge of the channel's frequency selectivity.
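A rough sketch of the classification step: a k-nearest-neighbor vote maps an estimated channel feature to an FDA window length. The feature, training set, and class-to-window mapping below are invented for illustration and are not taken from the paper.

```python
# toy training set: (feature = estimated RMS delay spread, label = FS class)
# labels: 0 = mildly selective channel (wide FDA window), 1 = highly selective (narrow window)
TRAIN = [(0.1, 0), (0.2, 0), (0.3, 0), (1.0, 1), (1.2, 1), (1.5, 1)]
WINDOW_LENGTH = {0: 9, 1: 3}  # hypothetical mapping from FS class to FDA window length

def knn_classify(x, k=3):
    # sort training points by distance to x, then majority-vote among the k nearest
    nearest = sorted(TRAIN, key=lambda p: abs(p[0] - x))[:k]
    votes = sum(label for _, label in nearest)
    return 1 if votes * 2 > k else 0

def fda_window_length(x):
    return WINDOW_LENGTH[knn_classify(x)]

print(fda_window_length(0.15))  # low delay spread -> wide averaging window -> 9
print(fda_window_length(1.3))   # high selectivity -> narrow window -> 3
```

In the paper's setting the feature would be derived from the received preamble rather than assumed known, but the decision structure is the same.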

A Comparative Study of Technological Forecasting Methods with the Case of Main Battle Tank by Ranking Efficient Units in DEA (DEA기반 순위선정 절차를 활용한 주력전차의 기술예측방법 비교연구)

  • Kim, Jae-Oh;Kim, Jae-Hee;Kim, Sheung-Kown
    • Journal of the military operations research society of Korea
    • /
    • v.33 no.2
    • /
    • pp.61-73
    • /
    • 2007
  • We examine technological forecasting with an extended TFDEA (Technological Forecasting with Data Envelopment Analysis) and apply the extended method to the technological forecasting problem of the main battle tank. TFDEA risks relying on comparatively inefficient DMUs (Decision Making Units) because it is based on DEA (Data Envelopment Analysis), which usually yields multiple efficient DMUs; TFDEA may therefore produce incorrect technological forecasts. Instead of using simple DEA, we incorporated the concepts of super-efficiency, cross-efficiency, and CCCA (Constrained Canonical Correlation Analysis) into TFDEA, and applied each method to the case study of the main battle tank using verifiable practical data sets. The comparative analysis shows that using CCCA with TFDEA yields better prediction accuracy with respect to MAE (Mean Absolute Error), MSE (Mean Squared Error), and RMSE (Root Mean Squared Error) than using super-efficiency or cross-efficiency.
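The three accuracy metrics used in the comparison are standard; minimal reference implementations:

```python
import math

def mae(actual, forecast):
    # mean absolute error
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def mse(actual, forecast):
    # mean squared error
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)

def rmse(actual, forecast):
    # root mean squared error
    return math.sqrt(mse(actual, forecast))

actual   = [10.0, 12.0, 11.0, 13.0]
forecast = [ 9.0, 12.5, 12.0, 12.0]
print(mae(actual, forecast))             # -> 0.875
print(mse(actual, forecast))             # -> 0.8125
print(round(rmse(actual, forecast), 4))
```

MSE and RMSE penalize large errors more heavily than MAE, which is why papers typically report all three.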

Bayes estimation of entropy of exponential distribution based on multiply Type II censored competing risks data

  • Lee, Kyeongjun;Cho, Youngseuk
    • Journal of the Korean Data and Information Science Society
    • /
    • v.26 no.6
    • /
    • pp.1573-1582
    • /
    • 2015
  • In lifetime data analysis, the lifetimes of test items may not be recorded exactly. There are also situations in which the withdrawal of items prior to failure is prearranged in order to decrease the time or cost of the experiment. Moreover, more than one cause or risk factor may be present at the same time, so analysis of censored competing risks data is needed. In this article, we derive the Bayes estimators of the entropy function under the exponential distribution with an unknown scale parameter, based on multiply Type II censored competing risks data. The Bayes estimators under the squared error loss function (SELF), the precautionary loss function (PLF), and the DeGroot loss function (DLF) are provided, and Lindley's approximation method is used to compute them. We compare the proposed Bayes estimators in terms of the mean squared error (MSE) for various multiply Type II censored competing risks data. Finally, a real data set is analyzed for illustrative purposes.
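Under the squared error loss function, the Bayes estimator is the posterior mean. The complete-sample sketch below (a conjugate Gamma prior on the rate of an exponential, with a Monte Carlo posterior mean) illustrates that principle only; the paper's multiply Type II censoring, competing risks, and Lindley approximation are not reproduced here.

```python
import math
import random

random.seed(1)

# For Exp(rate lam), the entropy is H(lam) = 1 - ln(lam).
def posterior_mean_entropy(data, a=1.0, b=1.0, draws=20000):
    # Gamma(a, b) prior on lam; complete-sample posterior is Gamma(a + n, b + sum(data)).
    # The SELF Bayes estimate of H is the posterior mean of 1 - ln(lam),
    # approximated here by Monte Carlo sampling from the posterior.
    shape, rate = a + len(data), b + sum(data)
    samples = (random.gammavariate(shape, 1.0 / rate) for _ in range(draws))
    return sum(1.0 - math.log(lam) for lam in samples) / draws

true_lam = 2.0
data = [random.expovariate(true_lam) for _ in range(30)]
est = posterior_mean_entropy(data)
print(round(est, 3), "vs true", round(1.0 - math.log(true_lam), 3))
```

With 30 observations the posterior mean lands close to the true entropy 1 - ln 2; the paper's contribution is doing this analytically (via Lindley's approximation) under censoring and competing risks.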

A Robust Design of Response Surface Methods (반응표면방법론에서의 강건한 실험계획)

  • 임용빈;오만숙
    • The Korean Journal of Applied Statistics
    • /
    • v.15 no.2
    • /
    • pp.395-403
    • /
    • 2002
  • In the third phase of the response surface methods, the first-order model is assumed and the curvature of the response surface is checked with a fractional factorial design augmented by centre runs. We further assume that the true model is a quadratic polynomial. To choose an optimal design, Box and Draper (1959) suggested the use of the average mean squared error (AMSE), the average of the MSE of ŷ(x) over the region of interest R. The AMSE can be partitioned into the average prediction variance (APV) and the average squared bias (ASB). Since the AMSE is a function of design moments, region moments, and a standardized vector of parameters, it is not possible to select a single design that minimizes the AMSE. As a practical alternative, Box and Draper (1959) proposed the minimum bias design, which minimizes the ASB, and showed that the factorial design points are shrunk toward the origin in a minimum bias design. In this paper we propose a robust AMSE design that maximizes the minimum efficiency of the design with respect to the standardized vector of parameters.
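The AMSE partition the abstract cites is the standard Box–Draper decomposition; in their normalization (sketched from the general theory, not transcribed from this paper), with Ω⁻¹ = ∫_R dx:

```latex
J \;=\; \frac{N\Omega}{\sigma^{2}} \int_{R} \mathrm{E}\!\left[\bigl(\hat{y}(\mathbf{x}) - \eta(\mathbf{x})\bigr)^{2}\right] d\mathbf{x}
  \;=\; \underbrace{\frac{N\Omega}{\sigma^{2}} \int_{R} \operatorname{Var}\hat{y}(\mathbf{x})\, d\mathbf{x}}_{\text{APV}}
  \;+\; \underbrace{\frac{N\Omega}{\sigma^{2}} \int_{R} \bigl(\mathrm{E}\,\hat{y}(\mathbf{x}) - \eta(\mathbf{x})\bigr)^{2} d\mathbf{x}}_{\text{ASB}}
```

Here η(x) is the true (quadratic) response and ŷ(x) the first-order prediction. The minimum bias design minimizes the second term alone, which is what makes a robust compromise over the whole AMSE attractive.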

An Enhanced Channel Estimation Technique for MIMO OFDM Systems (MIMO OFDM 시스템을 위한 향상된 채널 추정 기법)

  • Shin Myeongcheol;Lee Hakju;Shim Seijoon;Lee Chungyong
    • Journal of the Institute of Electronics Engineers of Korea TC
    • /
    • v.41 no.6 s.324
    • /
    • pp.9-15
    • /
    • 2004
  • In MIMO-OFDM systems, conventional channel estimation techniques using comb-type training symbols give relatively large mean squared errors (MSEs) at the edge subcarriers. To reduce the MSEs at these subcarriers, a cyclic comb-type training structure is proposed in which all types of training symbols are transmitted cyclically at each antenna. At the receiver, the channel frequency responses estimated from each training symbol are averaged with weights obtained from the corresponding MSEs. Computer simulations showed that the proposed cyclic training structure gives more SNR gain than the conventional training structure.
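The averaging step can be sketched as inverse-MSE combining, the standard minimum-variance rule for merging unbiased estimates. The scalar, real-valued example below is illustrative only, not the paper's MIMO-OFDM implementation.

```python
def mse_weighted_average(estimates, mses):
    # weight each estimate proportionally to 1/MSE, normalized to sum to 1;
    # the least reliable estimate contributes the least
    inv = [1.0 / m for m in mses]
    total = sum(inv)
    return sum((w / total) * h for w, h in zip(inv, estimates))

# three estimates of the same channel tap; the last one is the least reliable
h_hats = [1.02, 0.98, 1.30]
mses   = [0.01, 0.01, 0.10]
print(round(mse_weighted_average(h_hats, mses), 4))
```

The poor third estimate is down-weighted by a factor of ten, so the combined value stays close to the two reliable ones.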

A comparison of neural networks to ols regression in process/quality control applications

  • Nam, Kyungdoo;Sanford, Clive C.;Jayakumar, Maliyakal D.
    • Korean Management Science Review
    • /
    • v.11 no.2
    • /
    • pp.133-146
    • /
    • 1994
  • This study compares the performance of neural networks and ordinary least squares regression on quality-control processes. We examine the applicability of neural networks because they do not require any assumptions regarding either the functional form of the underlying process or the distribution of errors. The coefficient of determination ($R^2$), mean absolute deviation (MAD), and mean squared error (MSE) metrics indicate that neural networks are a viable and potentially superior technique. We also demonstrate that the magnitude of the cumulative weights in the neural network's input layer can be used to determine the relative importance of predictor variables.
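Minimal reference implementations of the two comparison metrics not already spelled out in this listing, $R^2$ and MAD:

```python
def mad(actual, predicted):
    # mean absolute deviation of the residuals
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def r_squared(actual, predicted):
    # 1 - SSE/SST: the share of variance in the actuals explained by the model
    mean_a = sum(actual) / len(actual)
    sse = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    sst = sum((a - mean_a) ** 2 for a in actual)
    return 1.0 - sse / sst

actual    = [2.0, 4.0, 6.0, 8.0]
predicted = [2.5, 3.5, 6.5, 7.5]
print(mad(actual, predicted))        # -> 0.5
print(r_squared(actual, predicted))  # -> 0.95
```

Because neither metric assumes a functional form or an error distribution, both are fair yardsticks for comparing a neural network against OLS regression.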


Performance Analysis of Maximum Zero-Error Probability Algorithm for Blind Equalization in Impulsive Noise Channels (충격성 잡음 채널의 블라인드 등화를 위한 최대 영-확률 알고리듬에 대한 성능 분석)

  • Kim, Nam-Yong
    • Journal of Internet Computing and Services
    • /
    • v.11 no.5
    • /
    • pp.1-8
    • /
    • 2010
  • This paper presents a performance study of blind equalizer algorithms for impulsive-noise environments based on the Gaussian kernel and the constant modulus error (CME). The constant modulus algorithm (CMA), based on the CME and the mean squared error (MSE) criterion, fails in impulsive-noise environments, and the correntropy-based blind method recently introduced for impulsive-noise resistance has shown unsatisfactory results in PAM systems. Theoretical and simulation analysis reveals that maximizing the zero-error probability based on the CME (MZEP-CME), originally proposed for Gaussian noise environments, produces superior performance in impulsive-noise channels as well. The Gaussian kernel of MZEP-CME strongly desensitizes the criterion to the large differences between the power of impulse-corrupted outputs and the constant modulus value.
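The robustness argument can be sketched numerically: a single impulse-corrupted output explodes the MSE of the constant modulus error, while a Gaussian-kernel estimate of the error density at zero barely moves. This is a simplified real-valued illustration, not the paper's equalizer.

```python
import math

def cme_errors(outputs, modulus=1.0):
    # constant modulus error for each (real-valued) equalizer output
    return [y * y - modulus for y in outputs]

def mse_cost(errors):
    return sum(e * e for e in errors) / len(errors)

def zero_error_probability(errors, sigma=1.0):
    # Gaussian-kernel density estimate of the error distribution at zero;
    # maximizing this drives the error mass toward e = 0
    norm = 1.0 / (len(errors) * sigma * math.sqrt(2.0 * math.pi))
    return norm * sum(math.exp(-(e * e) / (2.0 * sigma * sigma)) for e in errors)

clean     = [1.01, 0.99, 1.02, 0.98]
impulsive = clean + [25.0]  # one impulse-corrupted output

for outs in (clean, impulsive):
    e = cme_errors(outs)
    print(round(mse_cost(e), 3), round(zero_error_probability(e), 3))
```

The kernel value exp(-e²/2σ²) of the huge impulse error is essentially zero, so the outlier simply drops out of the criterion instead of dominating it as in the MSE.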

A New Metric for Evaluation of Forecasting Methods : Weighted Absolute and Cumulative Forecast Error (수요 예측 평가를 위한 가중절대누적오차지표의 개발)

  • Choi, Dea-Il;Ok, Chang-Soo
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.38 no.3
    • /
    • pp.159-168
    • /
    • 2015
  • Aggregate production planning determines levels of production, human resources, and inventory to maximize a company's profits and fulfill customer demand based on demand forecasts. Since the performance of aggregate production planning heavily depends on the accuracy of the forecast demands, choosing an accurate forecasting method must precede a good aggregate production plan. Typical forecasting error metrics such as MSE (Mean Squared Error), MAD (Mean Absolute Deviation), MAPE (Mean Absolute Percentage Error), and CFE (Cumulated Forecast Error) are used to choose a forecasting method for aggregate production planning. However, these metrics are designed only to measure the difference between real and forecast demands; they cannot account for the consequences of forecasting error, such as increased cost or decreased profit. Consequently, the traditional metrics do not give enough guidance for selecting a good forecasting method for aggregate production planning. To overcome this limitation, this study suggests a new metric, WACFE (Weighted Absolute and Cumulative Forecast Error), for evaluating forecasting methods. The WACFE is designed to consider not only forecasting errors but also the costs those errors might cause in aggregate production planning: it is a product sum of the cumulative forecasting error and weight factors for backorder and inventory costs. We demonstrate the effectiveness of the proposed metric by conducting intensive experiments with demand data sets from the M3-competition. Finally, we show that the WACFE has a higher correlation with total cost than the other metrics and consequently performs better in selecting forecasting methods for aggregate production planning.
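An illustrative sketch only: the abstract describes WACFE as a product sum of cumulative forecast error and weight factors for backorder and inventory costs, but the weights and exact form below are assumptions for illustration, not the authors' formula.

```python
def wacfe(actual, forecast, backorder_weight=2.0, inventory_weight=1.0):
    # ASSUMED form: accumulate the running forecast error and weight its
    # magnitude by whether it implies backorders or excess inventory
    total, cum_error = 0.0, 0.0
    for a, f in zip(actual, forecast):
        cum_error += a - f  # running (cumulative) forecast error
        # under-forecast (cum_error > 0) causes backorders; over-forecast, inventory
        w = backorder_weight if cum_error > 0 else inventory_weight
        total += w * abs(cum_error)
    return total

actual   = [100, 110, 120, 130]
forecast = [105, 105, 125, 120]
print(wacfe(actual, forecast))  # -> 20.0
```

The asymmetric weights are the point: the same absolute error is penalized more when it leaves demand unmet than when it merely builds inventory, which is what ties the metric to planning cost.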

Comparative analysis of wavelet transform and machine learning approaches for noise reduction in water level data (웨이블릿 변환과 기계 학습 접근법을 이용한 수위 데이터의 노이즈 제거 비교 분석)

  • Hwang, Yukwan;Lim, Kyoung Jae;Kim, Jonggun;Shin, Minhwan;Park, Youn Shik;Shin, Yongchul;Ji, Bongjun
    • Journal of Korea Water Resources Association
    • /
    • v.57 no.3
    • /
    • pp.209-223
    • /
    • 2024
  • In the context of the fourth industrial revolution, data-driven decision-making has become increasingly pivotal. However, the integrity of data analysis is compromised if data quality is not adequately ensured, potentially leading to biased interpretations. This is particularly critical for water level data, which is essential for water resource management and often encounters quality issues such as missing values, spikes, and noise. This study addresses noise-induced data quality deterioration, which complicates trend analysis and may produce anomalous outliers. To mitigate this issue, we propose a noise removal strategy employing the Wavelet Transform, a technique renowned for its efficacy in signal processing and noise elimination. The advantage of the Wavelet Transform lies in its operational efficiency: it reduces both time and costs because it does not require the true values of the collected data. This study conducted a comparative performance evaluation between our Wavelet Transform-based approach and the Denoising Autoencoder, a prominent machine learning method for noise reduction. The findings demonstrate that the Coiflets wavelet function outperforms the Denoising Autoencoder across various metrics, including Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), and Mean Squared Error (MSE). This superiority suggests that selecting a wavelet function tailored to the specific application environment can effectively address data quality issues caused by noise. The study underscores the potential of the Wavelet Transform as a robust tool for enhancing the quality of water level data, thereby contributing to the reliability of water resource management decisions.
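A single-level Haar transform with soft thresholding gives the flavor of wavelet denoising. This is a deliberately simplified stand-in: the study uses the Coiflets wavelet family, and practical pipelines use multi-level decompositions.

```python
import math

def haar_step(x):
    # one level of the Haar transform: pairwise averages and differences
    approx = [(x[2 * i] + x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return approx, detail

def inverse_haar_step(approx, detail):
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) / math.sqrt(2), (a - d) / math.sqrt(2)])
    return out

def soft_threshold(coeffs, t):
    # shrink small detail coefficients toward zero: they are mostly noise
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

def denoise(signal, t=0.5):
    approx, detail = haar_step(signal)
    return inverse_haar_step(approx, soft_threshold(detail, t))

# step-like water level series with small measurement noise
noisy = [1.0, 1.2, 1.1, 0.9, 3.0, 3.2, 3.1, 2.9]
print([round(v, 3) for v in denoise(noisy)])
```

The small pairwise fluctuations are zeroed out while the step between the two levels survives, which is exactly the property that makes wavelets attractive for water level records: no ground-truth values are needed to apply the threshold.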