• Title/Summary/Keyword: Mean Squared Error, MSE

174 search results

Performance Degradation Due to Particle Impoverishment in Particle Filtering

  • Lim, Jaechan
    • Journal of Electrical Engineering and Technology, v.9 no.6, pp.2107-2113, 2014
  • Particle filtering (PF) has shown results that outperform classical Kalman filtering (KF), particularly for highly nonlinear problems. However, PF is not universally superior to the extended KF (EKF), although cases where the EKF outperforms PF are seldom reported in the literature. In particular, PF approaches show degraded performance for problems where the state noise is very small or zero. This is because the particles become identical within a few iterations, the so-called particle impoverishment (PI) phenomenon; consequently, no matter how many particles are employed, particle diversity is lost, regardless of whether the impoverished particle is close to the true state value. In this paper, we investigate the PI phenomenon and show an example problem where a classical KF approach outperforms PF approaches in terms of the mean squared error (MSE) criterion. Furthermore, we compare the processing speed of the EKF and PF approaches and show that the classical EKF approaches are faster. Therefore, PF approaches may not always be the better option than the classical EKF for nonlinear problems. Specifically, we show that the unscented Kalman filter outperforms the PF approaches (see Fig. 7(c) of the paper for processing speed and Fig. 6 for MSE performance).
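
As a rough illustration of the particle impoverishment effect described above, the following sketch runs a toy bootstrap particle filter on a scalar model with zero state noise (the model, noise levels, and particle count are illustrative assumptions, not the paper's experiment). After a few resampling steps the particles become identical, so increasing their number does not restore diversity.

```python
# Minimal bootstrap particle filter on a toy scalar model with ZERO state noise,
# illustrating particle impoverishment (illustrative model, not the paper's).
import numpy as np

rng = np.random.default_rng(0)

N = 500               # number of particles
T = 30                # time steps
meas_std = 0.5        # measurement noise standard deviation
x_true = 1.0          # constant true state (state noise is zero)

particles = rng.normal(0.0, 2.0, size=N)   # initial particle cloud
for t in range(T):
    z = x_true + rng.normal(0.0, meas_std)          # noisy measurement
    # Propagation adds nothing: with zero state noise, particles never spread out again.
    w = np.exp(-0.5 * ((z - particles) / meas_std) ** 2)
    w /= w.sum()
    particles = particles[rng.choice(N, size=N, p=w)]   # multinomial resampling
    if t % 10 == 0 or t == T - 1:
        print(f"t={t:2d}  unique particles: {np.unique(particles).size:4d}  "
              f"estimate: {particles.mean():+.3f}")
# The count of unique particles collapses toward 1 within a few iterations,
# so the Monte Carlo approximation loses diversity no matter how large N is.
```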

The performance of SA filters according to the filter order (SA 여파기의 차수에 따른 성능 평가)

  • Song, Jong-Kwan;Yoon, Byung-Woo
    • Journal of the Korea Institute of Information and Communication Engineering, v.9 no.7, pp.1502-1507, 2005
  • SA filters have a very flexible structure obtained by limiting the maximum subwindow size. This flexibility provides an effective trade-off between the complexity and the performance of the filters. In this paper, experimental results are presented that show how performance varies with the filter order and the subfilter type (such as max, min, exclusive-OR, and mod). We designed optimal SA filters that minimize the MSE under various noise conditions. These results reveal several new properties of SA filters.
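
The order-versus-MSE evaluation described above can be illustrated generically with a simple order sweep; the sketch below uses a plain moving-average filter as a stand-in (it is not an SA subfilter) and made-up signal and noise parameters.

```python
# Hypothetical order-sweep experiment: measure the MSE of a simple moving-average
# filter (a stand-in, NOT an SA subfilter) against a clean reference signal.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
t = np.linspace(0.0, 1.0, n)
clean = np.sin(2 * np.pi * 5 * t)                 # reference signal
noisy = clean + rng.normal(0.0, 0.3, size=n)      # additive Gaussian noise

def moving_average(x: np.ndarray, order: int) -> np.ndarray:
    """Centered moving average of the given order (window length)."""
    kernel = np.ones(order) / order
    return np.convolve(x, kernel, mode="same")

for order in (1, 3, 7, 15, 31, 63, 127, 255):
    mse = np.mean((moving_average(noisy, order) - clean) ** 2)
    print(f"order={order:3d}  MSE={mse:.5f}")
# The MSE first drops as the order grows (noise averaging) and then rises once
# the window becomes long enough to smear the underlying sinusoid itself.
```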

A Study on Optimum Subband Filter Bank Design Using Vector Quantizer (벡터 양자화기를 사용한 최적의 부대역 필터 뱅크 구현에 관한 연구)

  • Jee, Innho
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.17 no.1, pp.107-113, 2017
  • This paper provides a new approach for modeling a vector quantizer (VQ) followed by the analysis and design of subband codecs with embedded VQs. We compute the mean squared reconstruction error (MSE), which depends on N, the number of entries in each codebook, on k, the length of each codeword, and on the filter bank (FB) coefficients of the subband codec. We show that the optimum M-band filter bank structure in the presence of a pdf-optimized vector quantizer can be designed by a suitable choice of equivalent scalar quantizer parameters. Specific design examples have been developed for two different classes of filter banks, paraunitary and biorthogonal, in the two-channel case. These theoretical results are confirmed by Monte Carlo simulation.
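
A hedged sketch of the Monte Carlo side of such a study: estimate the mean squared reconstruction error of a vector quantizer as a function of the codebook size N and codeword length k. The quantizer here is trained with plain k-means on a Gaussian source as an assumption; it is not the pdf-optimized subband design of the paper.

```python
# Monte Carlo estimate of the mean squared reconstruction error (MSE) of a
# vector quantizer trained with k-means; a generic sketch, not the paper's
# pdf-optimized subband codec design.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
k = 2          # codeword length (vector dimension)
N = 64         # number of codebook entries

train = rng.normal(size=(20000, k))      # training source (unit-variance Gaussian)
codebook = KMeans(n_clusters=N, n_init=5, random_state=0).fit(train).cluster_centers_

test = rng.normal(size=(20000, k))       # fresh samples for the Monte Carlo run
d = ((test[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)  # squared distances
mse = d.min(axis=1).mean() / k           # per-component reconstruction MSE
print(f"VQ with N={N}, k={k}: Monte Carlo MSE per component = {mse:.4f}")
```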

A study on the sequential algorithm for simultaneous estimation of TDOA and FDOA (TDOA/FDOA 동시 추정을 위한 순차적 알고리즘에 관한 연구)

  • 김창성;김중규
    • Journal of the Korean Institute of Telematics and Electronics S, v.35S no.7, pp.72-85, 1998
  • In this paper, we propose a new method that sequentially estimates the TDOA (time difference of arrival) and FDOA (frequency difference of arrival) used to extract the bearing and relative velocity of a target in passive radar or sonar arrays. The objective is to efficiently estimate the TDOA and FDOA between two sensor measurements corrupted by Gaussian noise sources that are correlated in an unknown way. The proposed method uses the one-dimensional slice of the third-order cumulants between the two sensor measurements, by which the effect of the correlated Gaussian measurement noise can be significantly suppressed when estimating the TDOA. Because the proposed sequential algorithm uses a one-dimensional complex ambiguity function based on the TDOA estimate from the first step, the amount of computation needed for accurate estimation of the FDOA can be dramatically reduced, especially when high frequency resolution is required. It is demonstrated that the proposed algorithm outperforms existing TDOA/FDOA estimation algorithms based on the ML (maximum likelihood) criterion, as well as those based on the complex ambiguity function of the third-order cumulant, in both MSE (mean squared error) and computational burden. Various numerical results on the detection probability, the MSE, and the floating-point computational burden are presented via Monte Carlo simulations for different types of noise, different data lengths, and different signal-to-noise ratios.
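
For orientation, the sketch below estimates a TDOA by locating the peak of the plain cross-correlation between two sensor signals. This is the second-order-statistics baseline with made-up signals and delay, not the third-order-cumulant slice or the sequential ambiguity-function step proposed in the paper.

```python
# Estimate the TDOA between two sensors from the peak of their cross-correlation.
# This is the plain second-order baseline, not the cumulant-based method.
import numpy as np

rng = np.random.default_rng(3)
fs = 1000.0                   # sampling rate in Hz (assumed)
n = 4096
true_delay = 17               # true delay in samples (assumed)

s = rng.normal(size=n)                                   # broadband source signal
x1 = s + 0.1 * rng.normal(size=n)                        # sensor 1
x2 = np.roll(s, true_delay) + 0.1 * rng.normal(size=n)   # sensor 2, delayed copy

xcorr = np.correlate(x2, x1, mode="full")                # lags -(n-1) .. +(n-1)
lags = np.arange(-(n - 1), n)
tdoa_samples = lags[np.argmax(xcorr)]
print(f"true delay: {true_delay} samples, estimated: {tdoa_samples} samples "
      f"({tdoa_samples / fs * 1e3:.1f} ms)")
```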


Optimal Bayesian MCMC based fire brigade non-suppression probability model considering uncertainty of parameters

  • Kim, Sunghyun;Lee, Sungsu
    • Nuclear Engineering and Technology, v.54 no.8, pp.2941-2959, 2022
  • The fire brigade non-suppression probability model is a major factor that should be considered in evaluating fire-induced risk through fire probabilistic risk assessment (PRA), and uncertainty is also a critical consideration in support of risk-informed, performance-based (RIPB) fire protection decision-making. This study developed an optimal integrated probabilistic fire brigade non-suppression model that accounts for parameter uncertainty, based on the Bayesian Markov chain Monte Carlo (MCMC) approach, for electrical fires, one of the most risk-significant contributors. The results show that a log-normal probability model with a location parameter (µ) of 2.063 and a scale parameter (σ) of 1.879 best fits the actual fire experience data. It gives optimal model adequacy, with a Bayesian information criterion (BIC) of -1601.766, a residual sum of squares (RSS) of 2.51E-04, and a mean squared error (MSE) of 2.08E-06. This optimal log-normal model shows better model adequacy than the exponential probability model suggested in the current fire PRA methodology, with a decrease of 17.3% in BIC, 85.3% in RSS, and 85.3% in MSE. The outcomes of this study are expected to contribute to improving and securing the realism of fire PRA in support of decision-making for RIPB fire protection programs.
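
The model-adequacy comparison described above can be sketched as follows: fit log-normal and exponential models by maximum likelihood and compare BIC together with the MSE between the fitted and empirical CDFs. The data below are synthetic and the numbers are illustrative only; they are not the study's fire experience data.

```python
# Fit log-normal vs. exponential models to synthetic duration data and compare
# BIC and the MSE between fitted and empirical CDFs. The data are made up for
# illustration; they are NOT the study's fire experience data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
times = rng.lognormal(mean=2.0, sigma=1.5, size=200)   # synthetic observations

x = np.sort(times)
ecdf = np.arange(1, len(x) + 1) / len(x)               # empirical CDF at the data

def bic(loglik: float, n_params: int, n_obs: int) -> float:
    return n_params * np.log(n_obs) - 2.0 * loglik

# Log-normal fit (location fixed at 0, as is usual for duration data)
shape, loc, scale = stats.lognorm.fit(times, floc=0)
ll_ln = np.sum(stats.lognorm.logpdf(times, shape, loc, scale))
mse_ln = np.mean((stats.lognorm.cdf(x, shape, loc, scale) - ecdf) ** 2)

# Exponential fit
eloc, escale = stats.expon.fit(times, floc=0)
ll_ex = np.sum(stats.expon.logpdf(times, eloc, escale))
mse_ex = np.mean((stats.expon.cdf(x, eloc, escale) - ecdf) ** 2)

print(f"log-normal : BIC={bic(ll_ln, 2, len(times)):9.2f}  CDF MSE={mse_ln:.2e}")
print(f"exponential: BIC={bic(ll_ex, 1, len(times)):9.2f}  CDF MSE={mse_ex:.2e}")
```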

A SEM-ANN Two-step Approach for Predicting Determinants of Cloud Service Use Intention (SEM-Artificial Neural Network 2단계 접근법에 의한 클라우드 스토리지 서비스 이용의도 영향요인에 관한 연구)

  • Guangbo Jiang;Sundong Kwon
    • Journal of Information Technology Applications and Management, v.30 no.6, pp.91-111, 2023
  • This study identifies the factors influencing the intention to use cloud services using the SEM-ANN two-step approach. In previous SEM-ANN studies, the SEM reported R2 and the ANN reported MSE (mean squared error), so their analysis performance could not be compared. In this study, R2 and MSE were calculated and presented for both the SEM and the ANN, analysis performance was compared, and feature importances were compared by sensitivity analysis. As a result, the default ANN model improved R2 by 2.87 compared to the PLS model, a small Cohen's effect size, while the optimized ANN model improved R2 by 7.86 compared to the PLS model, a medium Cohen's effect size. In the normalized feature importances, the order of importance was the same for PLS and ANN. The contribution of this study, which links structural equation modeling to artificial intelligence, is that it verified an improvement in the explanatory power of the research model while the order of importance of the independent variables was maintained.
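
The comparability point above rests on a simple identity: on the same data, R2 = 1 - MSE / Var(y), so a model reported with MSE (typically the ANN) and one reported with R2 (typically PLS/SEM) can be placed on the same scale. The arrays below are hypothetical.

```python
# Converting an MSE into an R^2 on the same data, so that a model reported with
# MSE and one reported with R^2 can be compared directly. Values are hypothetical.
import numpy as np

y_true = np.array([3.2, 4.1, 2.8, 5.0, 3.9, 4.4, 2.5, 3.7])
y_pred = np.array([3.0, 4.3, 2.9, 4.6, 4.0, 4.5, 2.8, 3.5])   # e.g. ANN output

mse = np.mean((y_true - y_pred) ** 2)
var = np.mean((y_true - y_true.mean()) ** 2)   # variance of y (n in the denominator)
r2 = 1.0 - mse / var
print(f"MSE = {mse:.4f}, Var(y) = {var:.4f}, R^2 = {r2:.4f}")
```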

A Novel Broadband Channel Estimation Technique Based on Dual-Module QGAN

  • Li Ting;Zhang Jinbiao
    • KSII Transactions on Internet and Information Systems (TIIS), v.18 no.5, pp.1369-1389, 2024
  • In the 6G era, the rapid increase in communication data volume places higher demands on both traditional channel estimation techniques and those based on deep learning; when processing large-scale data, their computational load and real-time performance often fail to meet practical requirements. To overcome this bottleneck, this paper introduces quantum computing techniques, exploring for the first time the application of Quantum Generative Adversarial Networks (QGAN) to broadband channel estimation. Although generative adversarial techniques have been applied to channel estimation, obtaining instantaneous channel information remains a significant challenge. To address instantaneous channel estimation, this paper proposes an innovative QGAN with a dual-module design in the generator. The adversarial loss function and the mean squared error (MSE) loss function are applied separately to update the parameters of the two modules, facilitating the learning of statistical channel information and the generation of instantaneous channel details. Experimental results on the Pennylane quantum computing simulation platform demonstrate the efficiency and accuracy of the proposed dual-module QGAN technique for channel estimation. This research opens a new direction for physical-layer techniques in wireless communication and offers expanded possibilities for the future development of wireless communication technologies.
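
A classical (non-quantum) sketch of the dual-module idea described above: one generator module is updated only with an adversarial loss, the other only with an MSE loss. The network sizes, tensors, and training schedule are assumptions for illustration; this is not the paper's QGAN on Pennylane.

```python
# Classical stand-in for the dual-module generator: module 1 is trained with an
# adversarial loss, module 2 with an MSE loss. Shapes and models are assumptions.
import torch
import torch.nn as nn

dim = 16                                              # channel vector length (assumed)
g_stat = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, dim))  # statistical module
g_inst = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, dim))  # instantaneous module
disc = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))

opt_stat = torch.optim.Adam(g_stat.parameters(), lr=1e-3)
opt_inst = torch.optim.Adam(g_inst.parameters(), lr=1e-3)
opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce, mse = nn.BCEWithLogitsLoss(), nn.MSELoss()

for step in range(200):
    h_true = torch.randn(32, dim)                     # stand-in "true" channel samples
    z = torch.randn(32, dim)                          # noise / pilot input

    # Discriminator update on real vs. generated samples
    fake = g_stat(z).detach()
    d_loss = bce(disc(h_true), torch.ones(32, 1)) + bce(disc(fake), torch.zeros(32, 1))
    opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()

    # Module 1: adversarial loss only (learns the channel statistics)
    g_loss = bce(disc(g_stat(z)), torch.ones(32, 1))
    opt_stat.zero_grad(); g_loss.backward(); opt_stat.step()

    # Module 2: MSE loss only (refines toward the instantaneous channel)
    est = g_inst(g_stat(z).detach())
    m_loss = mse(est, h_true)
    opt_inst.zero_grad(); m_loss.backward(); opt_inst.step()
```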

Research on Insurance Claim Prediction Using Ensemble Learning-Based Dynamic Weighted Allocation Model (앙상블 러닝 기반 동적 가중치 할당 모델을 통한 보험금 예측 인공지능 연구)

  • Jong-Seok Choi
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology, v.17 no.4, pp.221-228, 2024
  • Predicting insurance claims is a key task for insurance companies in managing risk and maintaining financial stability. Accurate insurance claim predictions enable insurers to set appropriate premiums, reduce unexpected losses, and improve the quality of customer service. This study aims to enhance the performance of insurance claim prediction models by applying ensemble learning techniques. The predictive performance of models such as Random Forest, Gradient Boosting Machine (GBM), XGBoost, Stacking, and the proposed Dynamic Weighted Ensemble (DWE) model was compared and analyzed. Model performance was evaluated using the Mean Absolute Error (MAE), the Mean Squared Error (MSE), and the Coefficient of Determination (R2). Experimental results showed that the DWE model outperformed the others on these evaluation metrics, achieving the best predictive performance by combining the predictions of Random Forest, XGBoost, LR, and LightGBM. This study demonstrates that ensemble learning techniques are effective in improving the accuracy of insurance claim predictions and suggests the potential of AI-based predictive models in the insurance industry.
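
One common way to realize a dynamically weighted ensemble is to weight each base regressor in inverse proportion to its validation MSE; the sketch below uses that rule with a synthetic dataset and a reduced model set as assumptions, and it is not necessarily the DWE scheme of the paper.

```python
# Weighted ensemble whose weights are inversely proportional to each base
# model's validation MSE. The weighting rule, data, and model set are
# illustrative assumptions, not necessarily the paper's DWE scheme.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=2000, n_features=20, noise=10.0, random_state=0)
X_tr, X_tmp, y_tr, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_te, y_val, y_te = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

models = [
    RandomForestRegressor(n_estimators=100, random_state=0),
    GradientBoostingRegressor(random_state=0),
    LinearRegression(),
]
for m in models:
    m.fit(X_tr, y_tr)

# Dynamic weights: inverse of validation MSE, normalized to sum to one
val_mse = np.array([mean_squared_error(y_val, m.predict(X_val)) for m in models])
weights = (1.0 / val_mse) / (1.0 / val_mse).sum()

pred = sum(w * m.predict(X_te) for w, m in zip(weights, models))
print("weights:", np.round(weights, 3))
print(f"ensemble test MSE={mean_squared_error(y_te, pred):.2f}, "
      f"R2={r2_score(y_te, pred):.3f}")
```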

PDF-Distance Minimizing Blind Algorithm based on Delta Functions for Compensation for Complex-Channel Phase Distortions (복소 채널의 위상 왜곡 보상을 위한 델타함수 기반의 확률분포거리 최소화 블라인드 알고리듬)

  • Kim, Nam-Yong;Kang, Sung-Jin
    • Journal of the Korea Academia-Industrial cooperation Society, v.11 no.12, pp.5036-5041, 2010
  • This paper introduces the complex version of a Euclidean distance minimization algorithm based on a set of delta functions. The algorithm is shown analytically to inherently compensate for the channel phase distortion caused by poor complex channels. The algorithm also uses a relatively small Gaussian kernel size compared to the conventional method that uses a randomly generated symbol set. This implies that the information potential between the desired symbols and the output is higher, so the algorithm more strongly forces the output to gather close to the desired symbols. Using a 16-QAM system and phase-distorting complex channel models, the mean squared error (MSE) performance and the concentration of the output symbol points are evaluated. Simulation results show that the algorithm effectively compensates for channel phase distortion in the constellation and achieves about a 5 dB improvement in steady-state MSE.
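
The quantity such algorithms minimize can be sketched directly: the squared (Euclidean) distance between a Gaussian-kernel estimate of the output PDF and a kernel-smoothed PDF placed on the desired constellation points (the delta functions), which has a closed form for Gaussian kernels. The constellation, kernel width, and channel model below are assumptions, and no adaptive equalizer update is shown.

```python
# Euclidean (integrated squared) distance between a Gaussian-kernel estimate of
# the output PDF and a kernel-smoothed PDF on the 16-QAM constellation points.
# Complex samples are treated as 2-D, so the distance has a closed form.
import numpy as np

def gauss2d(diff: np.ndarray, var: float) -> np.ndarray:
    """Isotropic 2-D Gaussian kernel for complex differences (variance per axis)."""
    return np.exp(-np.abs(diff) ** 2 / (2.0 * var)) / (2.0 * np.pi * var)

def pdf_distance(y: np.ndarray, d: np.ndarray, sigma: float) -> float:
    """Closed-form distance between kernel PDFs of outputs y and constellation d."""
    v = 2.0 * sigma ** 2                  # variance of the convolved kernel
    yy = gauss2d(y[:, None] - y[None, :], v).mean()
    yd = gauss2d(y[:, None] - d[None, :], v).mean()
    dd = gauss2d(d[:, None] - d[None, :], v).mean()
    return yy - 2.0 * yd + dd

levels = np.array([-3.0, -1.0, 1.0, 3.0])
qam16 = np.array([a + 1j * b for a in levels for b in levels])   # delta locations

rng = np.random.default_rng(5)
tx = rng.choice(qam16, size=500)
good = tx + 0.1 * (rng.normal(size=500) + 1j * rng.normal(size=500))
rotated = good * np.exp(1j * np.pi / 6)   # residual channel phase distortion

print(f"ED, well-equalized output: {pdf_distance(good, qam16, sigma=0.5):.5f}")
print(f"ED, phase-rotated output : {pdf_distance(rotated, qam16, sigma=0.5):.5f}")
# The rotated output has a larger distance, which is what a phase-compensating
# blind algorithm would drive down.
```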

Estimation of conditional mean residual life function with random censored data (임의중단자료에서의 조건부 평균잔여수명함수 추정)

  • Lee, Won-Kee;Song, Myung-Unn;Jeong, Seong-Hwa
    • Journal of the Korean Data and Information Science Society, v.22 no.1, pp.89-97, 2011
  • The aims of this study were to propose a method for estimating the mean residual life function (MRLF) from the conditional survival function using the Buckley and James (1979) pseudo random variables, and to assess the performance of the proposed method through simulation studies. The mean squared error (MSE) of the proposed method was smaller than those of the Cox proportional hazards model (PHM) and Beran's nonparametric method in the non-PHM case. Furthermore, in the PHM case, the MSEs of the proposed method were similar to those of the Cox PHM. Finally, to evaluate its appropriateness for practical use, we applied the proposed method to gastric cancer data. The data set consists of 1,192 patients with gastric cancer who underwent surgery at the Department of Surgery, K-University Hospital.
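
For reference, the mean residual life function is m(t) = E[T - t | T > t] = (integral of S(u) du from t to infinity) / S(t). The sketch below computes a restricted version of it from a Kaplan-Meier estimate of S on small made-up censored data; it is a baseline illustration, not the Buckley-James pseudo-value method of the paper.

```python
# Mean residual life m(t) = E[T - t | T > t] = (integral_t^inf S(u) du) / S(t),
# computed from a Kaplan-Meier estimate of S on made-up right-censored data.
# Baseline illustration only, not the Buckley-James pseudo-value method.
import numpy as np

# observation times and event indicator (1 = event, 0 = right-censored); made up
times = np.array([2.0, 3.0, 3.0, 5.0, 6.0, 8.0, 9.0, 12.0, 15.0, 16.0])
events = np.array([1, 1, 0, 1, 1, 0, 1, 1, 0, 1])

def kaplan_meier(t, e):
    """Return the unique event times and the KM survival estimate just after each."""
    order = np.argsort(t)
    t, e = t[order], e[order]
    uniq = np.unique(t[e == 1])
    surv, s = [], 1.0
    for u in uniq:
        at_risk = np.sum(t >= u)
        deaths = np.sum((t == u) & (e == 1))
        s *= 1.0 - deaths / at_risk
        surv.append(s)
    return uniq, np.array(surv)

def step_surv(x, grid, surv):
    """Step-function value of the KM estimate at time x."""
    return surv[grid <= x][-1] if np.any(grid <= x) else 1.0

def mean_residual_life(t0, grid, surv):
    """Restricted m(t0): trapezoidal integral of S from t0 to the last event time."""
    fine = np.linspace(t0, grid[-1], 500)
    s_fine = np.array([step_surv(x, grid, surv) for x in fine])
    integral = np.sum((s_fine[1:] + s_fine[:-1]) * 0.5 * np.diff(fine))
    return integral / step_surv(t0, grid, surv)

grid, surv = kaplan_meier(times, events)
for t0 in (0.0, 5.0, 10.0):
    print(f"restricted mean residual life at t={t0:4.1f}: "
          f"{mean_residual_life(t0, grid, surv):.2f}")
```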