Title/Summary/Keyword: optimal probability density

Search results: 77

SHM-based probabilistic representation of wind properties: statistical analysis and bivariate modeling

  • Ye, X.W.; Yuan, L.; Xi, P.S.; Liu, H.
    • Smart Structures and Systems / v.21 no.5 / pp.591-600 / 2018
  • The probabilistic characterization of wind field characteristics is a significant task for the fatigue reliability assessment of long-span railway bridges in wind-prone regions. In consideration of the effect of wind direction, the stochastic properties of the wind field should be represented by a bivariate statistical model of wind speed and direction. This paper presents the construction of a bivariate model of wind speed and direction at the site of a railway arch bridge by use of long-term structural health monitoring (SHM) data. The wind characteristics are derived by analyzing the real-time wind monitoring data, such as the mean wind speed and direction, turbulence intensity, turbulence integral scale, and power spectral density. A sequential quadratic programming (SQP) algorithm-based finite mixture modeling method is proposed to formulate the joint distribution model of wind speed and direction. For the probability density function (PDF) of wind speed, a two-parameter Weibull distribution function is utilized, and a von Mises distribution function is applied to represent the PDF of wind direction. The SQP algorithm with multiple start points is used to estimate the parameters of the bivariate model, namely the Weibull-von Mises mixture model. One year of wind monitoring data is selected to validate the effectiveness of the proposed modeling method. The optimal model is jointly evaluated by the Bayesian information criterion (BIC) and the coefficient of determination, $R^2$. The obtained results indicate that the proposed SQP algorithm-based finite mixture modeling method can effectively establish the bivariate model of wind speed and direction, which will facilitate the wind-induced fatigue reliability assessment of long-span bridges.
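
As a rough illustration of the kind of model described above, the sketch below evaluates a Weibull-von Mises mixture density and fits its parameters by maximum likelihood with SciPy's SLSQP solver (an SQP-type method), restarted from multiple initial points. The two-component mixture size, the synthetic data, and the starting values are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

N_COMP = 2  # assumed number of mixture components (illustrative)

def mixture_pdf(v, theta, params):
    """Joint pdf of wind speed v (Weibull) and direction theta (von Mises).
    params per component: weight, Weibull shape, Weibull scale, mu, kappa."""
    p = np.asarray(params).reshape(N_COMP, 5)
    w = p[:, 0] / p[:, 0].sum()  # normalize mixture weights
    pdf = 0.0
    for wk, shape, scale, mu, kappa in zip(w, p[:, 1], p[:, 2], p[:, 3], p[:, 4]):
        pdf += (wk * stats.weibull_min.pdf(v, shape, scale=scale)
                   * stats.vonmises.pdf(theta, kappa, loc=mu))
    return pdf

def neg_log_lik(params, v, theta):
    return -np.sum(np.log(mixture_pdf(v, theta, params) + 1e-300))

# Synthetic stand-in for one year of SHM wind records.
rng = np.random.default_rng(0)
v_obs = stats.weibull_min.rvs(2.0, scale=6.0, size=1000, random_state=rng)
th_obs = stats.vonmises.rvs(2.5, loc=0.8, size=1000, random_state=rng)

bounds = [(1e-3, None), (1e-3, None), (1e-3, None), (-np.pi, np.pi), (1e-3, None)] * N_COMP
best = None
for _ in range(5):  # multi-start SQP: keep the best local optimum
    x0 = np.array([0.5, 2.0, 5.0, 0.0, 1.0] * N_COMP) * rng.uniform(0.7, 1.3, 5 * N_COMP)
    res = minimize(neg_log_lik, x0, args=(v_obs, th_obs), method="SLSQP", bounds=bounds)
    if best is None or res.fun < best.fun:
        best = res
print(best.x.round(3))
```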

Determination of the Optimal Contract Amount of the Hydropower Energy Considering the Reliabilities of Reservoir Inflows (저수지(貯水池) 유입량(流入量)의 신뢰도(信賴度)를 고려한 최적(最適) 계약전력량(契約電力量)의 결정(決定))

  • Kwon, Oh Hun; Yoo, Ju Hwan
    • KSCE Journal of Civil and Environmental Engineering Research / v.13 no.2 / pp.141-149 / 1993
  • The output of hydropower generation is random because of the stochastic nature of reservoir inflows. A rational basis is therefore needed for determining the amount of energy to commit in a supply contract. This study presents a methodology for determining a reliable amount of energy supply, considering energy sales income together with the penalties that depend on inflow reliability. The objective function consists of the returns from energy sales and a risk-loss function that reflects the statistically relevant risks. A range for the coefficient of the risk-loss function was determined by sensitivity analysis. The risk-loss here means the penalty that the energy supplier must pay when the level of energy supply falls short of the contracted amount. The reliability of reservoir inflow is defined as the exceedance probability of the inflow. The log-normal distribution was accepted as the probability density function of monthly inflows at the 5% significance level. Golden-section search was applied to identify the optimal reliability, and Incremental Dynamic Programming was used to maximize hydropower generation in reservoir operation. The algorithm was then applied to the Daechung multi-purpose reservoir and hydropower plant system to verify its usefulness.
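
For readers unfamiliar with golden-section search, the sketch below applies it to a toy net-return objective (sales revenue minus an expected penalty) over the inflow reliability. The objective's functional form and constants are invented for illustration; the paper's actual returns and risk-loss terms come from its reservoir model.

```python
import math

def net_return(reliability):
    """Toy objective: revenue grows with the contracted amount (lower
    reliability), but the expected penalty grows faster. Illustrative only."""
    shortfall = 1.0 - reliability
    return 100.0 * shortfall - 200.0 * shortfall ** 2

def golden_section_max(f, a, b, tol=1e-6):
    """Golden-section search for the maximizer of a unimodal f on [a, b]."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0  # 1/phi ~= 0.618
    while b - a > tol:
        c = b - invphi * (b - a)
        d = a + invphi * (b - a)
        if f(c) > f(d):
            b = d  # maximizer lies in [a, d]
        else:
            a = c  # maximizer lies in [c, b]
    return 0.5 * (a + b)

print(golden_section_max(net_return, 0.5, 0.99))  # -> ~0.75 for this toy objective
```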


Polynomially Adjusted Normal Approximation to the Null Distribution of Ansari-Bradley Statistic

  • Ha, Hyung-Tae; Yang, Wan-Youn
    • The Korean Journal of Applied Statistics / v.24 no.6 / pp.1161-1168 / 2011
  • Approximating the distribution functions of nonparametric test statistics is a significant step in statistical inference. The rank-sum test for dispersion proposed by Ansari and Bradley (1960), widely used to distinguish the variation between two populations, is one of the most popular nonparametric statistics. In this paper, statistical tables for the distribution of the nonparametric Ansari-Bradley statistic are produced using a polynomially adjusted normal approximation, a semiparametric density approximation technique. Polynomial adjustment can significantly improve the precision of the normal approximation. The normal-polynomial density approximation for the Ansari-Bradley statistic under finite sample sizes is used to provide statistical tables for various combinations of sample sizes. To find the optimal degree of polynomial adjustment, the sum of squared probability mass function (PMF) differences between the exact distribution and its approximant is measured. It was observed that using only two more moments of the Ansari-Bradley statistic, in addition to the first two moments that the normal approximation already uses, provides more accurate approximations for various combinations of parameters. For instance, the fourth-degree polynomially adjusted normal approximant is about 117 times more accurate than the normal approximation with respect to the sum of squared PMF differences.
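
The core of such an approximation can be sketched in a few lines: a standard-normal base density is multiplied by a polynomial whose coefficients are chosen so that the adjusted density reproduces a given set of raw moments. The demo moments below are illustrative placeholders, not the exact moments of the Ansari-Bradley statistic.

```python
import numpy as np
from math import factorial

def normal_moment(k):
    """k-th raw moment of the standard normal: 0 for odd k, (k-1)!! for even k."""
    return 0.0 if k % 2 else factorial(k) / (2 ** (k // 2) * factorial(k // 2))

def poly_adjust_coeffs(target_moments):
    """Solve sum_j xi_j * m_{i+j} = mu_i for the adjustment coefficients,
    where m_k are base-normal moments and mu_i the target raw moments
    (mu_0 = 1 enforces normalization)."""
    d = len(target_moments)
    mu = np.array([1.0] + list(target_moments))
    M = np.array([[normal_moment(i + j) for j in range(d + 1)] for i in range(d + 1)])
    return np.linalg.solve(M, mu)

def adjusted_pdf(x, coeffs):
    """Standard normal density times the moment-matched polynomial."""
    phi = np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)
    return phi * np.polyval(coeffs[::-1], x)

# Fourth-degree adjustment for a symmetric, slightly platykurtic statistic
# (standardized moments 0, 1, 0, 2.8 -- placeholder values).
xi = poly_adjust_coeffs([0.0, 1.0, 0.0, 2.8])
print(adjusted_pdf(np.linspace(-2, 2, 5), xi).round(4))
```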

On the Design of a WiFi Direct 802.11ac WLAN under a TGn MIMO Multipath Fading Channel

  • Khan, Gul Zameen; Gonzalez, Ruben; Park, Eun-Chan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.3 / pp.1373-1392 / 2017
  • WiFi Direct (WD) is a state-of-the-art technology for Device-to-Device (D2D) communication in 802.11 networks. The performance of a WD system can be significantly affected by key factors such as the type of application, the MAC- and PHY-layer parameter settings, and the surrounding environment. It is therefore important to develop a system model that takes these factors into account. In this paper, we focus on investigating the PHY-layer design parameters that maximize the efficiency of the WD 802.11 system. For this purpose, a basic theoretical model is formulated for a WD network under a 2x2 Multiple-Input Multiple-Output (MIMO) TGn channel B model. Design-level parameters such as input symbol rate and antenna spacing, as well as the effects of the environment, are thoroughly examined in terms of path gain, spectral density, outage probability, and Packet Error Rate (PER). Thereafter, a novel adaptive algorithm is proposed to choose optimal parameters in accordance with the Quality of Experience (QoE) of a targeted application. The simulation results show that the proposed method outperforms the standard method, achieving optimal performance in an adaptive manner.
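
As a simplified taste of the outage analysis, the snippet below estimates the outage probability of a 2x2 MIMO link by Monte Carlo over an i.i.d. Rayleigh flat-fading channel. This is a stand-in for the TGn channel B model (which adds tap-delay and antenna-correlation structure), and the SNR and rate values are arbitrary.

```python
import numpy as np

def mimo_outage_prob(snr_db, rate_bps_hz, nt=2, nr=2, trials=20000, seed=0):
    """Fraction of channel draws whose capacity falls below the target rate."""
    rng = np.random.default_rng(seed)
    snr = 10.0 ** (snr_db / 10.0)
    outages = 0
    for _ in range(trials):
        # i.i.d. Rayleigh fading: complex Gaussian entries, unit average power
        h = (rng.normal(size=(nr, nt)) + 1j * rng.normal(size=(nr, nt))) / np.sqrt(2)
        # open-loop MIMO capacity: log2 det(I + (SNR/nt) H H^H)
        cap = np.log2(np.linalg.det(np.eye(nr) + (snr / nt) * h @ h.conj().T).real)
        outages += cap < rate_bps_hz
    return outages / trials

for snr_db in (5, 10, 15, 20):
    print(snr_db, "dB ->", mimo_outage_prob(snr_db, rate_bps_hz=4.0))
```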

Maritime radar display unit based on PC for safe ship navigation

  • Bae, Jin-Ho; Lee, Chong-Hyun; Hwang, Chang-Ku
    • International Journal of Ocean System Engineering / v.1 no.1 / pp.52-59 / 2011
  • A prototype radar display unit was implemented using inexpensive off-the-shelf components, including a nonlinear estimation algorithm for target tracking in a clutter environment. Two custom-designed boards, an analog signal processing board and a DSP board, can be plugged into an expansion slot of a personal computer (PC) to form a maritime radar display unit. Our system provided all the functionality specified in the International Maritime Organization (IMO) resolution A422(XI). The analog signal processing board was used for A/D conversion as well as rain and sea clutter suppression. The main functions of the DSP board were scan conversion and video overlay operations. A host PC was used to run the tracking algorithm for targets in clutter, using a discrete-time Bayes-optimal (nonlinear, non-Gaussian) estimation method, and the graphical user interface (GUI) software for the Automatic Radar Plotting Aid (ARPA). The proposed tracking method recursively found the entire probability density function of the target position and velocity by recasting the required computations as linear convolution operations.
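
The convolution-based recursion can be illustrated with a one-dimensional grid-based Bayes filter: the Chapman-Kolmogorov prediction step becomes a linear convolution of the posterior with the process-noise pdf, and the measurement update is a pointwise multiplication by the likelihood. The grid, noise levels, and random-walk motion model below are illustrative simplifications (the actual tracker also carries velocity).

```python
import numpy as np
from scipy.signal import fftconvolve

def bayes_filter_step(posterior, process_noise_pdf, likelihood):
    """One predict-update cycle of a grid-based Bayes filter.
    Prediction under additive random-walk dynamics is a convolution."""
    predicted = fftconvolve(posterior, process_noise_pdf, mode="same")
    updated = predicted * likelihood
    return updated / updated.sum()

grid = np.linspace(-10.0, 10.0, 401)               # 1-D position grid
posterior = np.exp(-grid ** 2 / 2.0)               # initial belief
posterior /= posterior.sum()
noise = np.exp(-grid ** 2 / (2 * 0.5 ** 2))        # process-noise pdf on the grid
noise /= noise.sum()
z = 1.3                                            # one measurement
likelihood = np.exp(-(grid - z) ** 2 / (2 * 1.0 ** 2))
posterior = bayes_filter_step(posterior, noise, likelihood)
print(grid[np.argmax(posterior)])                  # MAP position estimate
```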

Lossy Source Compression of Non-Uniform Binary Source via Reinforced Belief Propagation over GQ-LDGM Codes

  • Zheng, Jianping; Bai, Baoming; Li, Ying
    • ETRI Journal / v.32 no.6 / pp.972-975 / 2010
  • In this letter, we consider the lossy coding of a non-uniform binary source based on GF(q)-quantized low-density generator matrix (LDGM) codes with check degree $d_c$=2. By quantizing the GF(q) LDGM codeword, a non-uniform binary codeword can be obtained, which is suitable for direct quantization of the non-uniform binary source. Encoding is performed by reinforced belief propagation, a variant of belief propagation. Simulation results show that the performance of our method is quite close to the theoretical rate-distortion bound. For example, when the GF(16) LDGM code with a rate of 0.4 and block length of 1,500 is used to compress a non-uniform binary source with a probability of 1 equal to 0.23, the distortion is 0.091, which is close to the optimal theoretical value of 0.074.
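
The quoted theoretical value follows from the rate-distortion function of a Bernoulli(p) source under Hamming distortion, R(D) = H(p) - H(D) for 0 <= D <= min(p, 1-p). The short script below inverts this relation by bisection for the letter's example (p = 0.23, R = 0.4) and lands within a thousandth of the 0.074 quoted above.

```python
import math

def h2(x):
    """Binary entropy in bits."""
    return 0.0 if x <= 0.0 or x >= 1.0 else -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def distortion_bound(p, rate):
    """Solve H(D) = H(p) - rate for D by bisection on [0, min(p, 1-p)]."""
    target = h2(p) - rate
    lo, hi = 0.0, min(p, 1.0 - p)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if h2(mid) < target:
            lo = mid  # entropy still too small: the bound lies higher
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(distortion_bound(0.23, 0.4), 3))  # ~0.073, consistent with the ~0.074 quoted
```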

Determination of Noise Threshold from Signal Histogram in the Wavelet Domain

  • Kim, Eunseo; Lee, Kamin; Yang, Sejung; Lee, Byung-Uk
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.2 / pp.156-160 / 2014
  • Thresholding in the frequency domain is a simple and effective noise reduction technique, and determination of the threshold is critical to the image quality. The optimal threshold minimizing the mean square error (MSE) is chosen adaptively in the wavelet domain; we utilize an expression for the MSE of the soft-thresholded signal together with the histograms of the wavelet coefficients of the original and noisy images. The histogram of the original signal is estimated through deconvolution: assuming the original signal and the noise are statistically independent, the pdf of the noisy coefficients is the convolution of the original-signal pdf and the noise pdf. The proposed method is quite general in that it does not assume any prior on the source pdf.
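
The snippet below shows the threshold-selection idea in its simplest oracle form: sweep candidate thresholds and keep the one minimizing the empirical MSE of the soft-thresholded coefficients. It cheats by using the clean coefficients directly, whereas the paper estimates the clean-coefficient histogram by deconvolving the noise pdf out of the noisy one; the Laplacian coefficient model and noise level are assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    """Soft-thresholding operator: shrink magnitudes toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def best_threshold(clean, noisy, t_grid):
    """Oracle MSE-optimal threshold over a grid of candidates."""
    mses = [np.mean((soft_threshold(noisy, t) - clean) ** 2) for t in t_grid]
    return t_grid[int(np.argmin(mses))], min(mses)

rng = np.random.default_rng(1)
clean = rng.laplace(scale=1.0, size=100_000)   # wavelet coefficients are near-Laplacian
noisy = clean + rng.normal(scale=0.5, size=clean.size)
t_star, mse = best_threshold(clean, noisy, np.linspace(0.0, 2.0, 81))
print(f"optimal threshold ~ {t_star:.3f}, MSE {mse:.4f}")
```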

EEG data compression using subband coding techniques (대역 분할 부호화 기법을 이용한 EEG 데이타 압축)

  • Lee, Jong-Ug; Huh, Jae-Man; Kim, Taek-Soo; Park, Sang-Hui
    • Proceedings of the KIEE Conference / 1993.11a / pp.338-341 / 1993
  • An EEG (electroencephalogram) compression scheme based on subband coding techniques is presented in this paper. Considering the frequency characteristics of the EEG, the raw signal was decomposed into different frequency bands. After decomposition, optimal bit allocation was performed by adapting to the standard deviation of each frequency band, and the decomposed signals were quantized using a pdf (probability density function)-optimized nonuniform quantizer. Based on this coding scheme, coding results for various multichannel EEG signals are reported in terms of compression ratio and SNR (signal-to-noise ratio).
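
A pdf-optimized nonuniform quantizer of the kind mentioned above is classically obtained with the Lloyd-Max algorithm, which alternates nearest-level partitioning with centroid updates until the reconstruction levels settle. The Laplacian stand-in for a subband signal and the level count below are assumptions for illustration.

```python
import numpy as np

def lloyd_max(samples, n_levels=8, iters=100):
    """Lloyd's algorithm: minimum-MSE (pdf-optimized) nonuniform quantizer."""
    levels = np.quantile(samples, np.linspace(0.05, 0.95, n_levels))  # init at quantiles
    for _ in range(iters):
        bounds = 0.5 * (levels[:-1] + levels[1:])  # midpoints = decision boundaries
        idx = np.digitize(samples, bounds)         # assign each sample to a cell
        for k in range(n_levels):
            cell = samples[idx == k]
            if cell.size:                          # centroid update per occupied cell
                levels[k] = cell.mean()
        levels.sort()
    return levels

rng = np.random.default_rng(0)
subband = rng.laplace(scale=2.0, size=50_000)      # hypothetical EEG subband samples
print(np.round(lloyd_max(subband), 2))
```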


Gaussian Density Selection Method of CDHMM in Speaker Recognition (화자인식에서 연속밀도 은닉마코프모델의 혼합밀도 결정방법)

  • 서창우; 이주헌; 임재열; 이기용
    • The Journal of the Acoustical Society of Korea / v.22 no.8 / pp.711-716 / 2003
  • This paper proposes a method to select the optimal number of mixtures in each state of a continuous-density HMM (hidden Markov model). Previously, researchers used the same number of mixture components in every state of the HMM regardless of the spectral characteristics of the speaker. To model each speaker as accurately as possible, we propose using a different number of mixture components for each state. The selection of mixture components considers the per-state probability value of each mixture, which strongly affects the parameter estimation of the continuous-density HMM. We also use PCA (principal component analysis) to reduce correlation and preserve the system's stability when the number of mixture components is reduced. In experiments, the proposed method used on average 10% fewer mixture components than the conventional HMM. When only the selection of mixture components was applied, the proposed method achieved similar performance. When principal component analysis was applied, 16th-order feature vectors showed an average performance decrease of 0.35%, and 25th-order feature vectors an average improvement of 0.65%.
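
One simple way to realize a per-state, data-driven component count is to overfit a mixture and prune low-weight components, as sketched below with scikit-learn. The weight-floor criterion, the component cap, and the synthetic observations are assumptions; the paper's own criterion is based on per-state mixture probability values.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def prune_state_mixture(obs, max_components=8, weight_floor=0.05):
    """Fit a state's Gaussian mixture, then drop components whose weight
    falls below weight_floor and renormalize the survivors."""
    gmm = GaussianMixture(n_components=max_components,
                          covariance_type="diag", random_state=0).fit(obs)
    keep = gmm.weights_ >= weight_floor
    weights = gmm.weights_[keep] / gmm.weights_[keep].sum()
    return weights, gmm.means_[keep], gmm.covariances_[keep]

rng = np.random.default_rng(0)
obs = np.vstack([rng.normal(0.0, 1.0, (400, 12)),   # synthetic per-state features
                 rng.normal(3.0, 1.0, (100, 12))])
w, mu, cov = prune_state_mixture(obs)
print(len(w), "components kept; weights:", np.round(w, 2))
```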

Improving Generalization Performance of Neural Networks using Natural Pruning and Bayesian Selection (자연 프루닝과 베이시안 선택에 의한 신경회로망 일반화 성능 향상)

  • 이현진; 박혜영; 이일병
    • Journal of KIISE: Software and Applications / v.30 no.3_4 / pp.326-338 / 2003
  • The objective of neural network design and model selection is to construct an optimal network with good generalization performance. However, training data include noise, and the number of training samples is not sufficient, which results in a difference between the true probability distribution and the empirical one. This difference causes the learning parameters to overfit the training data and deviate from the true distribution of the data, a phenomenon called overfitting. An overfitted neural network approximates the training data well but gives bad predictions on untrained new data. As the complexity of the neural network increases, the overfitting phenomenon becomes more severe. In this paper, taking a statistical viewpoint, we propose an integrative process for neural network design and model selection in order to improve generalization performance. First, using natural gradient learning with adaptive regularization, we obtain, with fast convergence, optimal parameters that are not overfitted to the training data. By applying natural pruning to the obtained optimal parameters, we generate several candidate network models of different sizes. Finally, we select an optimal model among the candidate models based on the Bayesian Information Criterion. Through computer simulations on benchmark problems, we confirm the generalization and structure optimization performance of the proposed integrative process of learning and model selection.
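
The final selection step reduces to a few lines: score each pruned candidate with the Bayesian Information Criterion, BIC = -2 log L + k ln n, and keep the minimizer. The candidate names, log-likelihoods, and parameter counts below are invented for illustration.

```python
import math

def bic(log_likelihood, n_params, n_samples):
    """Bayesian Information Criterion (lower is better)."""
    return -2.0 * log_likelihood + n_params * math.log(n_samples)

# Hypothetical pruned-network candidates: (name, train log-likelihood, #params).
candidates = [("full", -1210.0, 500),
              ("pruned-a", -1225.0, 320),
              ("pruned-b", -1260.0, 180)]
n_samples = 1000
scores = {name: bic(ll, k, n_samples) for name, ll, k in candidates}
best = min(scores, key=scores.get)
print(scores, "->", best)
```

In this toy example the parameter penalty dominates, so the smallest candidate wins despite the worst training fit; with a different likelihood/size trade-off the selection flips, which is exactly the balance BIC is meant to strike.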