• Title/Summary/Keyword: Likelihood measure

Code acquisition and demodulation performance of the RAKE receiver in the DS/CDMA mobile communication systems (DS/CDMA 이동통신 시스템에서 RAKE 수신기의 코드동기 및 복조 성능분석)

  • 이한섭
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.22 no.1
    • /
    • pp.104-115
    • /
    • 1997
  • This paper investigates the PN code acquisition algorithm and the demodulation performance of the RAKE receiver in DS/CDMA (direct sequence code division multiple access) systems under a multipath fading channel with multiple users. To speed up the acquisition process, a PN matched filter is applied, and a postdetection integration technique combined with a dynamic threshold setting method is proposed. A maximum-likelihood algorithm in serial fashion finds the PN code delay estimates for the RAKE branches using a sliding window in the multipath fading channel. The correct acquisition probability and the mean acquisition time are used as performance measures of the system, evaluated by the Monte Carlo method. The performance of the RAKE receiver after code acquisition is achieved in the CDMA system is also investigated for three major combining techniques.
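
As a rough illustration of serial maximum-likelihood PN code acquisition with a sliding window, the Python sketch below correlates the received chips with every offset of a local PN sequence and keeps the strongest peaks as RAKE finger delays. The sequence length, noise level, and function names are illustrative assumptions, not details taken from the paper (which uses a PN matched filter with postdetection integration and a dynamic threshold).

```python
# Hypothetical sketch of serial maximum-likelihood PN code acquisition:
# correlate the received chips with the local PN sequence at every offset
# inside a sliding window and keep the largest peaks as RAKE finger delays.
import numpy as np

def pn_acquire(received, pn_code, window, num_fingers=3):
    """Return the `num_fingers` code offsets with the largest correlation energy."""
    metrics = []
    for tau in range(window):
        shifted = np.roll(pn_code, tau)
        # non-coherent decision metric: squared correlation magnitude
        metrics.append(abs(np.dot(received[:len(pn_code)], shifted)) ** 2)
    metrics = np.asarray(metrics)
    # ML-style decision: keep the offsets that maximize the metric
    return np.argsort(metrics)[::-1][:num_fingers], metrics

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pn = rng.choice([-1.0, 1.0], size=127)            # stand-in PN sequence
    true_delays = [5, 23]                             # two multipath arrivals
    rx = sum(np.roll(pn, d) for d in true_delays) + 0.5 * rng.standard_normal(127)
    delays, _ = pn_acquire(rx, pn, window=127, num_fingers=2)
    print("estimated finger delays:", sorted(delays))
```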

Comparison Density Representation of Traditional Test Statistics for the Equality of Two Population Proportions

  • Jangsun Baek
    • Communications for Statistical Applications and Methods
    • /
    • v.2 no.1
    • /
    • pp.112-121
    • /
    • 1995
  • Let $p_1$ and $p_2$ be the proportions of two populations. To test the hypothesis $H_0 : p_1 = p_2$, we usually use the $\chi^2$ statistic, the large-sample binomial statistic $Z$, and the generalized likelihood ratio statistic $-2\log\lambda$, each developed from a different mathematical rationale. Since testing the above hypothesis is equivalent to testing whether the two populations follow a common Bernoulli distribution, one may also test it by comparing 1 with the ratio of each density estimate to the hypothesized common density estimate, called the comparison density, which was devised by Parzen (1988). We show that the traditional test statistics above are actually estimating the measure of distance between the true densities and the common density under $H_0$ by representing them with the comparison density.
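
For reference, a minimal sketch of the three classical statistics mentioned above (Pearson's chi-square, the large-sample $Z$, and the generalized likelihood ratio), assuming $x_1$ successes in $n_1$ trials and $x_2$ in $n_2$; the comparison-density representation itself is not reproduced here, and the example counts are invented.

```python
# Classical statistics for H0: p1 = p2 given (x1, n1) and (x2, n2).
import math

def two_proportion_stats(x1, n1, x2, n2):
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)

    # large-sample binomial Z statistic (pooled variance)
    z = (p1 - p2) / math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))

    # Pearson chi-square on the 2x2 table; equals Z**2
    chi2 = z ** 2

    # generalized likelihood ratio statistic -2 log(lambda)
    def loglik(x, n, p):
        # guard log(0) when a count is 0
        return (x * math.log(p) if x else 0.0) + ((n - x) * math.log(1 - p) if n - x else 0.0)

    g2 = 2 * (loglik(x1, n1, p1) + loglik(x2, n2, p2)
              - loglik(x1, n1, p_pool) - loglik(x2, n2, p_pool))
    return z, chi2, g2

print(two_proportion_stats(40, 100, 55, 100))
```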

Posterior density estimation of Kappa via Gibbs sampler in the beta-binomial model (베타-이항 분포에서 Gibbs sampler를 이용한 평가 일치도의 사후 분포 추정)

  • 엄종석;최일수;안윤기
    • The Korean Journal of Applied Statistics
    • /
    • v.7 no.2
    • /
    • pp.9-19
    • /
    • 1994
  • The beta-binomial model, reparametrized in terms of the mean probability $\mu$ of a positive diagnosis and the measure of agreement $\kappa$, is widely used in psychology. When $\mu$ is close to 0, inference about $\kappa$ becomes difficult because the likelihood function becomes constant. We consider a Bayesian approach in this case. To apply the Bayesian analysis, the Gibbs sampler is used to overcome difficulties in integration. Marginal posterior density functions are estimated and Bayesian estimates are derived using the Gibbs sampler, and the results are compared with those obtained by numerical integration.
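
The paper's Gibbs sampler is not reproduced here; as a hedged substitute, the sketch below approximates the posterior of $(\mu, \kappa)$ on a grid under flat priors, using the beta-binomial likelihood with the usual reparametrization $a = \mu(1-\kappa)/\kappa$, $b = (1-\mu)(1-\kappa)/\kappa$. The data and grid resolution are made up for illustration.

```python
# Grid approximation (not the paper's Gibbs sampler) of the posterior of
# (mu, kappa) under flat priors, using the beta-binomial likelihood.
import numpy as np
from scipy.special import betaln, comb

def betabinom_loglik(y, n, mu, kappa):
    a = mu * (1 - kappa) / kappa
    b = (1 - mu) * (1 - kappa) / kappa
    return sum(np.log(comb(ni, yi)) + betaln(yi + a, ni - yi + b) - betaln(a, b)
               for yi, ni in zip(y, n))

y = [2, 0, 1, 3, 0, 2]          # positive ratings per subject (made up)
n = [5, 5, 5, 5, 5, 5]          # raters per subject
mus = np.linspace(0.01, 0.99, 99)
kappas = np.linspace(0.01, 0.99, 99)
post = np.array([[np.exp(betabinom_loglik(y, n, m, k)) for k in kappas] for m in mus])
post /= post.sum()
kappa_marginal = post.sum(axis=0)            # marginal posterior over the kappa grid
print("posterior mean of kappa:", float(np.dot(kappas, kappa_marginal)))
```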

Empirical Comparisons of Disparity Measures for Partial Association Models in Three Dimensional Contingency Tables

  • Jeong, D.B.;Hong, C.S.;Yoon, S.H.
    • Communications for Statistical Applications and Methods
    • /
    • v.10 no.1
    • /
    • pp.135-144
    • /
    • 2003
  • This work is concerned with a comparison of recently developed disparity measures for the partial association model in three-dimensional categorical data. Data are generated by simulating each term in the log-linear model equation based on the partial association model, which is the method proposed in this paper. These alternative Monte Carlo methods are explored to study the behavior of disparity measures such as the power divergence statistic $I(\lambda)$, the Pearson chi-square statistic $X^2$, the likelihood ratio statistic $G^2$, the blended weight chi-square statistic BWCS($\lambda$), the blended weight Hellinger distance statistic BWHD($\lambda$), and the negative exponential disparity statistic NED($\lambda$) for moderate sample sizes. We find that the power divergence statistic $I(2/3)$ and the blended weight Hellinger distance family BWHD(1/9) are the best tests with respect to size and power.
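
As a small illustration of one of the disparity measures listed above, the sketch below evaluates the Cressie-Read power divergence statistic $I(\lambda)$ for observed versus expected cell counts; $\lambda = 1$ recovers Pearson's $X^2$ and $\lambda \to 0$ recovers $G^2$. The table and expected counts are invented, and the blended-weight families from the paper are not reproduced.

```python
# Cressie-Read power-divergence statistic I(lambda) for a fitted table.
import numpy as np

def power_divergence(observed, expected, lam):
    o = np.asarray(observed, dtype=float)
    e = np.asarray(expected, dtype=float)
    if abs(lam) < 1e-12:                      # limiting case: G^2
        return 2.0 * np.sum(o * np.log(o / e))
    return 2.0 / (lam * (lam + 1.0)) * np.sum(o * ((o / e) ** lam - 1.0))

obs = [18, 22, 30, 30]                        # illustrative 2x2 table, flattened
exp = [25, 25, 25, 25]                        # expected counts under the fitted model
for lam in (1.0, 2 / 3, 0.0):
    print(f"I({lam:.2f}) = {power_divergence(obs, exp, lam):.4f}")
```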

Using Bayesian Estimation Technique to Analyze a Dichotomous Choice Contingent Valuation Data (베이지안 추정법을 이용한 양분선택형 조건부 가치측정모형의 분석)

  • Yoo, Seung-Hoon
    • Environmental and Resource Economics Review
    • /
    • v.11 no.1
    • /
    • pp.99-119
    • /
    • 2002
  • As an alternative to the classical maximum likelihood approach for analyzing dichotomous choice contingent valuation (DCCV) data, this paper develops a Bayesian approach. By using the ideas of Gibbs sampling and data augmentation, the approach enables one to perform exact inference for DCCV models. A by-product of the approach is a welfare measure, such as the mean willingness to pay, and its confidence interval, which can be used for policy analysis. The efficacy of the approach relative to the classical approach is discussed in the context of empirical DCCV studies. It is concluded that there appears to be considerable scope for the use of Bayesian analysis in dealing with DCCV data.
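
The paper's exact DCCV specification is not available here; as an assumption-laden sketch of Gibbs sampling with data augmentation for a probit-style DCCV model Pr(yes) = Phi(b0 + b1 * bid), the code below draws latent utilities and coefficients in turn and reports the implied mean willingness to pay, -b0/b1. The priors, bid design, and sample size are all illustrative.

```python
# Albert-Chib style data-augmentation Gibbs sampler for a probit DCCV model
# (a generic sketch, not the paper's exact model). Mean WTP = -b0 / b1.
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(1)
n = 400
bids = rng.choice([5.0, 10.0, 20.0, 40.0], size=n)
true_b = np.array([2.0, -0.1])                       # implies mean WTP = 20
X = np.column_stack([np.ones(n), bids])
y = (X @ true_b + rng.standard_normal(n) > 0).astype(int)

beta = np.zeros(2)
XtX_inv = np.linalg.inv(X.T @ X)
draws = []
for it in range(2000):
    # 1) draw latent utilities truncated according to the observed responses
    mean = X @ beta
    lo = np.where(y == 1, -mean, -np.inf)
    hi = np.where(y == 1, np.inf, -mean)
    z = mean + truncnorm.rvs(lo, hi, size=n, random_state=rng)
    # 2) draw beta from its conditional normal given the latent utilities
    bhat = XtX_inv @ X.T @ z
    beta = rng.multivariate_normal(bhat, XtX_inv)
    if it >= 500:
        draws.append(-beta[0] / beta[1])              # mean WTP for this draw
print("posterior mean WTP:", np.mean(draws))
```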

A Low Rate VQ Speech Coding Algorithm with Variable Transmission Frame Length (가변 전송 Frame 길이를 갖는 저 전송속도 VQ 음성부호화 알고리즘에 대한 연구)

  • 좌정우;이성로;이황수
    • The Journal of the Acoustical Society of Korea
    • /
    • v.12 no.1E
    • /
    • pp.32-38
    • /
    • 1993
  • This paper proposes a low-bit-rate speech coder and demonstrates its performance and flexibility through computer simulation. The proposed coding scheme varies the transmission frame length according to the stationarity of the input speech signal and encodes the representative feature vector of each transmission frame with vector quantization. In the proposed scheme, the feature vector sequence consists of PARCOR coefficients obtained sample by sample from the input speech signal using the prewindowed RLS lattice algorithm. The input speech signal is divided into subsegments, and a representative PARCOR coefficient set is obtained for each subsegment. The transmission frame is determined by merging subsegments according to their similarity, measured with the likelihood ratio distortion measure. Computer simulation results show that the proposed VTEL speech coding scheme can greatly reduce the overall bit rate while maintaining good speech quality.
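
As an illustration of merging subsegments with a likelihood ratio distortion measure, the sketch below estimates LPC coefficients per subsegment by the autocorrelation method (a stand-in for the prewindowed RLS lattice algorithm of the paper) and merges adjacent subsegments whose distortion falls below a threshold. The signal, model order, and threshold are made up.

```python
# Subsegment merging with the likelihood-ratio distortion
# d_LR = (a2' R1 a2) / (a1' R1 a1) - 1, with R1 the autocorrelation matrix
# of the reference subsegment and a1, a2 the LPC coefficient vectors.
import numpy as np

def autocorr(x, order):
    return np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(order + 1)])

def lpc(x, order=10):
    """Autocorrelation-method LPC: returns [1, -a1, ..., -ap] and the (p+1)x(p+1) R matrix."""
    r = autocorr(x, order)
    R_full = np.array([[r[abs(i - j)] for j in range(order + 1)] for i in range(order + 1)])
    a = np.linalg.solve(R_full[1:, 1:], r[1:])
    return np.concatenate(([1.0], -a)), R_full

def lr_distortion(seg_ref, seg_test, order=10):
    a_ref, R_ref = lpc(seg_ref, order)
    a_test, _ = lpc(seg_test, order)
    return float(a_test @ R_ref @ a_test) / float(a_ref @ R_ref @ a_ref) - 1.0

rng = np.random.default_rng(0)
speech = rng.standard_normal(1200)                           # stand-in signal
speech[600:] = np.convolve(speech[600:], np.ones(5) / 5.0, mode="same")  # spectral change halfway
subsegs = np.split(speech, 6)                                # six 200-sample subsegments
THRESHOLD = 0.3                                              # illustrative merge threshold
frames, current = [], subsegs[0]
for seg in subsegs[1:]:
    if lr_distortion(current, seg) < THRESHOLD:
        current = np.concatenate([current, seg])             # similar: extend the frame
    else:
        frames.append(current)                               # dissimilar: start a new frame
        current = seg
frames.append(current)
print("transmission frame lengths:", [len(f) for f in frames])
```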

Performance Improvement of Microphone Array Speech Recognition Using Features Weighted Mahalanobis Distance (가중특징 Mahalanobis거리를 이용한 마이크 어레이 음석인식의 성능향상)

  • Nguyen, Dinh Cuong;Chung, Hyun-Yeol
    • The Journal of the Acoustical Society of Korea
    • /
    • v.29 no.1E
    • /
    • pp.45-53
    • /
    • 2010
  • In this paper, we present the use of the Features Weighted Mahalanobis Distance (FWMD) to improve the performance of the Likelihood Maximizing Beamforming (Limabeam) algorithm in microphone array speech recognition. The proposed approach replaces the traditional distance measure in a Gaussian classifier with a Mahalanobis distance in which different features are weighted according to their distances after variance normalization. By using the Features Weighted Mahalanobis Distance in the Limabeam algorithm (FWMD-Limabeam), we obtained correct word recognition rates of 90.26% for calibrated Limabeam and 87.23% for unsupervised Limabeam, about 3% and 6% higher, respectively, than those produced by the original Limabeam. By alternatively implementing an HM-Net speech recognition strategy, we could save memory and reduce computational complexity.
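
A minimal sketch of a feature-weighted Mahalanobis distance in a diagonal Gaussian classifier, in the spirit of the FWMD idea above: each variance-normalized feature contributes to the distance with its own weight. The weighting rule and the toy class models are assumptions, not the paper's.

```python
# Weighted Mahalanobis distance in a diagonal Gaussian classifier.
import numpy as np

def weighted_mahalanobis(x, mean, var, weights):
    """Diagonal weighted Mahalanobis distance between x and a class Gaussian."""
    z = (x - mean) ** 2 / var              # variance-normalized squared deviations
    return float(np.sum(weights * z))

def classify(x, class_means, class_vars, weights):
    dists = [weighted_mahalanobis(x, m, v, weights)
             for m, v in zip(class_means, class_vars)]
    return int(np.argmin(dists))

# toy example: two classes, three features, the second feature down-weighted
means = [np.array([0.0, 0.0, 0.0]), np.array([2.0, 2.0, 2.0])]
vars_ = [np.ones(3), np.ones(3)]
w = np.array([1.0, 0.3, 1.0])
print(classify(np.array([1.8, 0.1, 1.7]), means, vars_, w))   # -> 1
```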

A Modified FCM for Nonlinear Blind Channel Equalization using RBF Networks

  • Han, Soo-Whan
    • Journal of information and communication convergence engineering
    • /
    • v.5 no.1
    • /
    • pp.35-41
    • /
    • 2007
  • In this paper, a modified Fuzzy C-Means (MFCM) algorithm is presented for nonlinear blind channel equalization. The proposed MFCM searches for the optimal channel output states of a nonlinear channel based on a Bayesian likelihood fitness function instead of a conventional Euclidean distance measure. In the search procedure, all possible desired channel states are constructed from the elements of the estimated channel output states. The desired state with the maximum Bayesian fitness is selected and placed at the center of a Radial Basis Function (RBF) equalizer to reconstruct the transmitted symbols. In the simulations, binary signals are generated at random with Gaussian noise. The performance of the proposed method is compared with that of a hybrid of a genetic algorithm (GA) and simulated annealing (SA), denoted GASA, and relatively high accuracy and fast searching speed are achieved.
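
A hedged sketch of a Bayesian-likelihood-style fitness for candidate channel output states: each received sample is scored by a Gaussian mixture centered on the candidate states, and the total log-likelihood serves as the fitness to be maximized. The noise variance, the states, and the function names are illustrative, not taken from the paper.

```python
# Gaussian-mixture log-likelihood as a fitness for candidate channel states.
import numpy as np

def bayesian_fitness(received, states, noise_var):
    """Log-likelihood of the received samples under a mixture placed on the states."""
    y = np.asarray(received)[:, None]              # shape (N, 1)
    s = np.asarray(states)[None, :]                # shape (1, M)
    kernels = np.exp(-(y - s) ** 2 / (2.0 * noise_var))
    return float(np.sum(np.log(kernels.sum(axis=1) + 1e-300)))

# toy example: true output states of a noisy channel, plus a perturbed candidate set
rng = np.random.default_rng(0)
true_states = np.array([-1.5, -0.5, 0.5, 1.5])
rx = rng.choice(true_states, size=500) + 0.1 * rng.standard_normal(500)
print(bayesian_fitness(rx, true_states, 0.01))
print(bayesian_fitness(rx, true_states + 0.4, 0.01))   # worse candidate, lower fitness
```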

Identification of FSK Radar Modulation (FSK 변조 레이더 신호 인식 기술)

  • Lim, Ha-Young;You, Kyung-Jin;Shin, Hyun-Chool
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.66 no.2
    • /
    • pp.425-430
    • /
    • 2017
  • This paper presents a novel method for the identification of FSK-modulated radar signals. Three features are introduced that measure the number of frequency tones, the regularity of the frequency shifting, and the diversity of the power spectrum of the detected radar signal. A two-step combined maximum likelihood classifier is used to identify the details of the detected FSK signal: the modulation order and the use of a Costas code. We attempted to classify FSK signals into binary FSK, ternary FSK, 8-ary FSK, and FSK with a Costas code of length 7. The simulation results indicate that the proposed method achieves an average identification accuracy of 99.93% at a signal-to-noise ratio of 0 dB.
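
As a generic stand-in for the two-step combined maximum likelihood classifier, the sketch below fits a diagonal Gaussian per modulation class on the three features (tone count, shift regularity, spectral diversity) and assigns a detected signal to the class with the largest log-likelihood. The class labels mirror the abstract, but the training feature values are invented.

```python
# Simple Gaussian maximum-likelihood classifier over three FSK features.
import numpy as np

def fit_class(features):
    f = np.asarray(features, dtype=float)
    return f.mean(axis=0), f.var(axis=0) + 1e-6     # mean and variance per feature

def log_likelihood(x, mean, var):
    return float(-0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var))

def classify(x, models, labels):
    scores = [log_likelihood(x, m, v) for m, v in models]
    return labels[int(np.argmax(scores))]

labels = ["2FSK", "3FSK", "8FSK", "Costas-7"]
training = {
    "2FSK":     [[2, 0.9, 0.2], [2, 0.8, 0.3]],
    "3FSK":     [[3, 0.9, 0.4], [3, 0.7, 0.5]],
    "8FSK":     [[8, 0.8, 0.6], [8, 0.9, 0.7]],
    "Costas-7": [[7, 0.2, 0.9], [7, 0.3, 0.8]],
}
models = [fit_class(training[l]) for l in labels]
print(classify(np.array([7, 0.25, 0.85]), models, labels))   # -> Costas-7
```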

Resampling-based Test of Hypothesis in L1-Regression

  • Kim, Bu-Yong
    • Communications for Statistical Applications and Methods
    • /
    • v.11 no.3
    • /
    • pp.643-655
    • /
    • 2004
  • The $L_1$-estimator in the linear regression model is widely recognized to have superior robustness in the presence of vertical outliers. While $L_1$-estimation procedures and algorithms have been developed quite well, less progress has been made with hypothesis testing in multiple $L_1$-regression. This article suggests computer-intensive resampling approaches, the jackknife and bootstrap methods, for estimating the variance of the $L_1$-estimator and the scale parameter that are required to compute the test statistics. Monte Carlo simulation studies are performed to measure the power of the tests in small samples. The simulation results indicate that the bootstrap estimation method is the most powerful when it is employed with the likelihood ratio test.
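
A minimal sketch of bootstrapping the variance of the $L_1$ (least absolute deviations) regression estimator: resample observation pairs, refit, and take the empirical variance of the coefficients. The $L_1$ fit uses iteratively reweighted least squares as a stand-in for the paper's algorithm, and the simulated data and number of resamples are illustrative.

```python
# Bootstrap variance of the L1 (least absolute deviations) regression estimator.
import numpy as np

def lad_fit(X, y, iters=50, eps=1e-6):
    """L1-regression coefficients via iteratively reweighted least squares."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(y - X @ beta), eps)
        beta = np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (y * w))
    return beta

rng = np.random.default_rng(2)
n = 80
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
y = X @ np.array([1.0, 2.0]) + rng.standard_t(df=3, size=n)   # heavy-tailed errors

B = 500
boot = np.empty((B, X.shape[1]))
for b in range(B):
    idx = rng.integers(0, n, size=n)          # resample observation pairs
    boot[b] = lad_fit(X[idx], y[idx])
print("bootstrap variance of slope estimate:", boot[:, 1].var(ddof=1))
```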