• Title/Summary/Keyword: Likelihood Ratio

Control charts for monitoring correlation coefficients in variance-covariance matrix

  • Chang, Duk-Joon;Heo, Sun-Yeong
    • Journal of the Korean Data and Information Science Society
    • /
    • v.22 no.4
    • /
    • pp.803-809
    • /
    • 2011
  • Properties of multivariate Shewhart and CUSUM charts for monitoring the variance-covariance matrix, especially the correlation coefficient components, are investigated. The performances of the proposed charts based on the Lawley-Hotelling control statistic $V_i$ and the likelihood ratio test (LRT) statistic $TV_i$ are evaluated in terms of average run length (ARL). For monitoring the correlation coefficient components of the dispersion matrix, we found that the CUSUM chart based on $TV_i$ gives relatively better performance and is preferable, whereas the charts based on $V_i$ perform poorly and are not recommended.
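ARL comparisons like the one above are typically obtained by simulation. As a generic illustration only (the paper's multivariate $V_i$ and $TV_i$ statistics are not reproduced here), a minimal sketch estimating the in-control and out-of-control ARL of a univariate two-sided CUSUM:

```python
import random

def cusum_run_length(k=0.5, h=4.0, shift=0.0):
    """Run a two-sided CUSUM on N(shift, 1) observations until it signals;
    return the run length (number of observations before the alarm)."""
    s_hi = s_lo = 0.0
    t = 0
    while True:
        t += 1
        x = random.gauss(shift, 1.0)
        s_hi = max(0.0, s_hi + x - k)  # upper one-sided CUSUM
        s_lo = max(0.0, s_lo - x - k)  # lower one-sided CUSUM
        if s_hi > h or s_lo > h:
            return t

def estimate_arl(n=2000, **kwargs):
    """Estimate the average run length (ARL) over n simulated runs."""
    return sum(cusum_run_length(**kwargs) for _ in range(n)) / n

random.seed(1)
arl0 = estimate_arl(shift=0.0)  # in-control ARL: alarms should be rare
arl1 = estimate_arl(shift=1.0)  # out-of-control ARL: alarms should come fast
```

A good chart keeps `arl0` large (few false alarms) while driving `arl1` small (fast detection); the $TV_i$-vs-$V_i$ comparison in the paper is made on the same principle.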

Fuzzy Test of Hypothesis by Uniformly Most Powerful Test (균일최강력검정에 의한 가설의 퍼지 검정)

  • Kang, Man-Ki
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.21 no.1
    • /
    • pp.25-28
    • /
    • 2011
  • In this paper, we study some properties of conditions for fuzzy data, the agreement index given by a ratio of areas, and the uniformly most powerful fuzzy test of a hypothesis. We also suggest a confidence bound for the uniformly most powerful fuzzy test. For illustration, we take the most powerful critical fuzzy region from the exponential distribution by the likelihood ratio and test a hypothesis on the ${\chi}^2$-distribution by the agreement index.
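The most powerful critical region mentioned above comes from the Neyman-Pearson likelihood ratio construction. A minimal classical (non-fuzzy) sketch for exponential data, with the critical value found by Monte Carlo rather than in closed form; the simple-vs-simple rate setup is an illustrative assumption, not the paper's fuzzy procedure:

```python
import random

def mp_exponential_test(x, lam0, alpha=0.05, n_sim=20000, seed=0):
    """Neyman-Pearson most powerful test of H0: rate = lam0 against a larger
    rate. The likelihood ratio is monotone decreasing in sum(x), so the MP
    critical region is {sum(x) <= c}; here c is the empirical alpha-quantile
    of sum(x) under H0, found by Monte Carlo."""
    rng = random.Random(seed)
    n = len(x)
    null_sums = sorted(sum(rng.expovariate(lam0) for _ in range(n))
                       for _ in range(n_sim))
    c = null_sums[int(alpha * n_sim)]  # empirical alpha-quantile under H0
    return sum(x) <= c, c
```

With `lam0 = 1` and ten observations, the critical value sits near the 5% quantile of a Gamma(10, 1) sum, so very short waiting times reject H0.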

Fault Detection and Isolation of Integrated SDINS/GPS System Using the Generalized Likelihood Ratio (일반공산비 기법을 이용한 SDINS/GPS 통합시스템의 고장 검출 및 격리)

  • Shin, Jeong-Hoon;Lim, You-Chol;Lyou, Joon
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.3 no.2
    • /
    • pp.140-148
    • /
    • 2000
  • This paper presents a fault detection and isolation (FDI) method based on the Generalized Likelihood Ratio (GLR) test for the tightly coupled SDINS/GPS system. The GLR test is known to be capable of detecting an assumed change while estimating its occurrence time and magnitude, and of isolating the changed part. Once a fault is detected, even if we do not know whether it occurred in the INS or the GPS, a multi-hypothesized GLR scheme performs the fault isolation between INS and GPS and finds which satellite malfunctions. In the INS faulty case, however, the scheme turned out to fail to isolate the fault between the accelerometer and the gyroscope due to coupling effects and the poor observability of the system. Hence, to isolate an INS fault, the attitude of the vehicle needs to be changed so as to enhance the degree of observability.

  • PDF
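The GLR idea of jointly estimating a change's onset time and magnitude can be sketched in a simplified scalar setting: a mean jump of unknown size at an unknown time in unit-variance Gaussian residuals (the paper's SDINS/GPS sensor models are not reproduced here):

```python
import random

def glr_mean_jump(r, sigma=1.0):
    """Generalized likelihood ratio statistic for a mean jump of unknown size
    at an unknown time in a Gaussian residual sequence r (variance sigma^2
    under H0). Returns (max GLR value, estimated onset, estimated jump size)."""
    t = len(r)
    best = (float("-inf"), None, None)
    for k in range(t):                           # hypothesized change onset
        n = t - k
        s = sum(r[k:])                           # residual sum after the change
        nu_hat = s / n                           # ML estimate of the jump size
        glr = s * s / (2.0 * sigma * sigma * n)  # log-LR maximized over nu
        if glr > best[0]:
            best = (glr, k, nu_hat)
    return best

random.seed(2)
clean = [random.gauss(0, 1) for _ in range(60)]
faulty = clean[:40] + [x + 3.0 for x in clean[40:]]  # +3 sigma jump at t = 40
g_clean, _, _ = glr_mean_jump(clean)
g_fault, k_hat, nu_hat = glr_mean_jump(faulty)
```

Comparing `g_fault` against a threshold triggers detection, while `k_hat` and `nu_hat` locate and size the fault; the multi-hypothesized scheme in the paper runs such tests under several fault hypotheses in parallel.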

General Log-Likelihood Ratio Expression and Its Implementation Algorithm for Gray-Coded QAM Signals

  • Kim, Ki-Seol;Hyun, Kwang-Min;Yu, Chang-Wahn;Park, Youn-Ok;Yoon, Dong-Weon;Park, Sang-Kyu
    • ETRI Journal
    • /
    • v.28 no.3
    • /
    • pp.291-300
    • /
    • 2006
  • A simple and general bit log-likelihood ratio (LLR) expression is provided for Gray-coded rectangular quadrature amplitude modulation (R-QAM) signals. The characteristics of Gray code mapping, such as symmetries and repeated formats of the bit assignment within a symbol among bit groups, are effectively applied to simplify the LLR expression. To reduce the complexity of the max-log-MAP algorithm for LLR calculation, we replace the mathematical max or min function of the conventional LLR expression with simple arithmetic functions. In addition, we propose an implementation algorithm for this expression. Because the proposed expression is very simple and is constructed from a few parameters reflecting the characteristics of the Gray code mapping, it can easily be implemented, providing an efficient symbol de-mapping structure for various wireless applications.

  • PDF
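The exact bit LLR and its max-log simplification can be illustrated on one Gray-coded 4-PAM component (the in-phase rail of 16-QAM); the level-to-bits mapping and noise variance below are illustrative assumptions, not the paper's exact parameterization:

```python
import math

# Gray-coded 4-PAM: amplitude levels and the bit pair (b0, b1) at each level.
LEVELS = {-3: (0, 0), -1: (0, 1), +1: (1, 1), +3: (1, 0)}

def llr_exact(y, bit, noise_var):
    """Exact bit LLR = log P(bit=0|y)/P(bit=1|y): ratio of summed Gaussian
    likelihoods over the constellation points carrying each bit value."""
    num = sum(math.exp(-(y - a) ** 2 / (2 * noise_var))
              for a, bits in LEVELS.items() if bits[bit] == 0)
    den = sum(math.exp(-(y - a) ** 2 / (2 * noise_var))
              for a, bits in LEVELS.items() if bits[bit] == 1)
    return math.log(num / den)

def llr_maxlog(y, bit, noise_var):
    """Max-log approximation: keep only the nearest point per bit hypothesis,
    turning the log-sum-exp into a difference of squared distances."""
    d0 = min((y - a) ** 2 for a, bits in LEVELS.items() if bits[bit] == 0)
    d1 = min((y - a) ** 2 for a, bits in LEVELS.items() if bits[bit] == 1)
    return (d1 - d0) / (2 * noise_var)
```

At moderate-to-high SNR the two agree closely, which is why replacing the exact expression with simple arithmetic on distances costs little accuracy.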

Testing of Poisson Incidence Rate Restriction

  • Singh, Karan;Shanmugam, Ramalingam
    • International Journal of Reliability and Applications
    • /
    • v.2 no.4
    • /
    • pp.263-268
    • /
    • 2001
  • Shanmugam (1991) generalized the Poisson distribution to capture a restriction on the incidence rate $\theta$ (i.e., $\theta \le \beta$ for an unknown upper limit $\beta$) and named it the incidence rate restricted Poisson (IRRP) distribution. Using Neyman's C($\alpha$) concept, Shanmugam then devised a hypothesis testing procedure for $\beta$ when $\theta$ remains an unknown nuisance parameter. Shanmugam's C($\alpha$)-based results involve inverse moments, which are not easy tools. This article presents an alternative testing procedure based on the likelihood ratio concept. It turns out that the likelihood ratio test statistic offers more power than the C($\alpha$) test statistic. Numerical examples are included.

  • PDF
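A likelihood ratio test for a Poisson rate can be sketched via Wilks' theorem. This tests a simple null on $\theta$ itself, not Shanmugam's IRRP restriction on $\beta$, and is offered only as a generic illustration of the LRT machinery:

```python
import math

def poisson_lrt(x, theta0, crit=3.841):
    """Likelihood ratio test of H0: theta = theta0 for iid Poisson counts x.
    -2 log(LR) = 2n[theta0 - theta_hat + theta_hat * log(theta_hat/theta0)],
    compared with the chi-square(1) 5% critical value 3.841 (Wilks)."""
    n = len(x)
    theta_hat = sum(x) / n  # unrestricted MLE of the rate
    if theta_hat == 0:
        stat = 2 * n * theta0
    else:
        stat = 2 * n * (theta0 - theta_hat
                        + theta_hat * math.log(theta_hat / theta0))
    return stat, stat > crit
```

When the sample mean equals `theta0` the statistic is zero; it grows with the discrepancy, which is the source of the power advantage LRTs often enjoy.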

Analysis of Forest Fire Vulnerable Areas Using GIS Spatial Analysis Techniques (GIS 공간분석기술을 이용한 산불취약지역 분석)

  • 한종규;연영광;지광훈
    • Proceedings of the Korean Association of Geographic Information Studies Conference
    • /
    • 2002.03b
    • /
    • pp.49-59
    • /
    • 2002
  • In this study, a forest fire vulnerability analysis model was developed for Samcheok-si, Gangwon-do; vulnerable areas were mapped based on the model, and a computer program was developed for the analysis. The spatial data used were 1:25,000-scale digital topographic maps and digital forest-type maps built under the NGIS project, together with past forest fire ignition location data. Models were formulated from the spatial distribution characteristics (topography, forest type, accessibility) of the ignition locations, and the spatial analysis used the conditional probability and likelihood ratio methods, which are simple and easy for non-specialists to understand. Cross validation was performed for each model: the past ignition location data were split by occurrence time into two groups, one used for prediction and the other for validation. The predictive performance of each model was judged by comparing prediction rate curves. For Samcheok-si, the likelihood ratio model showed better prediction performance than the conditional probability model. If the detailed vulnerability maps produced with this technique are used to complement the nationwide forest fire danger index currently forecast by the Korea Forest Service, the efficiency of forest fire prevention work in cities, counties, and townships is expected to increase further through more effective deployment of fire-watch personnel and surveillance facilities.

  • PDF
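The likelihood ratio method used above weights each factor class by how over-represented past ignitions are in it relative to its share of the study area. A toy sketch under that interpretation (the cell IDs and slope classes are made up for illustration):

```python
def likelihood_ratios(class_of_cell, event_cells):
    """Likelihood ratio of each factor class: the proportion of event
    (ignition) cells falling in the class divided by the proportion of all
    cells in the class. LR > 1 marks a class more fire-prone than average."""
    total = len(class_of_cell)
    events = len(event_cells)
    area, hits = {}, {}
    for cell, cls in class_of_cell.items():
        area[cls] = area.get(cls, 0) + 1
    for cell in event_cells:
        cls = class_of_cell[cell]
        hits[cls] = hits.get(cls, 0) + 1
    return {cls: (hits.get(cls, 0) / events) / (area[cls] / total)
            for cls in area}

# Toy map: 10 cells in two slope classes, 4 past ignition cells.
cells = {i: ("steep" if i < 4 else "gentle") for i in range(10)}
ignitions = [0, 1, 2, 5]
lr = likelihood_ratios(cells, ignitions)
```

Summing the LR values of a cell's classes over all factor maps yields its vulnerability score, which is how such scores are commonly composited.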

Mutual Information and Redundancy for Categorical Data

  • Hong, Chong-Sun;Kim, Beom-Jun
    • Communications for Statistical Applications and Methods
    • /
    • v.13 no.2
    • /
    • pp.297-307
    • /
    • 2006
  • Most methods for describing the relationship among random variables require specific probability distributions and some assumptions about the random variables. The mutual information, based on entropy, measures the dependency among random variables without any specific assumptions. The redundancy, an analogous version of the mutual information, has also been proposed. In this paper, the redundancy and mutual information are extended to multi-dimensional categorical data. It is found that the redundancy for categorical data can be expressed as a function of the generalized likelihood ratio statistic under several kinds of independence log-linear models, so the redundancy can also be used to analyze contingency tables. Whereas the generalized likelihood ratio statistic for testing the goodness-of-fit of log-linear models is sensitive to the sample size, the redundancy for categorical data does not depend on the sample size but only on the cell probabilities themselves.
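The link between mutual information and the generalized likelihood ratio statistic rests on the identity $G^2 = 2N \cdot MI$ (with MI in nats) under the two-way independence model; a small sketch for a contingency table of counts:

```python
import math

def mutual_information(table):
    """Mutual information (in nats) of a two-way contingency table of counts.
    Under the independence log-linear model, the likelihood ratio statistic
    satisfies G^2 = 2 * N * MI, so MI is G^2 rescaled by the sample size."""
    n = sum(sum(row) for row in table)
    row_p = [sum(row) / n for row in table]
    col_p = [sum(table[i][j] for i in range(len(table))) / n
             for j in range(len(table[0]))]
    mi = 0.0
    for i, row in enumerate(table):
        for j, cnt in enumerate(row):
            if cnt:
                p = cnt / n
                mi += p * math.log(p / (row_p[i] * col_p[j]))
    return mi

table = [[30, 10], [10, 30]]
mi = mutual_information(table)
g2 = 2 * sum(sum(r) for r in table) * mi  # the G^2 statistic for this table
```

Doubling every count doubles $G^2$ but leaves `mi` unchanged, which is exactly the sample-size insensitivity the abstract highlights.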

Probabilistic Landslide Susceptibility Analysis and Verification using GIS and Remote Sensing Data at Penang, Malaysia

  • Lee, S.;Choi, J.;Talib, En. Jasmi Ab
    • Proceedings of the KSRS Conference
    • /
    • 2003.11a
    • /
    • pp.129-131
    • /
    • 2003
  • The aim of this study is to evaluate the hazard of landslides at Penang, Malaysia, using a Geographic Information System (GIS) and remote sensing. Landslide locations were identified in the study area from interpretation of aerial photographs and field surveys. Topographic and geologic data and satellite images were collected, processed, and constructed into a spatial database using GIS and image processing. The factors used that influence landslide occurrence are topographic slope, topographic aspect, topographic curvature, and distance from drainage from the topographic database; geology and distance from lineament from the geologic database; land use from the TM satellite image; and vegetation index value from the SPOT satellite image. Landslide hazard areas were analyzed and mapped from the landslide-occurrence factors by the probability (likelihood ratio) method. The results of the analysis were verified using the landslide location data. The validation showed satisfactory agreement between the hazard map and the existing data on landslide locations.

  • PDF
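Verification of such susceptibility maps is commonly summarized by a prediction rate curve: cells are ranked by hazard score and one tracks how quickly the known landslide cells are captured. A toy sketch (the cell IDs, scores, and validation cells are made up):

```python
def prediction_rate_curve(score_of_cell, validation_cells):
    """Prediction rate curve for a susceptibility map: rank cells by score
    (descending) and record, for each cumulative area fraction, the fraction
    of validation landslide cells already covered. Returns a list of
    (area_fraction, captured_fraction) points."""
    ranked = sorted(score_of_cell, key=score_of_cell.get, reverse=True)
    hits = set(validation_cells)
    curve, captured = [], 0
    for i, cell in enumerate(ranked, 1):
        if cell in hits:
            captured += 1
        curve.append((i / len(ranked), captured / len(hits)))
    return curve

# Toy map: a good score ranking captures the validation slides early.
scores = dict(zip("abcdefghij", [9, 8, 7, 6, 5, 4, 3, 2, 1, 0]))
curve = prediction_rate_curve(scores, ["a", "b", "d"])
```

A curve that rises steeply above the diagonal indicates a map whose highest-hazard zones really do contain the observed landslides.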

Genetic classification of various familial relationships using the stacking ensemble machine learning approaches

  • Su Jin Jeong;Hyo-Jung Lee;Soong Deok Lee;Ji Eun Park;Jae Won Lee
    • Communications for Statistical Applications and Methods
    • /
    • v.31 no.3
    • /
    • pp.279-289
    • /
    • 2024
  • Familial searching is a useful technique in a forensic investigation. Using genetic information, it is possible to identify individuals, determine familial relationships, and obtain racial/ethnic information. The total number of shared alleles (TNSA) and likelihood ratio (LR) methods have traditionally been used, and novel data-mining classification methods have recently been applied here as well. However, it is difficult to apply these methods to identify familial relationships above the third degree (e.g., uncle-nephew and first cousins). Therefore, we propose to apply a stacking ensemble machine learning algorithm to improve the accuracy of familial relationship identification. Using real data analysis, we obtain superior relationship identification results when applying meta-classifiers with a stacking algorithm rather than applying traditional TNSA or LR methods and data mining techniques.
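A stacking ensemble of the kind proposed can be sketched generically: out-of-fold predictions of the base classifiers become meta-features on which a meta-classifier is trained. The base and meta learners below are deliberately trivial stand-ins (a one-feature nearest-mean rule and a majority vote), not the paper's forensic models:

```python
def stack_fit_predict(base_fit_predicts, meta_fit_predict, X, y, X_new, k=5):
    """Minimal stacking: build out-of-fold base predictions as meta-features,
    train the meta-learner on them, then predict labels for X_new.
    Each learner is a function (X_train, y_train, X_test) -> predictions."""
    n = len(X)
    folds = [list(range(i, n, k)) for i in range(k)]
    meta_X = [[None] * len(base_fit_predicts) for _ in range(n)]
    for fold in folds:
        train = [i for i in range(n) if i not in fold]
        Xtr, ytr = [X[i] for i in train], [y[i] for i in train]
        Xte = [X[i] for i in fold]
        for b, fit_predict in enumerate(base_fit_predicts):
            for idx, pred in zip(fold, fit_predict(Xtr, ytr, Xte)):
                meta_X[idx][b] = pred     # out-of-fold meta-feature
    # Refit the base learners on all data to form meta-features for X_new.
    meta_new = [list(row)
                for row in zip(*(fp(X, y, X_new) for fp in base_fit_predicts))]
    return meta_fit_predict(meta_X, y, meta_new)

def nearest_mean(feature):
    """Base learner factory: assign the class whose training mean of one
    feature is closest to the test point's value of that feature."""
    def fit_predict(Xtr, ytr, Xte):
        means = {c: sum(x[feature] for x, yy in zip(Xtr, ytr) if yy == c)
                    / sum(1 for yy in ytr if yy == c)
                 for c in set(ytr)}
        return [min(means, key=lambda c, v=x[feature]: abs(v - means[c]))
                for x in Xte]
    return fit_predict

def majority(meta_Xtr, ytr, meta_Xte):
    """Meta-learner: majority vote over the base predictions (ignores meta_Xtr)."""
    return [max(set(row), key=row.count) for row in meta_Xte]

X = [(i / 10, i / 10 + 0.05) for i in range(10)]
y = [0] * 5 + [1] * 5
preds = stack_fit_predict([nearest_mean(0), nearest_mean(1)], majority,
                          X, y, [(0.05, 0.10), (0.85, 0.90)])
```

In practice the base learners would be the TNSA/LR scores and data-mining classifiers, and the meta-learner a trained model rather than a vote, but the out-of-fold plumbing is the same.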

An importance sampling for a function of a multivariate random variable

  • Jae-Yeol Park;Hee-Geon Kang;Sunggon Kim
    • Communications for Statistical Applications and Methods
    • /
    • v.31 no.1
    • /
    • pp.65-85
    • /
    • 2024
  • The tail probability of a function of a multivariate random variable is not easy to estimate by crude Monte Carlo simulation. When the event that the function value exceeds a threshold is rare, accurate estimation of the corresponding probability requires a huge number of samples. When the explicit form of the cumulative distribution function of each component of the variable is known, the inverse transform likelihood ratio method is a directly applicable scheme for estimating the tail probability efficiently. The method is a type of importance sampling, and its efficiency depends on the selection of the importance sampling distribution. When the cumulative distribution of the multivariate random variable is represented by a copula and its marginal distributions, we develop an iterative algorithm to find the optimal importance sampling distribution and show the convergence of the algorithm. The performance of the proposed scheme is compared numerically with crude Monte Carlo simulation.
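The likelihood ratio reweighting at the heart of such schemes can be illustrated with a simpler importance sampler: estimating a rare tail probability of a sum of iid Exp(1) components by drawing from a heavier-tailed exponential proposal and weighting each hit by the density ratio (the proposal-rate choice below is a heuristic assumption, not the paper's optimized distribution):

```python
import math
import random

def tail_prob_is(threshold, n_samples=50000, seed=0):
    """Importance sampling estimate of P(X1 + X2 > threshold) for iid Exp(1)
    components: sample from an Exp(r) proposal with a smaller rate r (heavier
    tail) and reweight each draw by the likelihood ratio f(x)/g(x)."""
    rng = random.Random(seed)
    r = 2.0 / threshold  # heuristic: proposal mean of the sum ~= threshold
    total = 0.0
    for _ in range(n_samples):
        x1 = rng.expovariate(r)
        x2 = rng.expovariate(r)
        if x1 + x2 > threshold:
            # per-component likelihood ratio: e^{-x} / (r e^{-r x})
            w = (math.exp(-x1) / (r * math.exp(-r * x1))) \
                * (math.exp(-x2) / (r * math.exp(-r * x2)))
            total += w
    return total / n_samples
```

For `threshold = 15` the true probability is $e^{-15}(1 + 15) \approx 4.9 \times 10^{-6}$; crude Monte Carlo with 50,000 samples would typically see no hits at all, while the reweighted estimator lands within a few percent, which is the efficiency gain the abstract describes.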