• Title/Summary/Keyword: 로그 우도 비 (log-likelihood ratio)


Likelihood based inference for the ratio of parameters in two Maxwell distributions (두 개의 맥스웰분포의 모수비에 대한 우도함수 추론)

  • Kang, Sang-Gil; Lee, Jeong-Hee; Lee, Woo-Dong
    • Journal of the Korean Data and Information Science Society, v.23 no.1, pp.89-98, 2012
  • In this paper, the parameter of interest is the ratio of the parameters of two independent Maxwell distributions. Because the exact distribution for testing the ratio is hard to obtain, we propose test statistics, based on the likelihood function, that converge to the standard normal distribution: the signed log-likelihood ratio statistic and the modified signed log-likelihood ratio statistic. Through simulation, we show that the modified signed log-likelihood ratio statistic converges to the standard normal distribution faster than the signed log-likelihood ratio statistic. We compare the two statistics in terms of type I error and power, and give an example using real data.
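The signed log-likelihood ratio statistic for the ratio of two Maxwell scale parameters can be sketched as follows (a minimal illustration under the common parametrization f(x; θ) ∝ x² exp(−x²/(2θ²))/θ³; the function names and sampling setup are assumptions, not the authors' code):

```python
import numpy as np

def maxwell_loglik(x, theta):
    # Maxwell log-likelihood under f(x) ∝ x^2 exp(-x^2/(2 theta^2))/theta^3,
    # with additive constants dropped
    return np.sum(2*np.log(x) - x**2/(2*theta**2) - 3*np.log(theta))

def profile_loglik(x, y, psi):
    # profile out theta2 under the constraint theta1 = psi * theta2
    t2 = np.sqrt((np.sum(x**2)/psi**2 + np.sum(y**2)) / (3*(len(x) + len(y))))
    return maxwell_loglik(x, psi*t2) + maxwell_loglik(y, t2)

def signed_llr(x, y, psi0):
    # r(psi0) = sign(psi_hat - psi0) * sqrt(2*(l(psi_hat) - l(psi0))),
    # approximately standard normal under H0: psi = psi0
    t1 = np.sqrt(np.sum(x**2)/(3*len(x)))   # per-sample MLEs
    t2 = np.sqrt(np.sum(y**2)/(3*len(y)))
    psi_hat = t1/t2
    l_hat = maxwell_loglik(x, t1) + maxwell_loglik(y, t2)
    r = np.sign(psi_hat - psi0)*np.sqrt(max(0.0, 2*(l_hat - profile_loglik(x, y, psi0))))
    return psi_hat, r
```

Maxwell variates are drawn in the test below as the Euclidean norm of three independent N(0, θ²) components; profiling at ψ = ψ̂ recovers the unconstrained maximum, so r(ψ̂) = 0.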

Screening and Clustering for Time-course Yeast Microarray Gene Expression Data using Gaussian Process Regression (효모 마이크로어레이 유전자 발현데이터에 대한 가우시안 과정 회귀를 이용한 유전자 선별 및 군집화)

  • Kim, Jaehee; Kim, Taehoun
    • The Korean Journal of Applied Statistics, v.26 no.3, pp.389-399, 2013
  • This article introduces Gaussian process regression and shows its application to time-course microarray gene expression data. Gene screening for yeast cell-cycle microarray expression data is accomplished with a ratio of log marginal likelihoods, using Gaussian process regression with a squared exponential covariance kernel. A Gaussian process regression fit is obtained for each gene, and the fits for the nine top-ranking genes are shown. With the screened data, Gaussian model-based clustering is performed and silhouette values are calculated for cluster validity.
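The screening criterion, a ratio of log marginal likelihoods, can be sketched as follows (a hypothetical simplification: the reference model here is noise-only, and the kernel hyperparameters are illustrative defaults, not values from the paper):

```python
import numpy as np

def se_kernel(t, ell=2.0, sf=1.0):
    # squared exponential (RBF) covariance kernel
    d = t[:, None] - t[None, :]
    return sf**2 * np.exp(-0.5*(d/ell)**2)

def log_marglik(y, K, sn=0.5):
    # log N(y | 0, K + sn^2 I) via a Cholesky factorization
    n = len(y)
    L = np.linalg.cholesky(K + sn**2*np.eye(n))
    a = np.linalg.solve(L, y)
    return -0.5*a @ a - np.sum(np.log(np.diag(L))) - 0.5*n*np.log(2*np.pi)

def screen_score(t, y):
    # log marginal likelihood ratio: SE-kernel GP vs. a noise-only model;
    # smooth, trending expression profiles score higher than flat noise
    return log_marglik(y, se_kernel(t)) - log_marglik(y, np.zeros((len(t), len(t))))
```

Genes would then be ranked by `screen_score`, keeping the highest-scoring profiles for clustering.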

Heart Sound-Based Cardiac Disorder Classifiers Using an SVM to Combine HMM and Murmur Scores (SVM을 이용하여 HMM과 심잡음 점수를 결합한 심음 기반 심장질환 분류기)

  • Kwak, Chul; Kwon, Oh-Wook
    • The Journal of the Acoustical Society of Korea, v.30 no.3, pp.149-157, 2011
  • In this paper, we propose a new cardiac disorder classification method using a support vector machine (SVM) to combine hidden Markov model (HMM) and murmur existence information. Using cepstral features and the HMM Viterbi algorithm, we segment input heart sound signals into HMM states for each cardiac disorder model and compute the log-likelihood (score) for every state in the model. To exploit the temporal position characteristics of murmur signals, we divide the input signals into two subbands, compute the murmur probability of every subband of each frame, and obtain the murmur score for each state by using the state segmentation information obtained from the Viterbi algorithm. With an input vector containing the HMM state scores and the murmur scores for all cardiac disorder models, the SVM finally decides the cardiac disorder category. In classification experiments, the proposed method shows a relative improvement of 20.4% over the HMM-based classifier with conventional cepstral features.
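The HMM scoring stage can be sketched as follows (a generic Viterbi decoder with per-state average emission scores; the murmur scoring and SVM stages are omitted, and all names are illustrative):

```python
import numpy as np

def viterbi(log_emis, log_trans, log_init):
    """Viterbi decoding: returns the best state path and its log score.
    log_emis: (T, S) per-frame emission log-likelihoods."""
    T, S = log_emis.shape
    delta = log_init + log_emis[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = delta[:, None] + log_trans            # (from_state, to_state)
        back[t] = np.argmax(cand, axis=0)
        delta = cand[back[t], np.arange(S)] + log_emis[t]
    path = [int(np.argmax(delta))]
    for t in range(T - 1, 0, -1):                    # backtrack
        path.append(back[t, path[-1]])
    return path[::-1], float(np.max(delta))

def state_scores(log_emis, path, n_states):
    # average emission log-likelihood per HMM state along the decoded path
    scores = np.zeros(n_states)
    for s in range(n_states):
        frames = [t for t, st in enumerate(path) if st == s]
        if frames:
            scores[s] = np.mean([log_emis[t, s] for t in frames])
    return scores
```

The classifier's input vector would then concatenate such state scores with the murmur scores across all disorder models before the SVM decision.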

An Efficient Logging Scheme based on Dynamic Block Allocation for Flash Memory-based DBMS (플래시 메모리 기반의 DBMS를 위한 동적 블록 할당에 기반한 효율적인 로깅 방법)

  • Ha, Ji-Hoon; Lee, Ki-Yong; Kim, Myoung-Ho
    • Journal of KIISE:Databases, v.36 no.5, pp.374-385, 2009
  • Flash memory has become increasingly popular as data storage for various devices because of its versatile features such as non-volatility, light weight, low power consumption, and shock resistance. Flash memory, however, has some distinct characteristics that make today's disk-based database technology unsuitable, such as the lack of in-place updates and the asymmetric speed of read and write operations. As a result, most traditional disk-based database systems may not provide the best attainable performance on flash memory. To maximize database performance on flash memory, some approaches have been proposed in which only the changes made to the database, i.e., logs, are written to another empty place that has been erased in advance. In this paper, we propose an efficient log management scheme for flash-based database systems. Unlike previous approaches, the proposed approach stores logs in specially allocated blocks, called log blocks. By evenly distributing logs across log blocks, the proposed approach can significantly reduce the number of write and erase operations. Our performance evaluation shows that the proposed approach improves overall system performance by reducing the number of write and erase operations compared to previous schemes.

Threshold estimation for the composite lognormal-GPD models (로그-정규분포와 파레토 합성 분포의 임계점 추정)

  • Kim, Bobae; Noh, Jisuk; Baek, Changryong
    • The Korean Journal of Applied Statistics, v.29 no.5, pp.807-822, 2016
  • The composite lognormal-GPD (LN-GPD) model combines the merits of log-normality for the body of the distribution and the generalized Pareto distribution (GPD) for the heavy tail of the observations. From an estimation perspective, however, the LN-GPD model performs poorly due to numerical instability. Therefore, a two-stage procedure, which estimates the threshold first and the remaining parameters afterward, is a natural method to consider. This paper considers five nonparametric threshold estimation methods widely used in extreme value theory and compares their performance in LN-GPD parameter estimation. A simulation study reveals that simultaneous maximum likelihood estimation performs well in threshold estimation but very poorly in tail index estimation. The nonparametric methods, by contrast, perform well in tail index estimation but introduce bias in threshold estimation. The method is illustrated with service-time data from an Israeli bank call center and shows that the LN-GPD model fits better than the LN or GPD model alone.
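One standard nonparametric tail-index tool of the kind compared in the paper is the Hill estimator (a textbook form based on the k largest order statistics; the paper's exact estimators may differ):

```python
import numpy as np

def hill_tail_index(x, k):
    """Hill estimator of the tail index alpha using the k largest observations:
    1/alpha_hat = mean of log(X_(n-i+1) / X_(n-k)) for i = 1..k."""
    xs = np.sort(x)[::-1]                     # descending order statistics
    logs = np.log(xs[:k]) - np.log(xs[k])     # xs[k] is the (k+1)-th largest
    return 1.0 / np.mean(logs)
```

For exact Pareto tails the estimator is consistent; the choice of k trades bias against variance, which is exactly the threshold-selection issue the paper studies.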

Design and Implementation of Web Attack Detection System Based on Integrated Web Audit Data (통합 이벤트 로그 기반 웹 공격 탐지 시스템 설계 및 구현)

  • Lee, Hyung-Woo
    • Journal of Internet Computing and Services, v.11 no.6, pp.73-86, 2010
  • As the number of Web users rapidly increases, web attack techniques are also becoming more sophisticated. Therefore, we need not only to detect web attacks based on log analysis but also to extract web attack events from audit information such as web firewall, web IDS, and system logs in order to detect abnormal web behaviors. In this paper, a web attack detection system was designed and implemented based on integrated web audit data, detecting diverse web attacks by generating integrated log information from W3C-format IIS logs and web firewall/IDS logs. The proposed system analyzes multiple web sessions and efficiently determines the correlation between the sessions and web attacks. It therefore has the advantage of efficiently extracting the latest web attack events through multiple-web-session and log correlation analysis.
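The session correlation idea can be sketched as follows (the field names and the "two or more sources" rule are illustrative assumptions, not the paper's schema):

```python
from collections import defaultdict

def correlate_sessions(web_log, ids_log, fw_log):
    """Toy sketch of multi-source log correlation: merge per-session events
    from an IIS-style web log, a web IDS, and a web firewall, and flag the
    sessions reported by more than one audit source."""
    sources = defaultdict(set)
    for name, log in (("web", web_log), ("ids", ids_log), ("fw", fw_log)):
        for rec in log:
            sources[rec["session"]].add(name)
    # a session seen by 2+ independent sources is a stronger attack signal
    return {s for s, srcs in sources.items() if len(srcs) >= 2}
```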

CUSUM charts for monitoring type I right-censored lognormal lifetime data (제1형 우측중도절단된 로그정규 수명 자료를 모니터링하는 누적합 관리도)

  • Choi, Minjae; Lee, Jaeheon
    • The Korean Journal of Applied Statistics, v.34 no.5, pp.735-744, 2021
  • Maintaining the lifetime of a product is one of the objectives of quality control. In real processes, most samples contain censored data because, in many situations, we cannot measure the lifetime of all samples due to time or cost constraints. In this paper, we propose two cumulative sum (CUSUM) control charting procedures to monitor the mean of type I right-censored lognormal lifetime data. One is based on the likelihood ratio, and the other is based on the binomial distribution. Through simulations, we evaluate the performance of the two proposed procedures by comparing the average run length (ARL). The overall performance of the likelihood ratio CUSUM chart is better; in particular, it performs well when the censoring rate is low and the shape parameter value is small. Conversely, the binomial CUSUM chart performs better when the censoring rate is high, the shape parameter value is large, and the change in the mean is small.
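The likelihood-ratio CUSUM increment for a type I right-censored lognormal observation can be sketched as follows (the standard censored-likelihood form; the paper's chart design details may differ):

```python
import math

def norm_cdf(z):
    return 0.5*(1 + math.erf(z/math.sqrt(2)))

def llr_increment(t, censored, mu0, mu1, sigma):
    """Log-likelihood-ratio increment for one observation of a lognormal
    lifetime, shifted mean mu1 vs. in-control mean mu0 (on the log scale)."""
    z0, z1 = (math.log(t) - mu0)/sigma, (math.log(t) - mu1)/sigma
    if censored:   # type I right-censoring: we only know T > t, use survival
        return math.log((1 - norm_cdf(z1)) / (1 - norm_cdf(z0)))
    # fully observed: lognormal density ratio reduces to (z0^2 - z1^2)/2
    return (z0**2 - z1**2)/2

def cusum(data, mu0, mu1, sigma, h):
    """One-sided CUSUM over (lifetime, censored) pairs; returns the index of
    the first signal (statistic exceeds h), or None if no signal occurs."""
    c = 0.0
    for i, (t, cens) in enumerate(data):
        c = max(0.0, c + llr_increment(t, cens, mu0, mu1, sigma))
        if c > h:
            return i
    return None
```

Detecting a downward mean shift (shorter lifetimes) corresponds to mu1 < mu0; the censored branch uses survival functions, which is what distinguishes this chart from the uncensored case.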

A Content Site Management Model by Analyzing User Behavior Patterns (사용자 행동 패턴 분석을 이용한 규칙 기반의 컨텐츠 사이트 관리 모델)

  • 김정민; 김영자; 옥수호; 문현정; 우용태
    • Proceedings of the Korean Information Science Society Conference, 2003.04a, pp.539-541, 2003
  • In this paper, we present a model for detecting users who exhibit unusual behavior, based on user behavior pattern analysis, in order to protect digital content on content sites. Syntactic rules and semantic rules are defined as detection rules for analyzing user behavior patterns. From the user log analysis, users whose degree of violation of the detection rules falls outside a given range are presumed to be abnormal users. The proposed model can also be applied in an eCRM system to establish promotion strategies for retaining customers by detecting, in advance, customer groups that are likely to churn.
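The rule-violation screening can be sketched as follows (the rules and the threshold are illustrative placeholders for the paper's Syntactic/Semantic detection rules):

```python
def flag_abnormal(user_logs, rules, max_violations=3):
    """Sketch of rule-based screening: count detection-rule violations per
    user over their logged actions and flag users whose count exceeds the
    allowed range."""
    flagged = []
    for user, actions in user_logs.items():
        violations = sum(1 for a in actions for rule in rules if rule(a))
        if violations > max_violations:
            flagged.append(user)
    return flagged
```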


Performance Analysis Based On Log-Likelihood Ratio in Orthogonal Code Hopping Multiplexing Systems Using Multiple Antennas (다중 안테나를 사용한 직교 부호 도약 다중화 시스템에서 로그 우도비 기반 성능 분석)

  • Jung, Bang-Chul; Sung, Kil-Young; Shin, Won-Yong
    • Journal of the Korea Institute of Information and Communication Engineering, v.15 no.12, pp.2534-2542, 2011
  • In this paper, we show that performance can be improved by using multiple antennas in the conventional orthogonal code hopping multiplexing (OCHM) scheme, which was proposed to accommodate, through downlink statistical multiplexing, a larger number of users with low channel activity than the number of orthogonal codewords used in code division multiple access (CDMA)-based communication systems. First, we introduce two different types of OCHM systems together with orthogonal codeword allocation strategies, and then derive the mathematical expressions for log-likelihood ratio (LLR) values under the two schemes. Next, when a turbo encoder based on the LLR computation is used, we evaluate the frame error rate (FER) performance of the aforementioned OCHM system. For comparison, we also show the performance of the existing symbol mapping method using multiple antennas, which was used in 3GPP standards. As a result, it is shown that our OCHM system with multiple antennas, based on the proposed orthogonal codeword allocation strategy, achieves a performance gain over the conventional system: the energy required to satisfy a target FER is significantly reduced.
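For a single AWGN branch with BPSK mapping, the textbook LLR takes the form below; the paper derives OCHM- and antenna-specific expressions, so this is only the baseline building block:

```python
def bpsk_llr(y, noise_var):
    """LLR of one BPSK symbol (+1 for bit 0, -1 for bit 1) over AWGN:
    log p(y|b=0) - log p(y|b=1) = 2*y/noise_var."""
    return 2.0*y/noise_var

def combine_llr(ys, noise_var):
    # with independent receive branches of equal noise, per-branch LLRs add
    return sum(bpsk_llr(y, noise_var) for y in ys)
```

These soft LLR values are what a turbo decoder consumes, which is why the paper's FER evaluation is driven by the derived LLR expressions.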

Improvement of Rating Curve Fitting Considering Variance Function with Pseudo-likelihood Estimation (의사우도추정법에 의한 분산함수를 고려한 수위-유량 관계 곡선 산정법 개선)

  • Lee, Woo-Seok; Kim, Sang-Ug; Chung, Eun-Sung; Lee, Kil-Seong
    • Proceedings of the Korea Water Resources Association Conference, 2008.05a, pp.1770-1773, 2008
  • Log-linear regression, commonly used to estimate the parameters of a rating curve (stage-discharge relationship), cannot account for the heteroscedasticity of the residuals. In this study, we therefore present a method that estimates a variance function, together with the regression coefficients, by pseudo-likelihood estimation (P-LE). In this process, a global optimization algorithm, simulated annealing (SA), is applied to minimize the regression residuals. In addition, since rating curves must be constructed separately for different stage ranges due to effects such as channel cross-section, a Heaviside function is included in the pseudo-likelihood function to judge the segmentation more objectively and to estimate the breakpoint accurately; the validity of the proposed method is tested statistically using discharge data with two segments. Through these statistical experiments, the advantages of the proposed methods over existing ones are identified, and their efficiency is verified by applying them to five stations in the Geum River basin.
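The P-LE idea of jointly estimating regression coefficients and a variance function can be sketched as follows (a simplified stand-in: a single-segment linear model, a grid search instead of simulated annealing, and no Heaviside breakpoint):

```python
import numpy as np

def ple_fit(x, y, n_iter=10):
    """Alternate (1) a grid search for the variance-function exponent theta
    maximizing the normal pseudo-likelihood, with Var(e_i) = sigma^2 * mu_i**(2*theta),
    and (2) weighted least squares for the regression coefficients."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # OLS starting values
    theta = 0.0
    for _ in range(n_iter):
        mu = X @ beta
        r2 = (y - mu)**2
        # profile the pseudo-likelihood over sigma^2 for each candidate theta
        ths = np.linspace(0.0, 2.0, 41)
        lls = []
        for th in ths:
            w = np.abs(mu)**(2*th)
            s2 = np.mean(r2 / w)
            lls.append(-0.5*np.sum(np.log(s2*w)) - 0.5*len(y))
        theta = ths[int(np.argmax(lls))]
        # WLS with weights 1/mu^(2*theta)
        sw = np.abs(mu)**theta
        beta = np.linalg.lstsq(X/sw[:, None], y/sw, rcond=None)[0]
    return beta, theta
```

The paper's method additionally optimizes a segmented (Heaviside) pseudo-likelihood with SA, so this sketch only illustrates the variance-function/coefficient alternation.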
