• Title/Summary/Keyword: Classical Statistical Method


Saddlepoint approximation for distribution function of sample mean of skew-normal distribution (왜정규 표본평균의 분포함수에 대한 안장점근사)

  • Na, Jong-Hwa; Yu, Hye-Kyung
    • Journal of the Korean Data and Information Science Society / v.24 no.6 / pp.1211-1219 / 2013
  • Recently, the skew-normal distribution, rather than the classical normal distribution, has seen increasing use in many statistical theories and applications. In this paper, we deal with the saddlepoint approximation for the distribution function of the sample mean of the skew-normal distribution. Compared to the normal approximation, the saddlepoint approximation is very accurate for small sample sizes as well as for large or moderate ones. The saddlepoint approximations related to the skew-normal distribution suggested in this paper can be used as an approximate alternative to the classical methods of Gupta and Chen (2001) and Chen et al. (2004), which require very complicated calculations. Through a simulation study, we verified the accuracy of the suggested approximation and applied it to Robert's (1966) twin data.
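
The abstract's method can be illustrated with the standard Lugannani-Rice saddlepoint formula applied to the skew-normal cumulant generating function K(t) = t²/2 + log(2Φ(δt)), δ = α/√(1+α²). This is a generic sketch of the technique, not the authors' code; the example parameters (n = 5, α = 2) are hypothetical.

```python
# Lugannani-Rice saddlepoint approximation to P(sample mean <= x) for n i.i.d.
# standard skew-normal SN(0, 1, alpha) observations. Illustrative sketch only.
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def sn_cgf(t, alpha):
    """CGF of SN(0, 1, alpha): K(t) = t^2/2 + log(2*Phi(delta*t))."""
    delta = alpha / np.sqrt(1.0 + alpha**2)
    return 0.5 * t**2 + np.log(2.0 * norm.cdf(delta * t))

def sn_cgf_d1(t, alpha):
    delta = alpha / np.sqrt(1.0 + alpha**2)
    zeta = norm.pdf(delta * t) / norm.cdf(delta * t)   # inverse Mills ratio
    return t + delta * zeta

def sn_cgf_d2(t, alpha):
    delta = alpha / np.sqrt(1.0 + alpha**2)
    zeta = norm.pdf(delta * t) / norm.cdf(delta * t)
    return 1.0 + delta**2 * (-delta * t * zeta - zeta**2)

def saddlepoint_cdf_mean(x, n, alpha):
    """Approximate P(X-bar <= x); x must differ from the mean (t_hat = 0 there)."""
    t_hat = brentq(lambda t: sn_cgf_d1(t, alpha) - x, -20.0, 20.0)
    w = np.sign(t_hat) * np.sqrt(2.0 * n * (t_hat * x - sn_cgf(t_hat, alpha)))
    u = t_hat * np.sqrt(n * sn_cgf_d2(t_hat, alpha))
    return norm.cdf(w) + norm.pdf(w) * (1.0 / w - 1.0 / u)

# e.g. saddlepoint_cdf_mean(1.0, n=5, alpha=2.0) approximates P(X-bar <= 1)
```

The accuracy claim in the abstract can be checked by comparing the approximation against a Monte Carlo estimate of the same tail probability.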

Camera calibration parameters estimation using perspective variation ratio of grid type line widths (격자형 선폭들의 투영변화비를 이용한 카메라 교정 파라메터 추정)

  • Jeong, Jun-Ik; Choi, Seong-Gu; Rho, Do-Hwan
    • Proceedings of the KIEE Conference / 2004.11c / pp.30-32 / 2004
  • In 3-D vision measurement, camera calibration is necessary to calculate parameters accurately. Camera calibration methods have developed broadly in two categories: the first establishes reference points in space, and the second uses a grid-type frame and a statistical method. However, the former makes it difficult to set up reference points, and the latter has low accuracy. In this paper we present an algorithm for camera calibration using the perspective ratio of a grid-type frame with different line widths. It can easily estimate camera calibration parameters such as lens distortion, focal length, scale factor, pose, orientation, and distance. The advantage of this algorithm is that it can estimate the distance to the object. The proposed camera calibration method also makes it possible to estimate distance in dynamic environments such as autonomous navigation. To validate the proposed method, we set up experiments with a frame on a rotator at distances of 1, 2, 3, and 4 m from the camera and rotated the frame from -60 to 60 degrees. Both computer simulation and real data have been used to test the proposed method, and very good results have been obtained. We investigated the distance error affected by the scale factor or different line widths and experimentally found an average scale factor that yields the least distance error for each image. The average scale factor tends to fluctuate with small variation and makes the distance error decrease. Compared with classical methods that use a stereo camera or two or three orthogonal planes, the proposed method is easy to use and flexible. It advances camera calibration one more step from static environments toward real-world use such as autonomous land vehicles.


Bayesian Clustering of Prostate Cancer Patients by Using a Latent Class Poisson Model (잠재그룹 포아송 모형을 이용한 전립선암 환자의 베이지안 그룹화)

  • Oh, Man-Suk
    • The Korean Journal of Applied Statistics / v.18 no.1 / pp.1-13 / 2005
  • The latent class model has recently been considered by many researchers and practitioners as a tool for identifying heterogeneous segments or groups in a population and for grouping objects into those segments. In this paper we consider data on prostate cancer patients from the Korean National Cancer Institute and propose a method for grouping prostate cancer patients by using a latent class Poisson model. A Bayesian approach equipped with a Markov chain Monte Carlo method is used to overcome the limitations of classical likelihood approaches. Advantages of the proposed Bayesian method are easy estimation of parameters with their standard errors, segmentation of objects into groups, and provision of uncertainty measures for the segmentation. In addition, we provide a method to determine an appropriate number of segments for the given data, so that the method automatically chooses the number of segments and partitions objects into heterogeneous segments.
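
The MCMC machinery the abstract describes can be sketched with a minimal Gibbs sampler for a two-component latent class Poisson mixture. This is an illustrative toy version, not the authors' model (which handles K components, covariates, and model choice); the priors, data, and seeds are hypothetical.

```python
# Gibbs sampler for y_i ~ pi*Pois(lam0) + (1-pi)*Pois(lam1), with conjugate
# Gamma(a, b) priors on the rates and a uniform Beta prior on pi.
import numpy as np

def gibbs_poisson_mixture(y, n_iter=1500, burn=500, a=1.0, b=0.1, seed=0):
    rng = np.random.default_rng(seed)
    lam = np.array([y.mean() * 0.5, y.mean() * 1.5])   # crude split to start
    pi = 0.5
    lam_draws, pi_draws = [], []
    for it in range(n_iter):
        # sample latent class labels z_i given (pi, lam)
        logp0 = np.log(pi) + y * np.log(lam[0]) - lam[0]
        logp1 = np.log(1 - pi) + y * np.log(lam[1]) - lam[1]
        p0 = 1.0 / (1.0 + np.exp(logp1 - logp0))
        z = (rng.random(y.size) >= p0).astype(int)     # z = 1 -> component 1
        n1 = z.sum(); n0 = y.size - n1
        # conjugate updates: Gamma for the rates, Beta for the mixing weight
        lam = np.array([rng.gamma(a + y[z == 0].sum(), 1.0 / (b + n0)),
                        rng.gamma(a + y[z == 1].sum(), 1.0 / (b + n1))])
        pi = rng.beta(1 + n0, 1 + n1)
        if it >= burn:
            if lam[0] > lam[1]:            # order components to avoid label switching
                lam = lam[::-1]; pi = 1 - pi
            lam_draws.append(lam.copy()); pi_draws.append(pi)
    return np.mean(lam_draws, axis=0), float(np.mean(pi_draws))

rng = np.random.default_rng(42)
y = np.concatenate([rng.poisson(2.0, 250), rng.poisson(10.0, 250)])
rates, pi_mean = gibbs_poisson_mixture(y)
```

Posterior means of the rates recover the two simulated segments, and the draws also yield the standard errors and segmentation uncertainty the abstract mentions.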

Numerical Analysis of Flow Interference at Discontinuity Junction of Fracture Network (단열교차점에서 유체간섭에 관한 수치적 고찰)

  • 박영진; 이강근; 이승구
    • Journal of the Korean Society of Groundwater Environment / v.4 no.3 / pp.111-115 / 1997
  • The discrete fracture model has become one of the alternatives to the classical continuum model for simulating the irregular aspects of fluid flow and solute transport in fractured rocks. It is based on the assumptions that the discharge in a single fracture is proportional to the cube of the aperture and that a fractured rock can be represented by a statistical assemblage of such single fractures. This study evaluates the effect of a fracture junction on the cubic law. A numerical solution of flow in the junction system was obtained by using the Boundary-Fitted Coordinate System (BFCS) method. Results with different intersection angles of the crossing fractures show that the geometry of the junction affects the discharge pattern under the same simulation conditions. Therefore, strict numerical and experimental examinations of this subject are required.
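
The cubic law the abstract builds on can be sketched with a lumped mass balance at a four-branch junction. This is illustrative only: the paper's BFCS solution resolves the junction geometry, which this simple balance deliberately ignores, and the fluid properties and apertures below are assumed values for water.

```python
# Cubic-law sketch: head at a four-branch fracture junction from mass balance.
import numpy as np

RHO_G_OVER_12MU = 1000.0 * 9.81 / (12.0 * 1.0e-3)   # water: rho*g / (12*mu)

def conductance(b, L):
    """Cubic law: per-unit-width hydraulic conductance of a fracture segment."""
    return RHO_G_OVER_12MU * np.asarray(b) ** 3 / np.asarray(L)

def junction_head(heads, apertures, lengths):
    """Head at the junction from mass balance: the segment discharges sum to zero."""
    c = conductance(apertures, lengths)
    return float(np.sum(c * np.asarray(heads)) / np.sum(c))

# four identical 1 m segments with boundary heads 10, 8, 6, 4 m
h_j = junction_head([10.0, 8.0, 6.0, 4.0], [1e-4] * 4, [1.0] * 4)
```

Doubling the aperture multiplies a segment's conductance by eight, which is the sensitivity the cubic law implies and the junction effects in the paper perturb.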


Type I Analysis by Projections (사영에 의한 제1종 분석)

  • Choi, Jae-Sung
    • The Korean Journal of Applied Statistics / v.24 no.2 / pp.373-381 / 2011
  • This paper discusses how to obtain the sums of squares due to treatment factors when Type I analysis is carried out by projections for data under a two-way ANOVA model. The suggested method does not need to calculate the residual sum of squares, so the calculation is easier and faster than classical ANOVA methods. It also discusses how the eigenvectors and eigenvalues of the projection matrices can be used in the calculation of sums of squares. An example is given to illustrate the calculation procedure by projections for unbalanced data.
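
The sequential decomposition behind Type I analysis can be sketched with projection matrices for a small unbalanced two-way layout. This follows the general projection idea rather than the paper's eigenvector-based computation, and the data below are simulated for illustration.

```python
# Type I sums of squares via differences of orthogonal projectors.
import numpy as np

def proj(X):
    """Orthogonal projector onto col(X); pinv copes with rank deficiency."""
    return X @ np.linalg.pinv(X)

rng = np.random.default_rng(1)
a = np.array([0, 0, 1, 1, 1, 2, 2, 2, 2])   # factor A, 3 levels (unbalanced)
b = np.array([0, 1, 0, 1, 1, 0, 1, 1, 0])   # factor B, 2 levels
y = 1.0 + 0.5 * a + 1.5 * b + rng.normal(0, 0.3, a.size)

one = np.ones((a.size, 1))
P0 = proj(one)                                           # grand mean only
P1 = proj(np.hstack([one, np.eye(3)[a]]))                # mean + A
P2 = proj(np.hstack([one, np.eye(3)[a], np.eye(2)[b]]))  # mean + A + B

SS_A = y @ (P1 - P0) @ y        # Type I SS for A (fitted first)
SS_B = y @ (P2 - P1) @ y        # Type I SS for B, adjusted for A
SSE  = y @ (np.eye(a.size) - P2) @ y
SST  = y @ (np.eye(a.size) - P0) @ y
```

Because the projector differences are orthogonal to one another, the sequential sums of squares add up to the total exactly, which is why no separate residual calculation is needed for each factor.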

A Divisive Clustering for Mixed Feature-Type Symbolic Data (혼합형태 심볼릭 데이터의 군집분석방법)

  • Kim, Jaejik
    • The Korean Journal of Applied Statistics / v.28 no.6 / pp.1147-1161 / 2015
  • Nowadays we consider and analyze not only classical data expressed as points in p-dimensional Euclidean space but also new types of data such as signals, functions, images, and shapes. Symbolic data can be considered one of these new types. Symbolic data can take various formats such as intervals, histograms, lists, tables, distributions, models, and the like. To date, symbolic data studies have mainly focused on individual formats of symbolic data. In this study, the framework is extended to datasets with both histogram-valued and multimodal-valued data; a divisive clustering method for such mixed feature-type symbolic data is introduced and applied to the analysis of industrial accident data.

Controlling Linkage Disequilibrium in Association Tests: Revisiting APOE Association in Alzheimer's Disease

  • Park, Lee-Young
    • Genomics & Informatics / v.5 no.2 / pp.61-67 / 2007
  • The allele frequencies of markers, as well as linkage disequilibrium (LD), can be changed in cases due to the LD between markers and the disease allele, producing spurious associations of markers. To identify the true association, classical statistical tests for dealing with confounders have been applied to decide whether the association of a variant comes from LD with the known disease allele. However, a more direct test that accounts for LD using estimated haplotype frequencies may be more efficient. The null hypothesis is that the different allele frequencies of a variant between cases and controls come solely from the increased disease allele frequency and the LD relationship with the disease allele. The haplotype frequencies of controls are estimated from the genotype data using the expectation-maximization (EM) algorithm. The estimated frequencies are applied to calculate the expected haplotype frequencies in cases corresponding to the increase or decrease of the causative or protective alleles. The suggested method was applied to previously published data, and several APOE variants showed association with Alzheimer's disease independent of the APOE ε4 variant, rs429358, regardless of LD, with significant simulated p-values. The results support the possibility that there may be more than one common disease variant in a locus.
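
The EM step the abstract refers to, estimating two-locus haplotype frequencies from unphased genotypes, can be sketched as follows. The only ambiguity is the double heterozygote, whose phase probabilities are updated in the E-step. The simulated frequencies below are hypothetical, not the APOE data.

```python
# EM estimation of two-locus haplotype frequencies from unphased genotype data.
import numpy as np
from collections import Counter
from itertools import product

HAPS = list(product((0, 1), repeat=2))   # haplotypes: (allele at locus 1, locus 2)

def em_haplotype_freqs(genotypes, n_iter=200):
    """genotypes: list of (g1, g2) minor-allele counts; returns freqs for HAPS."""
    cls = Counter(genotypes)
    p = np.full(4, 0.25)
    for _ in range(n_iter):
        counts = np.zeros(4)
        for (g1, g2), n in cls.items():
            # ordered haplotype pairs consistent with this genotype
            pairs = [(i, j) for i, hi in enumerate(HAPS) for j, hj in enumerate(HAPS)
                     if hi[0] + hj[0] == g1 and hi[1] + hj[1] == g2]
            w = np.array([p[i] * p[j] for i, j in pairs])
            w = w / w.sum()               # E-step: phase probabilities
            for (i, j), wk in zip(pairs, w):
                counts[i] += n * wk
                counts[j] += n * wk
        p = counts / counts.sum()         # M-step: relative haplotype counts
    return p

true_p = np.array([0.4, 0.1, 0.2, 0.3])
rng = np.random.default_rng(5)
h = rng.choice(4, p=true_p, size=(2000, 2))
genotypes = [(HAPS[i][0] + HAPS[j][0], HAPS[i][1] + HAPS[j][1]) for i, j in h]
p_hat = em_haplotype_freqs(genotypes)
```

With the control haplotype frequencies estimated this way, the expected case frequencies under the null can then be derived by shifting only the disease allele frequency, as the abstract describes.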

Evaluation of Demerit-CUSUM Control Chart Performance Using Fast Initial Response (FIR을 이용한 Demerit-CUSUM 관리도의 수행도 평가)

  • Kang, Hae-Woon; Kang, Chang-Wook; Baik, Jae-Won; Nam, Sung-Ho
    • Journal of Korean Society of Industrial and Systems Engineering / v.32 no.1 / pp.94-101 / 2009
  • Complex products may present more than one type of defect, and these defects are not always of equal severity. Defects are classified according to their seriousness and their effect on product quality and performance. Demerit systems are very effective for monitoring the different types of defects, so the classical demerit control chart is used to monitor counts of several different types of defects simultaneously in complex products. S. M. Na et al. (2003) proposed the Demerit-CUSUM to improve the performance of the demerit control chart, and Nembhard, D. A. et al. (2001) and G. Y. Cho et al. (2004) developed a demerit control chart using the EWMA technique and evaluated its performance. In this paper, we present an effective method for process control using the Demerit-CUSUM with fast initial response (FIR). Moreover, we evaluate the exact performance of the Demerit-CUSUM control chart with fast initial response, the Demerit-CUSUM, and the Demerit-EWMA under changing sample sizes or parameters.
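
The combination the abstract studies, a demerit score fed into a one-sided CUSUM with an FIR head start (S0 = h/2, in the spirit of Lucas and Crosier), can be sketched as below. The demerit weights and design constants k, h are illustrative assumptions, not the paper's values.

```python
# Demerit score + upper CUSUM with a fast initial response (FIR) head start.

WEIGHTS = (10, 5, 1)   # assumed demerit weights: critical, major, minor defects

def demerit(counts):
    """Weighted demerit score for one sample of (critical, major, minor) counts."""
    return sum(w * c for w, c in zip(WEIGHTS, counts))

def cusum_run_length(stats, k, h, fir=True):
    """Return the first sample index (1-based) at which the upper CUSUM signals."""
    s = h / 2.0 if fir else 0.0            # FIR head start vs. classical S0 = 0
    for t, d in enumerate(stats, start=1):
        s = max(0.0, s + d - k)
        if s > h:
            return t
    return None

# an out-of-control stream: each sample's demerit score exceeds k by 2
stream = [demerit((0, 1, 1))] * 10         # demerit = 6 per sample
rl_fir = cusum_run_length(stream, k=4.0, h=5.0, fir=True)
rl_std = cusum_run_length(stream, k=4.0, h=5.0, fir=False)
```

On this stream the FIR chart signals one sample earlier than the zero-start chart, which is exactly the initial-response advantage the paper evaluates.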

A New Fuzzy Key Generation Method Based on PHY-Layer Fingerprints in Mobile Cognitive Radio Networks

  • Gao, Ning; Jing, Xiaojun; Sun, Songlin; Mu, Junsheng; Lu, Xiang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.7 / pp.3414-3434 / 2016
  • Classical key generation is complicated to update, and key distribution generally requires fixed infrastructure. In order to eliminate these restrictions, researchers have focused much attention on physical-layer (PHY-layer) based key generation methods. In this paper, we present a PHY-layer fingerprints based fuzzy key generation scheme, which works to prevent primary user emulation (PUE) attacks and spectrum sensing data falsification (SSDF) attacks with multi-node collaborative defense strategies. We also propose two algorithms, the EA algorithm and the TA algorithm, to defend against eavesdropping attacks and tampering attacks in mobile cognitive radio networks (CRNs). We give security analyses of these algorithms in both the spatial and temporal domains and prove an upper bound on the entropy loss in theory. We present a simulation result based on a MIMO-OFDM communication system, which shows that the channel response characteristics received by legitimate users tend to be consistent and that phase characteristics are much more robust for key generation in mobile CRNs. In addition, NIST statistical tests show that the key generated by our proposed approach is secure and reliable.
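
The phase-based key extraction the abstract highlights can be sketched by quantizing reciprocal channel phases into Gray-coded bits, so that quantization noise near sector boundaries costs at most one bit per symbol. This shows only the quantization step; the paper's scheme adds fuzzy extraction and the PUE/SSDF defenses and EA/TA algorithms, and every parameter below (64 subcarriers, 4 levels, noise level) is hypothetical.

```python
# PHY-layer key sketch: two parties quantize noisy, reciprocal channel phases.
import numpy as np

GRAY = ("00", "01", "11", "10")   # Gray code: adjacent sectors differ in one bit

def phase_to_bits(phases, levels=4):
    sector = (np.mod(phases, 2 * np.pi) // (2 * np.pi / levels)).astype(int)
    return "".join(GRAY[s] for s in sector)

rng = np.random.default_rng(7)
channel_phase = rng.uniform(0, 2 * np.pi, 64)            # shared reciprocal channel
alice_key = phase_to_bits(channel_phase + rng.normal(0, 0.01, 64))
bob_key   = phase_to_bits(channel_phase + rng.normal(0, 0.01, 64))
mismatch = sum(a != b for a, b in zip(alice_key, bob_key)) / len(alice_key)
```

The small residual key disagreement from boundary flips is what the fuzzy (error-tolerant) reconciliation stage of such schemes is designed to absorb.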

Representing variables in the latent space (분석변수들의 잠재공간 표현)

  • Huh, Myung-Hoe
    • The Korean Journal of Applied Statistics / v.30 no.4 / pp.555-566 / 2017
  • For multivariate datasets with a large number of variables, classical dimension reduction methods such as principal component analysis may not be effective for data visualization. The underlying reason is that the dimensionality of the space of variables is often larger than two or three, while visualization is most effective for the human eye in two or three dimensions. This paper proposes a working procedure that first partitions the variables into several "latent" clusters, then explores the individual data subsets, and finally integrates the findings. We use the R package "ClustOfVar" for partitioning variables around latent dimensions and the principal component biplot method to visualize within-cluster patterns. Additionally, we use the technique of embedding supplementary variables to examine the relationships between within-cluster variables and outside variables.
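
A rough analogue of the variable-partitioning step can be sketched by hierarchically clustering variables with a correlation-based dissimilarity. This mimics the idea of grouping variables around latent dimensions but is not the ClustOfVar algorithm itself (which clusters around synthetic principal-component variables), and the data are simulated for illustration.

```python
# Partition variables into clusters using 1 - |correlation| as a dissimilarity.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(3)
z = rng.normal(size=(200, 2))                                  # two latent dimensions
X = np.hstack([z[:, [0]] + 0.3 * rng.normal(size=(200, 5)),    # vars 0-4 follow z1
               z[:, [1]] + 0.3 * rng.normal(size=(200, 5))])   # vars 5-9 follow z2

D = 1.0 - np.abs(np.corrcoef(X, rowvar=False))   # dissimilarity between variables
np.fill_diagonal(D, 0.0)
Z = linkage(squareform(D, checks=False), method="average")
labels = fcluster(Z, t=2, criterion="maxclust")  # cut into two variable clusters
```

Each recovered cluster of variables can then be explored separately, e.g. with a principal component biplot per cluster, as the proposed procedure does.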