• Title/Summary/Keyword: Statistical Analyses

Search results: 2,233 (processing time: 0.025 seconds)

Secure Steganographic Algorithm against Statistical analyses (통계분석에 강인한 심층 암호)

  • 유정재;오승철;이광수;이상진;박일환
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.14 no.1
    • /
    • pp.15-23
    • /
    • 2004
  • Westfeld effectively analyzed sequential LSB embedding steganography through the $\chi^2$ statistical test, which measures the frequencies of PoVs (pairs of values). Fridrich also proposed another statistical analysis, so-called RS steganalysis, by which the embedded message rate can be estimated. This method is based on the partition of pixels into three groups: Regular, Singular, and Unusable. In this paper, we propose a new steganographic scheme that preserves both of these statistics. The proposed scheme embeds the secret message in the innocent image by randomly adding one to or subtracting one from each pixel value, then adjusts the statistical measures to equal those of the original image.
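The PoV chi-square idea in the abstract above can be sketched in a few lines: under full sequential LSB embedding, the two histogram bins of each pair (2k, 2k+1) equalize, so the chi-square statistic collapses. A minimal illustration on synthetic data follows; the `cover` array is an artificial pixel sample whose even values are deliberately over-represented, standing in for the uneven pair frequencies of a real image, not any actual image from the paper:

```python
import random

def pov_chi2(pixels):
    # Westfeld's chi-square statistic over pairs of values (2k, 2k+1):
    # full LSB embedding drives each pair's two bins toward equality.
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    stat = 0.0
    for k in range(128):
        expected = (hist[2 * k] + hist[2 * k + 1]) / 2.0
        if expected > 0:
            stat += (hist[2 * k] - expected) ** 2 / expected
    return stat

random.seed(0)
cover = []
for _ in range(20000):
    v = min(255, max(0, int(random.gauss(128, 20))))
    if random.random() < 0.7:   # exaggerate the even/odd imbalance
        v &= ~1
    cover.append(v)

# Sequential LSB embedding of a full-length random message.
stego = [(p & ~1) | random.getrandbits(1) for p in cover]

chi2_cover = pov_chi2(cover)
chi2_stego = pov_chi2(stego)
print(chi2_cover, chi2_stego)  # embedding flattens the PoV statistic
```

The proposed scheme's point is that ±1 embedding, followed by the adjustment step, keeps this statistic (and the RS groups) at their cover-image values, so the stego image no longer stands out.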

Novel approach to predicting the release probability when applying the MARSSIM statistical test to a survey unit with a specific residual radioactivity distribution based on Monte Carlo simulation

  • Chun, Ga Hyun;Cheong, Jae Hak
    • Nuclear Engineering and Technology
    • /
    • v.54 no.5
    • /
    • pp.1606-1615
    • /
    • 2022
  • To investigate whether the MARSSIM nonparametric test has sufficient statistical power when a site has a specific contamination distribution, before a final status survey (FSS) is conducted, a novel approach was proposed to predict the release probability of the site. Five distributions were assumed: lognormal, normal, maximum extreme value, minimum extreme value, and uniform. Hypothetical radioactivity populations were generated for each distribution, and Sign tests were performed to predict the release probabilities after extracting samples using Monte Carlo simulations. The designed Type I error (0.01, 0.05, and 0.1) was always satisfied for all distributions, while the designed Type II error (0.01, 0.05, and 0.1) was not always met for the uniform, maximum extreme value, and lognormal distributions. Detailed analyses of the lognormal and normal distributions, which are often found for contaminants in actual environmental or soil samples, showed that greater statistical power was obtained from survey units with a normal distribution than with a lognormal distribution. This study is expected to contribute to achieving the designed decision errors, by predicting whether a survey unit will pass the statistical test before the FSS is undertaken according to MARSSIM, once the contamination distribution of the survey unit is identified.
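As a rough illustration of the approach described above (not the paper's exact procedure), a release probability can be estimated by repeatedly drawing samples from an assumed residual-activity distribution and applying the one-sample Sign test against a guideline level. The DCGL value, sample size, and lognormal parameters below are illustrative assumptions:

```python
import math
import random

def sign_test_critical_value(n, alpha):
    # Smallest k with P[Binomial(n, 1/2) >= k] <= alpha: under H0
    # (median at the DCGL) each measurement falls below it with prob 1/2.
    for k in range(n + 1):
        tail = sum(math.comb(n, i) for i in range(k, n + 1)) / 2 ** n
        if tail <= alpha:
            return k
    return n + 1

def release_probability(draw_sample, dcgl, n, alpha, trials=2000):
    # Monte Carlo estimate of the probability that a survey unit
    # passes the Sign test (i.e., is released).
    k_crit = sign_test_critical_value(n, alpha)
    passes = 0
    for _ in range(trials):
        below = sum(1 for _ in range(n) if draw_sample() < dcgl)
        if below >= k_crit:
            passes += 1
    return passes / trials

random.seed(1)
dcgl = 1.0  # hypothetical derived concentration guideline level
# Lognormal residual radioactivity well below the DCGL: should release.
p_clean = release_probability(
    lambda: random.lognormvariate(math.log(0.3), 0.5), dcgl, n=20, alpha=0.05)
# Population with median ~1.6x the DCGL: should almost never release.
p_dirty = release_probability(
    lambda: random.lognormvariate(math.log(1.6), 0.5), dcgl, n=20, alpha=0.05)
print(p_clean, p_dirty)
```

Swapping the lambda for the other four assumed distributions reproduces the kind of distribution-by-distribution power comparison the study performs.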

Tests for homogeneity of proportions in clustered binomial data

  • Jeong, Kwang Mo
    • Communications for Statistical Applications and Methods
    • /
    • v.23 no.5
    • /
    • pp.433-444
    • /
    • 2016
  • Binary responses observed within a cluster (such as laboratory rats from the same litter) are usually correlated with each other. In clustered binomial counts the independence assumption is violated, and we encounter extra variation. In the presence of extra variation, ordinary statistical analyses of binomial data are inappropriate. In testing the homogeneity of proportions between several treatment groups, the classical Pearson chi-squared test has a severe flaw in the control of Type I error rates. We focus on modifying the chi-squared statistic by incorporating variance inflation factors, and suggest a method to adjust the data in terms of a dispersion estimate based on a quasi-likelihood model. We explain the testing procedure with an illustrative example and compare the performance of the modified chi-squared test with competing statistics through a Monte Carlo study.
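A toy version of the variance-inflation adjustment can be sketched as follows: compute the classical Pearson chi-square on the group totals, estimate the dispersion from cluster-level residuals in the spirit of a quasi-likelihood model, and divide. The data layout and the dispersion estimator here are simplified assumptions, not the paper's exact procedure:

```python
import random

def homogeneity_test(groups):
    # groups: one list per treatment group, each a list of
    # (successes, trials) pairs, one pair per cluster.
    y_tot = [sum(y for y, m in g) for g in groups]
    m_tot = [sum(m for y, m in g) for g in groups]
    p_pool = sum(y_tot) / sum(m_tot)
    # Classical Pearson chi-square on the group totals.
    chi2 = sum((y - m * p_pool) ** 2 / (m * p_pool * (1 - p_pool))
               for y, m in zip(y_tot, m_tot))
    # Dispersion (variance inflation) from cluster residuals around
    # each group's own proportion, quasi-likelihood style.
    resid, n_clusters = 0.0, 0
    for g, y_g, m_g in zip(groups, y_tot, m_tot):
        p_g = y_g / m_g
        for y, m in g:
            if 0 < p_g < 1:
                resid += (y - m * p_g) ** 2 / (m * p_g * (1 - p_g))
                n_clusters += 1
    phi = resid / (n_clusters - len(groups))
    return chi2, chi2 / phi, phi

random.seed(2)
def draw_cluster(m=20):
    # Beta-binomial cluster: a cluster-specific success probability
    # induces within-cluster correlation (overdispersion).
    p = random.betavariate(2, 2)
    y = sum(1 for _ in range(m) if random.random() < p)
    return (y, m)

groups = [[draw_cluster() for _ in range(15)] for _ in range(2)]
chi2, chi2_adj, phi = homogeneity_test(groups)
print(round(chi2, 2), round(chi2_adj, 2), round(phi, 2))
```

With genuinely clustered data the estimated inflation factor exceeds one, so the adjusted statistic is smaller than the naive one, which is exactly how the modification reins in the inflated Type I error.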

Dual Generalized Maximum Entropy Estimation for Panel Data Regression Models

  • Lee, Jaejun;Cheon, Sooyoung
    • Communications for Statistical Applications and Methods
    • /
    • v.21 no.5
    • /
    • pp.395-409
    • /
    • 2014
  • Limited, partial, or incomplete data give rise to what is known as an ill-posed problem. If data with ill-posed problems are analyzed by traditional statistical methods, the results are unreliable and lead to erroneous interpretations. To overcome these problems, we propose a dual generalized maximum entropy (dual GME) estimator for panel data regression models based on an unconstrained dual Lagrange multiplier method. Monte Carlo simulations for panel data regression models with exogeneity, endogeneity, and/or collinearity show that the dual GME estimator outperforms several other estimators, such as least squares and instrumental-variable estimators, even in small samples. We believe that our dual GME procedure developed for the panel data regression framework will be useful for analyzing ill-posed and endogenous data sets.

STATISTICAL VALIDATION OF SYMMETRY IN ESTIMATION OF GROUNDWATER CONTAMINANT CONCENTRATIONS

  • Cho, Choon-Kyung;Kang, Sungkwon
    • Journal of applied mathematics & informatics
    • /
    • v.13 no.1_2
    • /
    • pp.335-351
    • /
    • 2003
  • The spatial distribution of groundwater contaminant concentration has special characteristics, such as an approximately symmetric profile (for example, in the direction transverse to the groundwater flow) and a certain ratio between directional propagation distances. To obtain a geophysically appropriate semivariogram, which is a key factor in estimating groundwater contaminant concentrations at desired locations, these special characteristics should be considered. In this paper, a method for finding appropriate symmetry axes is introduced. Statistical analyses of the choices of symmetry axes and mathematical models for semivariograms are performed. After implementing symmetry, the corresponding semivariograms, kriging variances, and final estimates show significant improvement compared with those obtained by conventional approaches, which usually do not account for symmetry.
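The semivariogram mentioned above is the workhorse of kriging: it measures how dissimilar two observations become as they move apart. A minimal empirical version on a synthetic 1-D transect can be sketched as follows; the moving-average "concentration" series is an illustrative stand-in for real contaminant data, not anything from the study:

```python
import random

def empirical_semivariogram(z, max_lag):
    # gamma(h) = (1 / 2N(h)) * sum over pairs lag h apart of (z_i - z_{i+h})^2
    gammas = []
    for h in range(1, max_lag + 1):
        sq = [(z[i] - z[i + h]) ** 2 for i in range(len(z) - h)]
        gammas.append(sum(sq) / (2 * len(sq)))
    return gammas

random.seed(3)
noise = [random.gauss(0, 1) for _ in range(1005)]
# Spatially correlated 1-D "concentration" transect: a moving average
# of white noise, so nearby points take similar values.
z = [sum(noise[i:i + 5]) / 5 for i in range(1000)]
gammas = empirical_semivariogram(z, 10)
print([round(g, 3) for g in gammas])  # rises with lag, then levels off
```

The paper's contribution sits one step earlier: choosing symmetry axes so that pairs entering each lag bin respect the plume's symmetric geometry before a model semivariogram is fitted.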

Wave Transmission Analysis of Semi-infinite Mindlin Plates Coupled at an Arbitrary Angle (임의의 각으로 연성된 반무한 Mindlin 판의 파동전달해석)

  • Park, Young-Ho
    • Transactions of the Korean Society for Noise and Vibration Engineering
    • /
    • v.24 no.12
    • /
    • pp.999-1006
    • /
    • 2014
  • Mindlin plate theory includes the shear deformation and rotatory inertia effects, which cannot be neglected as the exciting frequency increases. Statistical methods such as energy flow analysis (EFA) and statistical energy analysis (SEA) are very useful for estimating the structure-borne sound of various built-up structures. For reliable vibrational analysis of built-up structures at high frequencies, the energy transfer relationship between the out-of-plane and in-plane waves that exist in Mindlin plates coupled at arbitrary angles must be derived. In this paper, a new wave transmission analysis is successfully performed for various energy analyses of Mindlin plates coupled at arbitrary angles.

Analysis of Repeated Measures Data: Chronic Renal Allograft Dysfunction Data from the Renal Transplanted Patients (반복측정자료 분석에 대한 고찰: 신장이식 환자의 신기능 부전 연구를 중심으로)

  • 박태성;이승연;성건형;강종명;강경원
    • The Korean Journal of Applied Statistics
    • /
    • v.11 no.2
    • /
    • pp.205-219
    • /
    • 1998
  • Statistical analyses have been performed to find factors affecting chronic renal allograft dysfunction in 114 renal transplant patients. Renal function was evaluated using serum creatinine values measured every three months from 1 to 5 years after transplantation. Statistical models for repeated measures were considered to evaluate factors affecting the reciprocal of the serum creatinine values. This paper focuses on some common problems in the choice of correlation matrices that arise in the analysis of repeated measures.

A numerical study on group quantile regression models

  • Kim, Doyoen;Jung, Yoonsuh
    • Communications for Statistical Applications and Methods
    • /
    • v.26 no.4
    • /
    • pp.359-370
    • /
    • 2019
  • Grouping structures among covariates are often ignored in regression models. Recent statistical developments that consider grouping structure show clear advantages; however, reflecting grouping structure in quantile regression models has been relatively rare in the literature. The grouping structure is usually handled by employing a group penalty. In this work, we extend the idea of a group penalty to quantile regression models. The grouping structure is assumed to be known, which is commonly true; for example, the dummy variables derived from one categorical variable can be regarded as one group of covariates. We examine group quantile regression models via two real data analyses and simulation studies, which reveal the beneficial performance of group quantile regression models over their non-group counterparts when grouping structures exist among the variables.
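Two building blocks of group quantile regression can be illustrated directly: the check (pinball) loss, whose minimizer over a constant is the sample quantile, and a group-lasso-type penalty over predefined coefficient groups. This is a sketch of the ingredients under simplified assumptions, not the authors' estimation algorithm:

```python
import random

def pinball_loss(residuals, tau):
    # Check (pinball) loss: tau * r for r >= 0, (tau - 1) * r for r < 0.
    return sum(tau * r if r >= 0 else (tau - 1) * r for r in residuals)

def group_penalty(beta, groups, lam):
    # Group-lasso penalty: lam * sum over groups of the Euclidean norm
    # of that group's coefficients; it shrinks whole groups to zero.
    return lam * sum(sum(beta[j] ** 2 for j in g) ** 0.5 for g in groups)

random.seed(4)
y = [random.gauss(0, 1) for _ in range(500)]
tau = 0.5
# The constant minimizing the pinball loss is the sample tau-quantile.
candidates = [c / 10 for c in range(-20, 21)]
best = min(candidates, key=lambda c: pinball_loss([v - c for v in y], tau))
print(best)  # close to the sample median

# Group-penalized term for four coefficients split into two groups
# (e.g., dummies from one categorical variable form a single group).
beta = [0.5, -0.3, 0.0, 1.2]
obj_pen = group_penalty(beta, groups=[[0, 1], [2, 3]], lam=0.1)
print(round(obj_pen, 4))
```

A group quantile regression estimator then minimizes the sum of the pinball loss over residuals plus this penalty, so an entire block of dummies enters or leaves the model together.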

Correlation plot for a contingency table

  • Hong, Chong Sun;Oh, Tae Gyu
    • Communications for Statistical Applications and Methods
    • /
    • v.28 no.3
    • /
    • pp.295-305
    • /
    • 2021
  • Most graphical representation methods for two-dimensional contingency tables are based on frequencies, probabilities, association measures, or goodness-of-fit statistics. In this work, a method is proposed to represent the correlation coefficients for each pair of selected levels of the row and column variables. Using the correlation coefficients, one can obtain a vector matrix that represents the angle corresponding to each cell; these vectors are then represented on a unit circle with their angles. This is called a CC plot, a correlation plot for a contingency table. When the CC plot is used together with other graphical methods and statistical models, more advanced analyses, including the relationships among the cells of the row or column variables, can be derived.
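One plausible reading of a per-cell correlation is sketched below: collapse the table to a 2x2 dichotomy for each cell (row i vs. rest, column j vs. rest), take its phi coefficient, and map it to an angle via arccos for a unit-circle display. This is an illustrative reconstruction under assumed definitions, not necessarily the authors' exact construction:

```python
import math

def cell_correlations(table):
    # For each cell (i, j), collapse the contingency table to the 2x2
    # table (row i vs. rest) x (column j vs. rest) and compute the phi
    # correlation coefficient of that dichotomy.
    n = sum(sum(row) for row in table)
    corr = []
    for i, row in enumerate(table):
        r = []
        for j in range(len(row)):
            a = table[i][j]
            b = sum(table[i]) - a
            c = sum(t[j] for t in table) - a
            d = n - a - b - c
            denom = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
            r.append((a * d - b * c) / denom if denom else 0.0)
        corr.append(r)
    return corr

table = [[30, 10], [10, 30]]
corr = cell_correlations(table)
# Map each correlation to an angle for a unit-circle display.
angles = [[math.degrees(math.acos(r)) for r in row] for row in corr]
print(corr)    # [[0.5, -0.5], [-0.5, 0.5]]
print(angles)  # [[60.0, 120.0], [120.0, 60.0]]
```

Diagonal cells of this symmetric table get positive correlations (angles below 90 degrees), off-diagonal cells negative ones, so the vectors fan out on the circle according to the direction of association.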

Introduction to Mediation Analysis and Examples of Its Application to Real-world Data

  • Jung, Sun Jae
    • Journal of Preventive Medicine and Public Health
    • /
    • v.54 no.3
    • /
    • pp.166-172
    • /
    • 2021
  • Traditional epidemiological assessments, which mainly focused on evaluating the statistical association between two major components (the exposure and the outcome), have recently evolved to ascertain the in-between process that can explain the underlying causal pathway. Mediation analysis has emerged as a compelling method for disentangling the complex nature of these pathways. The statistical method of mediation analysis has evolved from simple regression analysis to causal mediation analysis, and each amendment refined the underlying mathematical theory and the required assumptions. This short guide introduces the basic statistical framework and assumptions of both traditional and modern mediation analyses, with examples conducted on real-world data.
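The product-of-coefficients decomposition that traditional mediation analysis rests on can be reproduced with plain OLS: regress the mediator on the exposure (path a), the outcome on both exposure and mediator (direct effect c' and path b), and note that in linear models the total effect equals c' + a*b exactly in-sample. The variable names and effect sizes below are simulated assumptions:

```python
import random

def ols1(y, x):
    # Slope of the simple regression of y on x (with intercept).
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

def ols2(y, x1, x2):
    # Slopes of the regression of y on [1, x1, x2], solved from the
    # 2x2 normal equations on centered variables.
    n = len(y)
    mx1, mx2, my = sum(x1) / n, sum(x2) / n, sum(y) / n
    s11 = sum((a - mx1) ** 2 for a in x1)
    s22 = sum((a - mx2) ** 2 for a in x2)
    s12 = sum((a - mx1) * (b - mx2) for a, b in zip(x1, x2))
    s1y = sum((a - mx1) * (b - my) for a, b in zip(x1, y))
    s2y = sum((a - mx2) * (b - my) for a, b in zip(x2, y))
    det = s11 * s22 - s12 ** 2
    return (s22 * s1y - s12 * s2y) / det, (s11 * s2y - s12 * s1y) / det

random.seed(5)
n = 3000
x = [random.gauss(0, 1) for _ in range(n)]             # exposure
m = [0.6 * xi + random.gauss(0, 1) for xi in x]        # mediator
y = [0.5 * mi + 0.2 * xi + random.gauss(0, 1)
     for xi, mi in zip(x, m)]                          # outcome

a = ols1(m, x)               # exposure -> mediator (path a)
c_prime, b = ols2(y, x, m)   # direct effect c' and mediator -> outcome b
total = ols1(y, x)           # total effect of exposure on outcome
indirect = a * b             # mediated (indirect) effect
print(round(indirect, 2), round(c_prime, 2), round(total, 2))
```

Causal mediation analysis generalizes this decomposition beyond linear models, at the cost of stronger identification assumptions (notably no unmeasured exposure-mediator and mediator-outcome confounding).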