• Title/Summary/Keyword: sample of random size


Tail Probability Approximations for the Ratio of the Independent Random Variables

  • Cho, Dae-Hyeon
    • Journal of the Korean Data and Information Science Society
    • /
    • v.7 no.2
    • /
    • pp.189-201
    • /
    • 1996
  • In this paper, we study saddlepoint approximations for the ratio of independent random variables. In Section 2, we derive the saddlepoint approximation to the density. In Section 3, we derive two approximation formulae for the tail probability, one following Daniels' (1987) method and the other following Lugannani and Rice's (1980). In Section 4, we present some numerical examples which show that the errors are small even for small sample sizes.
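
For reference, the generic single-variable form of the Lugannani-Rice tail approximation that the second formula above follows can be written as below; the ratio-specific version derived in the paper differs in the details of the cumulant generating function used.

```latex
P(X \ge x) \;\approx\; 1 - \Phi(\hat{w}) + \phi(\hat{w})\left(\frac{1}{\hat{u}} - \frac{1}{\hat{w}}\right),
\qquad
\hat{w} = \operatorname{sgn}(\hat{s})\,\sqrt{2\{\hat{s}x - K(\hat{s})\}},
\qquad
\hat{u} = \hat{s}\,\sqrt{K''(\hat{s})},
```

where $K$ is the cumulant generating function, $\hat{s}$ is the saddlepoint solving $K'(\hat{s}) = x$, and $\Phi$ and $\phi$ are the standard normal distribution and density functions.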


Determination of sample size to serological surveillance plan for pullorum disease and fowl typhoid (추백리-가금티푸스의 혈청학적 모니터링 계획수립을 위한 표본크기)

  • Pak, Son-Il;Park, Choi-Kyu
    • Korean Journal of Veterinary Research
    • /
    • v.48 no.4
    • /
    • pp.457-462
    • /
    • 2008
  • The objective of this study was to determine appropriate sample sizes, under simulated assumptions about diagnostic test characteristics and true prevalences, for designing a serological surveillance plan for pullorum disease and fowl typhoid in domestic poultry production. The number of flocks and the total number of chickens to be sampled were obtained so as to provide 95% confidence of detecting at least one infected flock, taking imperfect diagnostic tests into account. Due to a lack of reliable data, the within-infected-flock prevalence (WFP) was assumed to follow a distribution with minimum 1%, most likely 5%, and maximum 9%, and the true flock prevalence was set to 0.1%, 0.5%, and 1% in turn. Sensitivity was modeled using the Pert distribution: minimum 75%, most likely 80%, and maximum 90% for the plate agglutination test, and 80%, 85%, and 90% for the ELISA test. Similarly, specificity was modeled as 85%, 90%, and 95% for the plate agglutination test and 90%, 95%, and 99% for the ELISA test. In accordance with the current regulation, flock-level test characteristics were calculated assuming that 30 samples are taken per flock. The model showed that the current plan of 112,000 annual tests, which is based on random selection of flocks, is far larger than the sample size estimated in this study. The sample size was further reduced with increased sensitivity and specificity of the test and decreased WFP. The effect of increasing the number of samples per flock on the total sample size, and the optimal combination of test sensitivity and specificity for the purpose of the surveillance, are discussed with regard to cost.
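
As a rough illustration of the detection-type sample-size calculation described above, the following Python sketch combines the standard freedom-from-disease formula with a Pert-distributed test sensitivity. It is a minimal sketch under simplifying assumptions (flock-level sensitivity used directly, ignoring the 30-birds-per-flock step); the function names and random seed are illustrative, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)

def pert(minimum, mode, maximum, size, lam=4.0):
    """Draw from a Pert distribution via its beta representation."""
    a = 1 + lam * (mode - minimum) / (maximum - minimum)
    b = 1 + lam * (maximum - mode) / (maximum - minimum)
    return minimum + (maximum - minimum) * rng.beta(a, b, size)

def flocks_to_sample(confidence, flock_prevalence, flock_sensitivity):
    """Flocks to test so that at least one infected flock is detected
    with the given confidence: n >= ln(1-conf) / ln(1 - prev * Se)."""
    return np.ceil(np.log(1 - confidence) /
                   np.log(1 - flock_prevalence * flock_sensitivity))

# Illustrative run: Pert(75%, 80%, 90%) sensitivity (plate agglutination test)
# and a true flock prevalence of 0.5%, as quoted in the abstract above.
sensitivity = pert(0.75, 0.80, 0.90, size=10_000)
n_flocks = flocks_to_sample(confidence=0.95, flock_prevalence=0.005,
                            flock_sensitivity=sensitivity)
print(f"median number of flocks to sample: {np.median(n_flocks):.0f}")
```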

A Study on the Rainfall Generation (In Two-dimensional Random Storm Fields) (강우의 모의발생에 관한 연구 (2차원 무작위 호우장에서))

  • Lee, Jea Hyoung;Soun, Jung Ho;Hwang, Man Ha
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.11 no.1
    • /
    • pp.109-116
    • /
    • 1991
  • In recent years, hydrologists have been interested in the radial spectrum and its estimation in two-dimensional storm fields in order to construct simulation models of rainfall. This paper deals with the problem of transforming the spectrum or isotropic covariance function into a two-dimensional random field. The extended turning-bands method for the generation of random fields is applied to the problem, using G. Matheron's line-generation method for one-dimensional stochastic processes. Examples of this generation are chosen from the random components of the multidimensional rainfall model suggested by Bras and are given with a comparison between theoretical and sample statistics. In these numerical experiments it is observed that the first- and second-order statistics can be conserved. An example of moving-storm simulation with the Bras model is also presented, with appropriate parameters and sample size.
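
The following numpy sketch illustrates the turning-bands idea referred to above: a two-dimensional field is built by projecting grid points onto randomly oriented lines and summing one-dimensional processes simulated along those lines. The moving-average line process, grid size, and correlation length are illustrative stand-ins; they are not the spectrum-matched process of the Bras rainfall model.

```python
import numpy as np

rng = np.random.default_rng(1)

def turning_bands_2d(nx, ny, dx, n_lines=64, corr_len=5.0):
    """Approximate an isotropic 2-D Gaussian random field by averaging
    1-D processes simulated along randomly oriented lines."""
    xs, ys = np.meshgrid(np.arange(nx) * dx, np.arange(ny) * dx, indexing="ij")
    field = np.zeros((nx, ny))
    for _ in range(n_lines):
        theta = rng.uniform(0.0, np.pi)                  # line orientation
        proj = xs * np.cos(theta) + ys * np.sin(theta)   # project grid onto the line
        # 1-D stationary process along the line: a simple moving average of
        # white noise (illustrative covariance, not the paper's spectrum).
        t = np.arange(proj.min() - 3 * corr_len, proj.max() + 3 * corr_len, dx)
        window = np.ones(int(corr_len / dx)) / np.sqrt(corr_len / dx)
        z = np.convolve(rng.standard_normal(t.size), window, mode="same")
        field += np.interp(proj, t, z)                   # line value at each grid point
    return field / np.sqrt(n_lines)

sample_field = turning_bands_2d(nx=64, ny=64, dx=1.0)
print(sample_field.mean(), sample_field.std())           # first- and second-order checks
```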


Reduction in Sample Size for Efficient Monte Carlo Localization (효율적인 몬테카를로 위치추정을 위한 샘플 수의 감소)

  • Yang Ju-Ho;Song Jae-Bok
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.12 no.5
    • /
    • pp.450-456
    • /
    • 2006
  • Monte Carlo localization (MCL) is known to be one of the most reliable methods for pose estimation of a mobile robot. Although MCL is capable of estimating the robot pose even for a completely unknown initial pose in a known environment, it takes considerable time to give an initial pose estimate because the number of random samples is usually very large, especially for a large-scale environment. For practical implementation of MCL, therefore, a reduction in sample size is desirable. This paper presents a novel approach to reducing the number of samples used in the particle filter for efficient implementation of MCL. To this end, the topological information generated through the thinning technique, which is commonly used in image processing, is employed. The global topological map is first created from the given grid map for the environment. The robot scans the local environment using a laser rangefinder and generates a local topological map. The robot then navigates only along this local topological edge, which is likely to be similar to the one obtained off-line from the given grid map. Random samples are drawn near the topological edge instead of being taken with a uniform distribution all over the environment, since the robot traverses along the edge. Experimental results using the proposed method show that the number of samples can be reduced considerably, and the time required for robot pose estimation can also be substantially decreased without adverse effects on the performance of MCL.
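
A minimal sketch of the sampling step described above is given below: initial particle poses are drawn near the cells of a thinned (topological-edge) map instead of uniformly over the whole environment. The cell size, jitter scale, and placeholder skeleton are hypothetical; extracting the skeleton by thinning the occupancy grid is assumed to have been done beforehand.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_particles_near_edges(edge_cells, n_particles, cell_size=0.05, sigma=0.1):
    """Draw (x, y, theta) particle poses near topological-edge cells.

    edge_cells: (K, 2) array of (row, col) indices of thinned free-space cells.
    """
    idx = rng.integers(0, len(edge_cells), size=n_particles)  # pick edge cells at random
    xy = edge_cells[idx] * cell_size                          # cell indices -> metres
    xy = xy + rng.normal(scale=sigma, size=(n_particles, 2))  # jitter around the edge
    theta = rng.uniform(-np.pi, np.pi, size=n_particles)      # heading is unknown
    return np.column_stack([xy, theta])

# Placeholder skeleton: in practice these cells come from thinning the grid map.
edge_cells = np.argwhere(np.ones((10, 10), dtype=bool))
particles = sample_particles_near_edges(edge_cells, n_particles=500)
print(particles.shape)  # (500, 3)
```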

Reduction in Sample Size Using Topological Information for Monte Carlo Localization

  • Yang, Ju-Ho;Song, Jae-Bok;Chung, Woo-Jin
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • 2005.06a
    • /
    • pp.901-905
    • /
    • 2005
  • Monte Carlo localization (MCL) is known to be one of the most reliable methods for pose estimation of a mobile robot. Much research has been done so far to improve the performance of MCL. Although MCL is capable of estimating the robot pose even for a completely unknown initial pose in a known environment, it takes considerable time to give an initial estimate because the number of random samples is usually very large, especially for a large-scale environment. For practical implementation of MCL, therefore, a reduction in sample size is desirable. This paper presents a novel approach to reducing the number of samples used in the particle filter for efficient implementation of MCL. To this end, topological information generated off-line using a thinning method, which is commonly used in image processing, is employed. The topological map is first created from the given grid map for the environment. The robot scans the local environment using a laser rangefinder and generates a local topological map. The robot then navigates only along this local topological edge, which is likely to be the same as the one obtained off-line from the given grid map. Random samples are drawn near the off-line topological edge instead of being taken with a uniform distribution, since the robot traverses along the edge. In this way, the sample size required for MCL can be drastically reduced, leading to a reduced initial operation time. Experimental results using the proposed method show that the number of samples can be reduced considerably, and the time required for robot pose estimation can also be substantially decreased.


Generative Adversarial Networks for single image with high quality image

  • Zhao, Liquan;Zhang, Yupeng
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.12
    • /
    • pp.4326-4344
    • /
    • 2021
  • SinGAN is a generative adversarial network that can be trained on a single natural image. It has a poor ability to learn global features from the natural image, and it loses much local detail information when it generates image samples of arbitrary size. To solve this problem, a non-linear function is first proposed to control the downsampling ratio, that is, the ratio between the size of the current image and the size of the next downsampled image, so that the ratio increases with the number of downsampling steps. This gives the low-resolution images obtained by downsampling a higher proportion among all downsampled images. The low-resolution images usually contain much global information, so this helps the model learn more global feature information from the downsampled images. Second, an attention mechanism is introduced into the generative network to increase the weight of effective image information, which makes the network learn more local details. In addition, to make the output image more natural, a TV-loss term is introduced into the loss function of SinGAN to reduce the differences between adjacent pixels and the smearing in the output image. A large number of experimental results show that the proposed model performs better than other methods in generating random samples of fixed and arbitrary size, image harmonization, and editing.
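
As an illustration of the TV-loss term the abstract adds to SinGAN's loss, the sketch below implements a generic anisotropic total-variation penalty in PyTorch. The (N, C, H, W) tensor layout, the absolute-difference form, and the 0.1 weight shown in the comment are assumptions, since the abstract does not give the exact formulation.

```python
import torch

def tv_loss(img: torch.Tensor) -> torch.Tensor:
    """Anisotropic total-variation penalty for a batch of images (N, C, H, W).
    Penalises differences between adjacent pixels, discouraging smearing."""
    dh = (img[:, :, 1:, :] - img[:, :, :-1, :]).abs().mean()
    dw = (img[:, :, :, 1:] - img[:, :, :, :-1]).abs().mean()
    return dh + dw

# Example: add the TV term to an existing generator loss with a small weight,
# e.g. total_loss = adv_loss + rec_loss + 0.1 * tv_loss(fake).
fake = torch.rand(1, 3, 64, 64, requires_grad=True)
tv_loss(fake).backward()
```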

Parameter Estimation in Debris Flow Deposition Model Using Pseudo Sample Neural Network (의사 샘플 신경망을 이용한 토석류 퇴적 모델의 파라미터 추정)

  • Heo, Gyeongyong;Lee, Chang-Woo;Park, Choong-Shik
    • Journal of the Korea Society of Computer and Information
    • /
    • v.17 no.11
    • /
    • pp.11-18
    • /
    • 2012
  • The debris flow deposition model predicts the areas affected by debris flow, and a random walk model (RWM) was used to build it. Although the model has proved effective in predicting affected areas, it has several free parameters that are decided experimentally. There are several well-known methods for estimating parameters; however, they cannot be applied directly to the debris flow problem due to the small size of the training data. In this paper, a modified neural network, called the pseudo sample neural network (PSNN), is proposed to overcome the sample size problem. In the training phase, PSNN uses pseudo samples, which are generated from the existing samples. The pseudo samples smooth the solution space and reduce the probability of falling into a local optimum. As a result, PSNN can estimate parameters more robustly than traditional neural networks do. All of these points are demonstrated through experiments using artificial and real data sets.
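
The abstract does not state how the pseudo samples are generated, so the sketch below shows one simple illustrative rule: interpolate random pairs of the existing samples and add small Gaussian noise. The function name, noise level, and toy data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

def make_pseudo_samples(X, y, n_new, noise=0.01):
    """Generate pseudo training samples by interpolating random pairs of the
    existing samples and adding small Gaussian noise (illustrative rule only)."""
    i = rng.integers(0, len(X), size=n_new)
    j = rng.integers(0, len(X), size=n_new)
    w = rng.uniform(0.0, 1.0, size=(n_new, 1))
    X_new = w * X[i] + (1 - w) * X[j] + rng.normal(scale=noise, size=(n_new, X.shape[1]))
    y_new = w[:, 0] * y[i] + (1 - w[:, 0]) * y[j]
    return X_new, y_new

# Tiny example: 8 real samples expanded with 32 pseudo samples before training.
X, y = rng.random((8, 4)), rng.random(8)
X_pseudo, y_pseudo = make_pseudo_samples(X, y, n_new=32)
X_train = np.vstack([X, X_pseudo])
y_train = np.concatenate([y, y_pseudo])
print(X_train.shape, y_train.shape)  # (40, 4) (40,)
```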

Effective Sample Sizes for the Test of Mean Differences Based on Homogeneity Test

  • Heo, Sunyeong
    • Journal of Integrative Natural Science
    • /
    • v.12 no.3
    • /
    • pp.91-99
    • /
    • 2019
  • Many researchers in various fields use the two-sample t-test to confirm their treatment effects. The two-sample t-test is generally used for small samples, and it assumes that two independent random samples are selected from normal populations and that the population variances are unknown. Researchers often conduct the F-test for equality of variances before testing the treatment effects, and the test statistic or confidence interval for the two-sample t-test has two forms according to whether the variances are equal or not. Researchers using the two-sample t-test often want to know how large the sample sizes need to be to get reliable test results. This research gives them some guidelines on sample sizes through simulation studies. The simulations were run for normal populations with different ratios of the two variances and different sample sizes (≤ 30). The simulation results are as follows. First, if one has no idea whether the variances are equal but can assume the difference is moderate, it is safe to use a sample size of at least 20 in terms of the nominal level of significance. Second, the power of the F-test for equality of variances is very low when the sample sizes are small (< 30), even when the ratio of the two variances is equal to 2. Third, sample sizes of at least 10 for the two-sample t-test are recommended in terms of the nominal level of significance and the error limit.
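
A minimal sketch of the kind of simulation described above: the empirical type-I error rate of the two-sample t-test when the two population variances differ, for several small sample sizes. The variance ratio of 2, the 10,000 replications, and the 5% significance level are illustrative choices, not the paper's exact design.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def empirical_alpha(n, var_ratio, reps=10_000, equal_var=True, alpha=0.05):
    """Empirical type-I error of the two-sample t-test when the true means are
    equal but the second population variance is var_ratio times the first."""
    rejections = 0
    for _ in range(reps):
        x = rng.normal(0.0, 1.0, size=n)
        y = rng.normal(0.0, np.sqrt(var_ratio), size=n)
        _, p_value = stats.ttest_ind(x, y, equal_var=equal_var)
        rejections += p_value < alpha
    return rejections / reps

for n in (10, 20, 30):
    print(n, empirical_alpha(n, var_ratio=2.0))
```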

Development of a Sampling Strategy and Sample Size Calculation to Estimate the Distribution of Mammographic Breast Density in Korean Women

  • Jun, Jae Kwan;Kim, Mi Jin;Choi, Kui Son;Suh, Mina;Jung, Kyu-Won
    • Asian Pacific Journal of Cancer Prevention
    • /
    • v.13 no.9
    • /
    • pp.4661-4664
    • /
    • 2012
  • Mammographic breast density is a known risk factor for breast cancer. To conduct a survey estimating the distribution of mammographic breast density in Korean women, appropriate sampling strategies for a representative and efficient sampling design were evaluated through simulation. Using the target population of 1,340,362 women from the National Cancer Screening Programme (NCSP) for breast cancer in 2009, we verified the distribution estimate by repeating stratified random sampling 1,000 times. According to the simulation results, a sampling design stratifying the nation into three groups (metropolitan, urban, and rural), with a total sample size of 4,000, estimates the distribution of breast density in Korean women within a tolerance of 0.01%. Based on these results, a nationwide survey estimating the distribution of mammographic breast density among Korean women can be conducted efficiently.
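
A minimal sketch of the repeated stratified-sampling simulation described above, with proportional allocation of a total sample of 4,000 across three strata. The per-stratum sizes and the synthetic "dense breast" indicators are placeholders (only the overall total of 1,340,362 follows the abstract); they are not the NCSP data.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic frame standing in for the 2009 NCSP target population:
# a dense-breast indicator per woman in three strata (placeholder split/rates).
strata = {"metropolitan": rng.random(600_000) < 0.55,
          "urban":        rng.random(500_000) < 0.50,
          "rural":        rng.random(240_362) < 0.45}
population = sum(len(frame) for frame in strata.values())  # 1,340,362

def stratified_estimate(n_total=4_000):
    """One stratified random sample with proportional allocation, returning the
    estimated overall proportion of dense breasts."""
    estimate = 0.0
    for frame in strata.values():
        n_h = round(n_total * len(frame) / population)
        sample = rng.choice(frame, size=n_h, replace=False)
        estimate += (len(frame) / population) * sample.mean()
    return estimate

# The paper repeats the simulation 1,000 times; 200 repetitions shown here.
estimates = [stratified_estimate() for _ in range(200)]
print(np.mean(estimates), np.std(estimates))
```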

Analysis of the Process Capability Index According to the Sample Size of Multi-Measurement (다측정 표본크기에 대한 공정능력지수 분석)

  • Lee, Do-Kyung
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.42 no.1
    • /
    • pp.151-157
    • /
    • 2019
  • This study concerns the process capability index (PCI). We introduce several indices, including the index $C_{PR}$, and present the characteristics of $C_{PR}$ as well as its validity. The difference between $C_{PR}$ and the other indices is the way the standard deviation is estimated: most indices use the sample standard deviation, while $C_{PR}$ uses the range R. The sample standard deviation is generally a better estimator than the range R, but in the case of the panel process, $C_{PR}$ is more consistent than the other indices with respect to the non-conforming ratio, an important quantity in quality control. The reason why $C_{PR}$, which uses the range, has better consistency is explained by introducing the concept of the 'flatness ratio'. At least one million cells are present in one panel, so we cannot inspect all of them. In estimating the PCI, it is necessary to consider the inspection cost together with consistency. Although a smaller sample size is preferred in terms of inspection cost, a small sample size makes the PCI unreliable. There is a trade-off between the inspection cost and the accuracy of the PCI, so we should obtain as large a sample size as possible under the allowed inspection cost. For $C_{PR}$ to be used throughout the industry, it is necessary to analyze its characteristics. Because $C_{PR}$ is an index built on subgroups, the analysis should be done in terms of the sample size of the subgroup. We present numerical analysis results for $C_{PR}$ using data from a random number generation method. We also show the difference between $C_{PR}$, which uses the range, and $C_P$, a representative index that uses the sample standard deviation. Regression analysis was used for the numerical analysis of the sample data; residual analysis and an equal-variance analysis were also conducted.
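
To make the contrast between the two estimators concrete, the sketch below computes a capability index once with the overall sample standard deviation (the usual $C_P$) and once with sigma estimated from the mean subgroup range divided by the d2 constant. The subgroup size of 5, the specification limits, and the synthetic data are illustrative, and the range-based index here is a generic stand-in rather than the paper's exact $C_{PR}$ definition.

```python
import numpy as np

rng = np.random.default_rng(6)

D2 = {2: 1.128, 3: 1.693, 4: 2.059, 5: 2.326}  # standard d2 control-chart constants

def capability_indices(data, usl, lsl, subgroup=5):
    """Capability index (USL - LSL) / (6 * sigma) with sigma estimated two ways:
    overall sample standard deviation vs. mean subgroup range / d2."""
    sigma_s = data.std(ddof=1)
    groups = data[: len(data) // subgroup * subgroup].reshape(-1, subgroup)
    sigma_r = (groups.max(axis=1) - groups.min(axis=1)).mean() / D2[subgroup]
    spread = usl - lsl
    return spread / (6 * sigma_s), spread / (6 * sigma_r)

# Illustrative in-control process: 100 subgroups of 5 measurements each.
data = rng.normal(loc=100.0, scale=2.0, size=500)
cp_std, cp_range = capability_indices(data, usl=109.0, lsl=91.0)
print(f"C_P (sample std) = {cp_std:.3f}, range-based index = {cp_range:.3f}")
```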