• Title/Summary/Keyword: Sampling techniques

On Some Distributions Generated by Riff-Shuffle Sampling

  • Son M.S.;Hamdy H.I.
    • International Journal of Contents / v.2 no.2 / pp.17-24 / 2006
  • The work presented in this paper is divided into two parts. The first part presents finite urn problems that generate truncated negative binomial random variables. Some combinatorial identities arising from negative binomial sampling and truncated negative binomial sampling are established; these identities play important roles when dealing with these distributions and their characteristics. Other important results, including the cumulants and moments of the distributions, are given in relatively simple forms. In the second part, the distributions of the maximum of two chi-square variables and of the maximum correlated F-variables are derived within the negative binomial sampling scheme. Although multinomial theory applied to order statistics and standard transformation techniques can be used to derive these distributions, the negative binomial sampling approach provides more information and deeper insight into the relationship between the sampling vehicle and the probability distributions of these functions of chi-square variables. We also provide an algorithm to compute the percentage points of these distributions and supplement our findings with exact, simple computational methods in which no interpolation is involved.
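
The paper obtains these distributions and their percentage points analytically within the negative binomial sampling scheme. As a purely illustrative cross-check, the sketch below estimates a percentage point of the maximum of two chi-square variables by Monte Carlo; it assumes the two variables are independent, which is a simplification made only for this illustration, not part of the paper, whose main subject is the correlated case.

```python
# Hypothetical Monte Carlo sketch (not the paper's algorithm): empirical
# percentage point of max(X1, X2) for two chi-square variables, assuming
# independence purely for illustration.
import numpy as np

rng = np.random.default_rng(0)

def max_chi2_percentage_point(df1, df2, prob, n_sim=1_000_000):
    """Empirical prob-quantile of max(X1, X2), X1 ~ chi2(df1), X2 ~ chi2(df2)."""
    x1 = rng.chisquare(df1, n_sim)
    x2 = rng.chisquare(df2, n_sim)
    return np.quantile(np.maximum(x1, x2), prob)

# e.g. the 95% percentage point for two chi-square variables with 5 d.f. each
print(max_chi2_percentage_point(5, 5, 0.95))
```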

The Role of Negative Binomial Sampling In Determining the Distribution of Minimum Chi-Square

  • Hamdy H.I.;Bentil Daniel E.;Son M.S.
    • International Journal of Contents / v.3 no.1 / pp.1-8 / 2007
  • The distribution of the minimum correlated F-variable arises in many applied statistical problems, including simultaneous analysis of variance (SANOVA), tests of equality of variances, selection and ranking of populations, and reliability analysis. In this paper, a negative binomial sampling technique is employed to derive the distributions of the minimum of chi-square variables and hence the distributions of the minimum correlated F-variables. The work presented in this paper is divided into two parts. The first part develops some combinatorial identities arising from negative binomial sampling; these identities are constructed and justified because they serve an important purpose when dealing with these distributions and their characteristics. Other important results, including the cumulants and moments of these distributions, are also given in relatively simple forms. In the second part, the distribution of the minimum chi-square variable, and hence the distribution of the minimum correlated F-variables, is derived within the negative binomial sampling framework. Although multinomial theory applied to order statistics and standard transformation techniques can be used to derive these distributions, the negative binomial sampling approach provides more information about the relationship between the sampling vehicle and the probability distributions of these functions of chi-square variables. We also provide an algorithm to compute the percentage points of the distributions. The computational methods we adopt are exact, and no interpolation is involved.
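
For contrast with the paper's negative binomial derivation, the following sketch writes down the closed-form CDF of the minimum of two chi-square variables under an independence assumption (made only for this illustration; the paper treats the correlated case) and inverts it numerically to obtain a percentage point.

```python
# Minimal sketch under an independence assumption (not the paper's
# correlated setting): closed-form CDF of min(X1, X2) for chi-square
# variables, inverted by root finding to give a percentage point.
from scipy.optimize import brentq
from scipy.stats import chi2

def min_chi2_cdf(x, df1, df2):
    """P(min(X1, X2) <= x) for independent X1 ~ chi2(df1), X2 ~ chi2(df2)."""
    return 1.0 - (1.0 - chi2.cdf(x, df1)) * (1.0 - chi2.cdf(x, df2))

def min_chi2_percentage_point(prob, df1, df2):
    """Invert the CDF numerically; no table interpolation is involved."""
    return brentq(lambda x: min_chi2_cdf(x, df1, df2) - prob, 1e-9, 1e6)

print(min_chi2_percentage_point(0.95, 5, 5))
```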

Weighted Latin Hypercube Sampling to Estimate Clearance-to-stop for Probabilistic Design of Seismically Isolated Structures in Nuclear Power Plants

  • Han, Minsoo;Hong, Kee-Jeung;Cho, Sung-Gook
    • Journal of the Earthquake Engineering Society of Korea / v.22 no.2 / pp.63-75 / 2018
  • This paper proposes an extension of Latin Hypercube Sampling (LHS) in which intervals with different probability areas are used, removing the need for intervals of equal probability area. The method is called Weighted Latin Hypercube Sampling (WLHS). The paper presents the equations and the detailed procedure needed to apply a weight function in WLHS. WLHS is verified through numerical examples by comparing the estimated distribution parameters with those obtained from other methods such as Random Sampling and Latin Hypercube Sampling. WLHS provides a more flexible way of selecting samples than LHS, and the accuracy of its distribution-parameter estimates depends on the choice of weight function. The proposed WLHS is applied to seismically isolated structures in nuclear power plants; in this application, clearance-to-stops (CSs) calculated using the LHS proposed by Huang et al. [1] and the WLHS proposed in this paper are compared to investigate the effect of choosing different sampling techniques.
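
The exact weight-function equations are given in the paper; the sketch below only illustrates the general idea stated in the abstract, namely Latin Hypercube Sampling with probability intervals of unequal area controlled by a user-chosen weight function. The function name, the one-dimensional setting and the example weight are illustrative assumptions.

```python
# Illustrative one-dimensional sketch of the WLHS idea: the probability
# axis [0, 1] is split into intervals whose areas are proportional to a
# weight function (unequal areas), one point is drawn per interval, and
# the points are mapped through the target inverse CDF.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def weighted_lhs(n, weight, ppf=norm.ppf):
    w = np.array([weight(i) for i in range(n)], dtype=float)
    edges = np.concatenate(([0.0], np.cumsum(w) / w.sum()))  # unequal-area bins
    u = edges[:-1] + rng.random(n) * np.diff(edges)          # one point per bin
    return ppf(rng.permutation(u))  # in higher dimensions each column is permuted

# A linearly increasing weight gives progressively wider probability
# intervals toward the upper tail of the target distribution.
print(weighted_lhs(10, weight=lambda i: i + 1))
```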

Candidate Points and Representative Cross-Validation Approach for Sequential Sampling (후보점과 대표점 교차검증에 의한 순차적 실험계획)

  • Kim, Seung-Won;Jung, Jae-Jun;Lee, Tae-Hee
    • Transactions of the Korean Society of Mechanical Engineers A / v.31 no.1 s.256 / pp.55-61 / 2007
  • Recently, simulation models have become essential tools for the analysis and design of systems, but they are often expensive and time consuming as they become more complicated in order to achieve reliable results. Therefore, a high-fidelity simulation model often needs to be replaced by an approximate model, the so-called metamodel. Metamodeling comprises three components: sampling, the metamodel, and validation. A cross-validation approach has been proposed to provide new sample points sequentially based on the cross-validation error, but it is very expensive because the cross-validation must be evaluated at each stage. To make cross-validation-based sequential sampling more efficient, a sequential sampling method using candidate points and representative cross-validation is proposed in this paper. The candidate-point and representative cross-validation approach to sequential sampling is illustrated for a two-dimensional domain. To verify the performance of the suggested sampling technique, we compare the accuracy of the metamodels for various mathematical functions with that obtained by conventional sequential sampling strategies such as maximum distance, mean squared error, and maximum entropy sequential sampling. Through this research we learn that the proposed approach is computationally inexpensive and provides good prediction performance.
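
The paper's specific "representative cross-validation" criterion is not reproduced here. The sketch below shows one plausible, generic form of candidate-point sequential sampling driven by leave-one-out cross-validation of a metamodel: the next sample is picked from a candidate set, biased toward the region where the current metamodel predicts worst while staying away from existing samples. The radial basis metamodel and the selection score are assumptions made for illustration.

```python
# Generic sketch of cross-validation-driven sequential sampling with a
# candidate set (not the paper's exact procedure).
import numpy as np
from scipy.interpolate import RBFInterpolator

def next_sample(X, y, candidates):
    # leave-one-out cross-validation error of an RBF metamodel at each sample
    loo_err = np.empty(len(X))
    for i in range(len(X)):
        mask = np.arange(len(X)) != i
        model = RBFInterpolator(X[mask], y[mask])
        loo_err[i] = abs(model(X[i:i + 1])[0] - y[i])
    worst = X[np.argmax(loo_err)]  # region where the metamodel is least reliable
    d_worst = np.linalg.norm(candidates - worst, axis=1)
    d_near = np.min(np.linalg.norm(candidates[:, None] - X[None], axis=2), axis=1)
    # score: far from existing samples, close to the worst-predicted region
    return candidates[np.argmax(d_near - d_worst)]

rng = np.random.default_rng(2)
f = lambda P: np.sin(3 * P[:, 0]) + np.cos(3 * P[:, 1])  # toy 2-D test function
X = rng.random((8, 2)); y = f(X)
print(next_sample(X, y, rng.random((200, 2))))
```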

Storage of laboratory animal blood samples causes hemorheological alterations: Inter-species differences and the effects of duration and temperature

  • Nemeth, Norbert;Baskurt, Oguz K.;Meiselman, Herbert J.;Kiss, Ferenc;Uyuklu, Mehmet;Hever, Timea;Sajtos, Erika;Kenyeres, Peter;Toth, Kalman;Furka, Istvan;Miko, Iren
    • Korea-Australia Rheology Journal / v.21 no.2 / pp.127-133 / 2009
  • Hemorheological results may be influenced by the time between blood sampling and measurement, and storage conditions (e.g., temperature, time) during sample delivery between laboratories may further affect the resulting data. This study examined possible hemorheological alterations subsequent to storage of rat and dog blood at room temperature (22 °C) or with cooling (4~10 °C) for 2, 4, 6, 24, 48 and 72 hours. Measured hemorheological parameters included hematological indices, RBC aggregation and RBC deformability. Our results indicate that marked changes of RBC deformability and of RBC aggregation in whole blood can occur during storage, especially for samples stored at room temperature. The patterns of deformability and aggregation changes at room temperature are complex and species specific, whereas those for storage at the lower temperature range are much less complicated. For room temperature storage, it thus seems logical to suggest measuring rat and dog cell deformability within 6 hours; aggregation should be measured immediately for rat blood or within 6 hours for dog blood. Storage at lower temperatures allows measuring EI up to 72 hours after sampling, while aggregation must be measured immediately or, if a constant decrease is acceptable, over 24~72 hours.

On the Effectiveness of Centering, Interpolation and Extrapolation in Estimating the Mean of a Population with Linear Trend

  • Kim, Hyuk-Joo;Jung, Sun-Ju
    • Journal of the Korean Data and Information Science Society / v.13 no.2 / pp.365-379 / 2002
  • We apply the techniques of interpolation and extrapolation to derive a new estimator based on centered modified systematic sampling for the mean of a population which has a linear trend. The efficiency of the proposed estimation method is compared with that of various existing methods. An illustrative numerical example is given.
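
The estimator proposed in the paper adds interpolation and extrapolation corrections to centered modified systematic sampling; those corrections are not reproduced here. The short simulation below, for an assumed population with a purely linear trend, only illustrates the underlying point that centering the systematic sample start reduces the trend-induced error of the sample mean.

```python
# Illustrative comparison of ordinary vs. centered systematic sampling for
# a hypothetical linear-trend population (the paper's interpolation and
# extrapolation corrections are not included).
import numpy as np

N, k = 100, 10                          # population size and sampling interval
Y = 2.0 + 0.5 * np.arange(1, N + 1)     # assumed linear-trend population
true_mean = Y.mean()

# Ordinary systematic sampling: error for each of the k possible random starts.
errors = [abs(Y[start::k].mean() - true_mean) for start in range(k)]
print("ordinary systematic, worst |error|:", max(errors))

# Centered systematic sampling: start at the middle of the first interval.
center_start = (k - 1) // 2
print("centered systematic, |error|:", abs(Y[center_start::k].mean() - true_mean))
```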

Investigation and Assessment of Ground Contamination around Waste Landfill (폐기물매립지 주변 토양 및 지하수 오염도 조사 및 분석)

  • 정하익;김상근
    • Proceedings of the Korean Society of Soil and Groundwater Environment Conference / 2000.05a / pp.116-119 / 2000
  • There has been a steady increase in geoenvironmental engineering projects in which geotechnical engineering is combined with environmental concerns. Many of these projects involve investigation of contaminant and leachate plumes in the ground and in landfills. In this study, an investigation and assessment of soil and groundwater around a waste landfill was carried out using techniques such as drilling and sampling. As a result, the contaminant concentrations in the soil and groundwater were determined and analysed.

Training Data Sets Construction from Large Data Set for PCB Character Recognition

  • NDAYISHIMIYE, Fabrice;Gang, Sumyung;Lee, Joon Jae
    • Journal of Multimedia Information System / v.6 no.4 / pp.225-234 / 2019
  • Deep learning has become increasingly popular in both academia and industry. Various domains, including pattern recognition and computer vision, have witnessed the great power of deep neural networks. However, current studies on deep learning mainly focus on quality data sets with balanced class labels, while training on poor and imbalanced data sets poses great challenges for classification tasks. In this paper we propose a data-analysis-based data reduction technique for selecting good and diverse samples from a large data set for a deep learning model. Data sampling techniques can reduce the large size of the raw data by retrieving its useful knowledge through representatives, so instead of dealing with the full raw data we can sample it without losing important information. We group PCB characters into classes and train deep learning models based on ResNet56 v2 and SENet in order to improve the classification performance of an optical character recognition (OCR) character classifier.
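
The abstract describes data-analysis-based reduction for selecting good, diverse samples per class; the paper's specific criteria and the ResNet56 v2 / SENet training step are not shown here. The following is a generic, hypothetical sketch of one common realisation of that idea: class-wise k-means clustering with the sample nearest each cluster centre kept as a representative.

```python
# Hypothetical sketch of class-wise representative sampling: cluster each
# class's feature vectors and keep the sample nearest each cluster centre,
# producing a smaller, diverse, roughly class-balanced training set.
import numpy as np
from sklearn.cluster import KMeans

def reduce_per_class(features, labels, per_class=100, seed=0):
    keep = []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        k = min(per_class, len(idx))
        km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(features[idx])
        for centre in km.cluster_centers_:
            keep.append(idx[np.argmin(np.linalg.norm(features[idx] - centre, axis=1))])
    return np.unique(keep)

# Usage sketch: selected = reduce_per_class(X, y, per_class=200)
#               then train the OCR classifier on X[selected], y[selected].
```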

Numerical Quadrature Techniques for Inverse Fourier Transform in Two-Dimensional Resistivity Modeling (2차원 전기비저항 모델링에서 후리에역변환의 수치구적법)

  • Kim, Hee Joon
    • Economic and Environmental Geology / v.25 no.1 / pp.73-77 / 1992
  • This paper compares numerical quadrature techniques for computing the inverse Fourier transform integral in two-dimensional resistivity modeling. Quadrature techniques using exponential and cubic spline interpolation are examined for the case of a homogeneous earth model. In both methods the integral over the interval from 0 to λ_min, where λ_min is the minimum sampled spatial wavenumber, is calculated by approximating the Fourier-transformed potentials by a logarithmic function. This scheme greatly reduces the inverse Fourier transform error associated with the logarithmic singularity at λ = 0. Numerical results show that, if the sampling intervals are adequate, the cubic spline interpolation method is more accurate than the exponential interpolation method.
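
For a homogeneous half-space the Fourier-transformed potential is proportional to K0(λr), and the exact inverse-transform integral over λ equals π/(2r). The sketch below, with wavenumber samples chosen ad hoc rather than taken from the paper, follows the scheme the abstract describes: the segment [0, λ_min] is integrated analytically after fitting a logarithmic function, and the sampled part is integrated with a cubic spline.

```python
# Sketch of the quadrature scheme described in the abstract for a
# homogeneous earth model; sample positions are illustrative only.
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.special import k0

r = 1.0
lam = np.logspace(-3, 1.5, 25)   # sampled spatial wavenumbers (ad hoc choice)
pot = k0(lam * r)                # Fourier-transformed potential samples

# [0, lambda_min]: fit pot ~ a + b*ln(lambda) to the two smallest samples and
# integrate analytically: a*L + b*L*(ln L - 1), with L = lambda_min.
b = (pot[1] - pot[0]) / np.log(lam[1] / lam[0])
a = pot[0] - b * np.log(lam[0])
L = lam[0]
head = a * L + b * L * (np.log(L) - 1.0)

tail = CubicSpline(lam, pot).integrate(lam[0], lam[-1])  # spline quadrature

print("numerical:", head + tail, "  exact:", np.pi / (2.0 * r))
```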

Sampling-based Block Erase Table in Wear Leveling Technique for Flash Memory

  • Kim, Seon Hwan;Kwak, Jong Wook
    • Journal of the Korea Society of Computer and Information / v.22 no.5 / pp.1-9 / 2017
  • Recently, flash memory has been in great demand as a storage device in embedded systems. However, the number of program/erase (P/E) cycles per block of flash memory is limited. To cope with this limit, many wear-leveling techniques have been studied; they prolong the lifetime of flash memory using information tables. One such technique, the block erase table (BET) method, uses a bit array table and was studied for embedded devices. However, it has the disadvantage that wear-leveling performance drops sharply when its memory consumption is reduced. To solve this problem, we propose a novel wear-leveling technique using a Sampling-based Block Erase Table (SBET). SBET relates one bit of the bit array table to each block by using an exclusive OR operation with a round-robin function. Accordingly, SBET improves the accuracy of cold-block information and prevents the degradation of wear-leveling performance. In our experiments, SBET prolongs the lifetime of flash memory by up to 88% compared with previous techniques that use a bit array table.
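
The abstract only states that SBET maps each block to one bit of a small bit array by combining an exclusive OR operation with a round-robin value. The sketch below is a loose, hypothetical rendering of that mapping idea; the table size, the round-robin schedule and the surrounding wear-leveling policy are assumptions, not the paper's parameters.

```python
# Loose sketch of a sampling-based erase table: a bit array much smaller
# than the number of blocks records recently erased blocks, and the
# block-to-bit mapping is rotated with an XOR'd round-robin value so that
# the sampled view of "cold" blocks changes over time.
NUM_BLOCKS = 1024      # hypothetical flash geometry
TABLE_BITS = 128       # bit array much smaller than the block count

class SampledEraseTable:
    def __init__(self):
        self.bits = [False] * TABLE_BITS
        self.round = 0                              # round-robin value

    def _index(self, block):
        return (block ^ self.round) % TABLE_BITS    # XOR-based sampling map

    def record_erase(self, block):
        self.bits[self._index(block)] = True

    def looks_cold(self, block):
        # a clear bit suggests no sampled erase has aliased onto this block
        return not self.bits[self._index(block)]

    def next_round(self):
        # rotate the mapping and reset the table for the next sampling round
        self.round = (self.round + 1) % (NUM_BLOCKS // TABLE_BITS)
        self.bits = [False] * TABLE_BITS

# Usage sketch: during garbage collection, prefer victim blocks for which
# looks_cold(block) is True, and call next_round() periodically.
```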