• Title/Abstract/Keyword: probability sampling

Search results: 564 items (processing time: 0.026 s)

혼합 조건부 종추출모형을 이용한 여름철 한국지역 극한기온의 위치별 밀도함수 추정 (Density estimation of summer extreme temperature over South Korea using mixtures of conditional autoregressive species sampling model)

  • 조성일;이재용
    • Journal of the Korean Data and Information Science Society
    • /
    • Vol. 27, No. 5
    • /
    • pp.1155-1168
    • /
    • 2016
  • For meteorological data, it is known that the climate of one region tends to resemble that of neighboring regions, and that the probability density function of each region does not follow a well-known probability model. Taking these characteristics into account, this paper estimates the probability density function of the mean summer extreme temperature, for which abnormal climate phenomena appear most clearly. For this purpose, we use mixtures of the conditional autoregressive species sampling model, a nonparametric Bayesian model that accounts for spatial correlation. The data are the South Korean portion of the global maximum- and minimum-temperature records provided by the University of East Anglia.
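The motivation for a flexible density model can be illustrated with a far simpler nonparametric estimator. The sketch below uses hypothetical temperature values and a plain Gaussian kernel density estimate, not the paper's CAR species sampling mixture; it only shows how a density can be estimated without assuming a parametric family:

```python
import math

def gaussian_kde(data, h):
    """Kernel density estimate f(x) = (1/(n*h)) * sum K((x - xi)/h) with a
    Gaussian kernel K. A much simpler nonparametric estimator than the
    paper's model, with no spatial correlation structure."""
    n = len(data)
    norm = n * h * math.sqrt(2.0 * math.pi)
    def f(x):
        return sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in data) / norm
    return f

temps = [33.1, 34.2, 35.0, 33.8, 36.5]   # hypothetical summer maxima (deg C)
f = gaussian_kde(temps, h=0.8)           # density evaluable at any point
```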

확률비례추출법에 의한 확률화응답기법에 관한 연구 (A Study on the Randomized Response Technique by PPS Sampling)

  • 이기성
    • 응용통계연구
    • /
    • Vol. 19, No. 1
    • /
    • pp.69-80
    • /
    • 2006
  • This study proposes a randomized response technique based on probability proportional to size (PPS) sampling for highly sensitive surveys in which the population consists of clusters of unequal sizes, with selection probabilities assigned in proportion to cluster size. We derive the estimator of the sensitive parameter together with its variance and a variance estimator to establish the theoretical framework, and we compare the efficiency of the PPS-based randomized response technique with that of the technique based on equal-probability two-stage sampling. The practical feasibility of the proposed PPS-based randomized response technique is also examined through a field survey.
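The selection stage described above is easy to sketch. This is a minimal illustration of PPS selection with replacement, using hypothetical cluster sizes; the randomized response stage itself is not reproduced:

```python
import random

def pps_sample(cluster_sizes, n_clusters, seed=0):
    """Select cluster indices with probability proportional to size
    (with replacement), by weighting each cluster by its size."""
    rng = random.Random(seed)
    labels = list(range(len(cluster_sizes)))
    return rng.choices(labels, weights=cluster_sizes, k=n_clusters)

sizes = [120, 40, 300, 90, 450]            # hypothetical cluster sizes
chosen = pps_sample(sizes, n_clusters=3)   # cluster 4 (size 450) is most likely
```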

A novel reliability analysis method based on Gaussian process classification for structures with discontinuous response

  • Zhang, Yibo;Sun, Zhili;Yan, Yutao;Yu, Zhenliang;Wang, Jian
    • Structural Engineering and Mechanics
    • /
    • Vol. 75, No. 6
    • /
    • pp.771-784
    • /
    • 2020
  • Reliability analysis techniques combined with various surrogate models have attracted increasing attention because of their accuracy and great efficiency. However, they primarily focus on structures with continuous response, while research on reliability analysis for structures with discontinuous response is very rare. Furthermore, existing adaptive reliability analysis methods based on importance sampling (IS) still have some intractable defects when dealing with small failure probabilities, and there is no related research on reliability analysis for structures involving both discontinuous response and small failure probability. Therefore, this paper proposes a novel reliability analysis method for such structures, called AGPC-IS, which combines adaptive Gaussian process classification (GPC) with adaptive-kernel-density-estimation-based IS. In AGPC-IS, an efficient adaptive strategy for the design of experiments (DoE), which takes into account the classification uncertainty, the sampling uniformity and the improvement of regional classification accuracy, is developed to improve the accuracy of the Gaussian process classifier. Adaptive kernel density estimation is introduced to construct the quasi-optimal density function of IS. In addition, a novel and more precise stopping criterion is developed from the perspective of the stability of the failure probability estimate. The efficiency, superiority and practicability of AGPC-IS are verified by three examples.
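For readers unfamiliar with IS for small failure probabilities, a generic non-adaptive version is easy to sketch. The block below estimates P(g(X) <= 0) for a standard normal X by sampling from a shifted normal and reweighting by the likelihood ratio; it is a textbook baseline, not the adaptive-KDE scheme of the paper:

```python
import math
import random

def failure_prob_is(g, mu_shift, n=100_000, seed=1):
    """Estimate P(g(X) <= 0) for X ~ N(0, 1) by importance sampling:
    draw from N(mu_shift, 1) and reweight each sample by
    phi(x) / phi(x - mu_shift) = exp(-x^2/2 + (x - mu_shift)^2/2)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(mu_shift, 1.0)
        if g(x) <= 0.0:
            total += math.exp(-0.5 * x * x + 0.5 * (x - mu_shift) ** 2)
    return total / n

# limit state g(x) = 4 - x: failure when x >= 4, true probability ~ 3.2e-5
p = failure_prob_is(lambda x: 4.0 - x, mu_shift=4.0)
```

Shifting the proposal toward the failure region makes failures common in the sample, so the reweighted estimate of a probability around 3e-5 stabilizes with far fewer samples than crude Monte Carlo would need.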

A new structural reliability analysis method based on PC-Kriging and adaptive sampling region

  • Yu, Zhenliang;Sun, Zhili;Guo, Fanyi;Cao, Runan;Wang, Jian
    • Structural Engineering and Mechanics
    • /
    • Vol. 82, No. 3
    • /
    • pp.271-282
    • /
    • 2022
  • The active learning surrogate model based on an adaptive sampling strategy is increasingly popular in reliability analysis. However, most existing sampling strategies use trial and error to determine the size of the Monte Carlo (MC) candidate sample pool that satisfies the requirement on the coefficient of variation of the failure probability, which reduces the computational efficiency of the reliability analysis. To avoid this defect, a new method for determining the optimal size of the MC candidate sample pool is proposed, together with a new structural reliability analysis method, PCK-ASR, that combines the polynomial-chaos-based Kriging model (PC-Kriging) with an adaptive sampling region. Firstly, based on the lower limit of the confidence interval, a new method for estimating the optimal size of the MC candidate sample pool is proposed. Secondly, based on the upper limit of the confidence interval, an adaptive sampling region strategy similar to the radial centralized sampling method is developed. Then, the k-means++ clustering technique and the learning function LIF are used to complete the adaptive design of experiments (DoE). Finally, the effectiveness and accuracy of the PCK-ASR method are verified by three numerical examples and one practical engineering example.
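The coefficient-of-variation requirement mentioned above corresponds to a standard closed-form pool size for crude Monte Carlo. The sketch below gives that background formula only; the paper's contribution is a refined estimate based on the confidence-interval limits:

```python
import math

def mc_pool_size(p_f, cov_target):
    """Crude Monte Carlo sample size N such that the coefficient of
    variation of the failure-probability estimator equals cov_target:
    N = (1 - p_f) / (p_f * cov_target**2)."""
    return math.ceil((1.0 - p_f) / (p_f * cov_target ** 2))

n = mc_pool_size(1e-3, 0.05)   # roughly 4e5 candidate samples for p_f = 1e-3
```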

A Generation-based Text Steganography by Maintaining Consistency of Probability Distribution

  • Yang, Boya;Peng, Wanli;Xue, Yiming;Zhong, Ping
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 15, No. 11
    • /
    • pp.4184-4202
    • /
    • 2021
  • Text steganography combined with natural language generation has become increasingly popular. Existing methods usually embed secret information in the generated words by controlling the sampling during text generation. A candidate pool is constructed by a greedy strategy, and only high-probability words are encoded, which distorts the statistical properties of the texts and seriously compromises the security of the steganography. To reduce the influence of the candidate pool on the statistical imperceptibility of the steganography, we propose a steganography method based on a new sampling strategy. Instead of consisting only of high-probability words, the candidate pool is built from words whose probabilities differ little from that of an actual sample of the language model, thus keeping it consistent with the probability distribution of the language model. Moreover, we encode the candidate words according to their probability similarity to the target word, which further preserves the probability distribution. Experimental results show that the proposed method outperforms state-of-the-art steganographic methods in terms of security performance.
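The pool construction described above can be sketched as follows. The tokens and probabilities are hypothetical stand-ins; a real system would take them from a language model's next-token distribution:

```python
import random

def candidate_pool(probs, pool_bits, rng):
    """Draw an anchor token from the distribution itself, then build a
    2**pool_bits candidate pool of the tokens whose probabilities are
    closest to the anchor's, rather than simply the top-probability tokens."""
    tokens = list(probs)
    anchor = rng.choices(tokens, weights=[probs[t] for t in tokens], k=1)[0]
    ranked = sorted(tokens, key=lambda t: abs(probs[t] - probs[anchor]))
    return ranked[: 2 ** pool_bits], anchor

lm_probs = {"the": 0.30, "a": 0.25, "cat": 0.18,
            "dog": 0.12, "sat": 0.09, "ran": 0.06}   # hypothetical LM output
pool, anchor = candidate_pool(lm_probs, pool_bits=2, rng=random.Random(5))
```

Because the pool is ranked by probability distance from an actual sample, low-probability words can enter it whenever the model itself would plausibly emit one, which is what keeps the stego text's statistics close to the model's.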

A Study of Circular Sampling in Finite Population

  • Hae-Yong Lee
    • Communications for Statistical Applications and Methods
    • /
    • Vol. 3, No. 3
    • /
    • pp.161-168
    • /
    • 1996
  • This paper describes a sampling method, circular sampling, which can be used instead of simple random sampling without replacement (SRSWOR). The method assumes that the sampling units of the population are arranged in a circular format and randomly selects the required number of contiguous units. It therefore gathers information more quickly and easily than SRSWOR, and in certain circumstances its reliability is better than that of SRSWOR. Moreover, if circular sampling were applied to nonprobability sampling methods, the reliability of the sample results could be determined in terms of probability.
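A minimal sketch of the selection step, assuming the frame is already arranged in its circular order:

```python
import random

def circular_sample(population, n, seed=None):
    """Circular sampling: pick a random starting unit and take n contiguous
    units, wrapping around the end of the frame back to the beginning."""
    rng = random.Random(seed)
    start = rng.randrange(len(population))
    return [population[(start + i) % len(population)] for i in range(n)]

units = list(range(10))                 # a frame of 10 units in circular order
s = circular_sample(units, 4, seed=3)   # 4 contiguous units, possibly wrapping
```

Only the starting point is random, which is why contiguous units are cheap to collect in the field.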


On the Estimation of Fraction Defectives

  • Kim, Seong-in
    • 품질경영학회지
    • /
    • Vol. 8, No. 2
    • /
    • pp.3-14
    • /
    • 1980
  • This paper is concerned with the design of an appropriate sampling plan or stopping rule and the construction of estimators for the process or lot fraction defective. Various sampling plans that are well known or have potential applications are unified into a generalized sampling plan, under which the sufficient statistic, probability distribution, moments, and minimum variance unbiased estimate are obtained; results for the various individual plans follow as special cases. Then, under given parameter values, the relative efficiencies of the various sampling plans are compared with respect to expected sample sizes and the variances of the estimates.
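Two familiar special cases of such plans have simple closed-form minimum variance unbiased estimates, sketched below. These are textbook results, not the paper's generalized derivation:

```python
def umvue_fraction_defective(d, n, inverse=False):
    """Minimum variance unbiased estimates of the fraction defective
    for two classical plans:
    - binomial plan (inspect a fixed n items, observe d defectives): d / n
    - inverse (negative binomial) plan (sample until the d-th defective,
      which occurs on item n): (d - 1) / (n - 1)."""
    return (d - 1) / (n - 1) if inverse else d / n

p_fixed = umvue_fraction_defective(3, 50)                  # fixed-n plan
p_inverse = umvue_fraction_defective(3, 50, inverse=True)  # inverse plan
```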


Importance sampling with splitting for portfolio credit risk

  • Kim, Jinyoung;Kim, Sunggon
    • Communications for Statistical Applications and Methods
    • /
    • Vol. 27, No. 3
    • /
    • pp.327-347
    • /
    • 2020
  • We consider a credit portfolio with highly skewed exposures, in which a small number of obligors have very high exposures compared to the others. For the Bernoulli mixture model with highly skewed exposures, we propose a new importance sampling scheme to estimate the tail loss probability over a threshold and the corresponding expected shortfall. We stratify the sample space of the default events into two subsets, one of which consists of the events in which the obligors with heavy exposures default simultaneously; we expect typical tail loss events to belong to this set. In the proposed scheme, the tail loss probability and the expected shortfall corresponding to this type of event are estimated by conditional Monte Carlo, which results in variance reduction. We analyze the properties of the proposed scheme mathematically. In a numerical study, the performance of the proposed scheme is compared with an existing importance sampling method.
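As a point of comparison, the quantity being estimated can be written down with crude Monte Carlo in a few lines. Independent defaults and hypothetical exposures are assumed here; the paper's scheme instead stratifies on the joint default of the heavy-exposure obligors and applies conditional Monte Carlo there:

```python
import random

def tail_loss_mc(exposures, p_default, threshold, n=50_000, seed=7):
    """Crude Monte Carlo estimate of the tail loss probability
    P(sum of defaulted exposures > threshold), with each obligor
    defaulting independently with its own probability."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        loss = sum(e for e, p in zip(exposures, p_default) if rng.random() < p)
        if loss > threshold:
            hits += 1
    return hits / n

# one heavy obligor (100) and two small ones: the tail event is essentially
# "the heavy obligor defaults", so the estimate should be near 0.1
est = tail_loss_mc([100.0, 1.0, 1.0], [0.1, 0.1, 0.1], threshold=50.0)
```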

상수도 관망 데이터의 사용목적에 관한 수집 주기 연구 (Study on the sampling rate for the purpose of use in water distribution network data)

  • 이경환;서정철;차헌주;송교신;최준모
    • 상하수도학회지
    • /
    • Vol. 27, No. 2
    • /
    • pp.233-239
    • /
    • 2013
  • The sampling rate of hydraulic pressure data is an important factor that depends on the intended use of the water distribution system. A shorter sampling interval makes the hydraulic data more useful, but it demands considerable maintenance expense. In this study, the optimal sampling rate is investigated from 2 kHz simulation data of a water distribution system, using statistical techniques such as the Student's t distribution and the non-exceedance probability.
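The comparison statistic can be sketched directly, with hypothetical pressure readings standing in for the simulated record:

```python
def non_exceedance(samples, value):
    """Empirical non-exceedance probability: the fraction of observed
    pressures that do not exceed `value`. This is the kind of statistic
    used to compare down-sampled records against a high-rate reference."""
    return sum(1 for s in samples if s <= value) / len(samples)

pressures = [4.1, 4.3, 4.0, 4.8, 4.2, 4.5]   # hypothetical readings (bar)
p = non_exceedance(pressures, 4.3)           # 4 of 6 readings <= 4.3
```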

IEEE 802-15.4에서 우선순위 IFS를 이용한 확률기반 매체 접근 방법 (The Probability Based Ordered Media Access)

  • 전영호;김정아;박홍성
    • The Korean Institute of Electrical Engineers (KIEE): Conference Proceedings
    • /
    • Proceedings of the KIEE 2006 Symposium, Information and Control Section
    • /
    • pp.321-323
    • /
    • 2006
  • IEEE 802.15.4 uses a CSMA/CA algorithm for media access. The CSMA/CA algorithm performs a random backoff before data are transmitted in order to avoid collisions. The random backoff is an unavoidable delay and has the side effect of additional energy consumption. To cope with these problems, we propose a new media access algorithm, Priority Based Ordered Media Access (PBOMA), which uses different IFSs. The PBOMA algorithm uses the sampling rate and the beacon interval to derive a different access probability (or IFS) for each node: the higher the access probability, the shorter the IFS. Note that urgent data are transmitted immediately using a tone signal. The proposed algorithm is expected to reduce energy consumption and delay.
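The probability-to-IFS relation can be illustrated with a hypothetical linear rule. The slot counts and the linear form are assumptions for illustration only; the paper derives the access probability from the sampling rate and the beacon interval:

```python
def ifs_slots(access_prob, min_slots=2, max_slots=12):
    """Map an access probability in [0, 1] to an inter-frame space length
    in slots: the higher the probability, the shorter the IFS, so the
    higher-priority node senses the channel idle first and wins access."""
    span = max_slots - min_slots
    return min_slots + round((1.0 - access_prob) * span)

high = ifs_slots(1.0)   # shortest IFS: highest-priority node
low = ifs_slots(0.0)    # longest IFS: lowest-priority node
```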
