• Title/Abstract/Keywords: Sampling Based Method

Search Result 1,700

A Probabilistic Sampling Method for Efficient Flow-based Analysis

  • Jadidi, Zahra;Muthukkumarasamy, Vallipuram;Sithirasenan, Elankayer;Singh, Kalvinder
    • Journal of Communications and Networks / v.18 no.5 / pp.818-825 / 2016
  • Network management and anomaly detection are challenging in high-speed networks due to the high volume of packets that have to be analysed. Flow-based analysis is a scalable method which reduces the high volume of network traffic by dividing it into flows. As sampling methods are extensively used in flow generators such as NetFlow, the impact of sampling on the performance of flow-based analysis needs to be investigated. Monitoring using sampled traffic is a well-studied research area; however, the impact of sampling on flow-based anomaly detection is poorly researched. This paper investigates flow sampling methods and shows that these methods have a negative impact on flow-based anomaly detection. Therefore, we propose an efficient probabilistic flow sampling method that preserves the flow traffic distribution. The proposed sampling method takes into account two flow features: the destination IP address and the octet count. Destination IP addresses are sampled based on the number of received bytes. Our method provides sampled traffic that retains the traffic features required for both flow-based anomaly detection and monitoring. The proposed sampling method is evaluated using a number of generated flow-based datasets. The results show an improvement in the preservation of malicious flows.
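
As a rough illustration of the octet-weighted sampling idea described above, the following minimal Python sketch keeps a flow with a probability scaled by the byte share of its destination IP. The record fields (dst_ip, octets), the sample_flows helper and the weighting rule are illustrative assumptions, not the paper's exact algorithm.

    # Illustrative sketch only: field names and the weighting rule are assumptions.
    import random
    from collections import defaultdict

    def sample_flows(flows, sample_fraction=0.1, seed=0):
        """Keep a flow with a probability scaled by the share of bytes (octets)
        its destination IP received, so byte-heavy destinations stay represented."""
        rng = random.Random(seed)
        bytes_per_dst = defaultdict(int)
        for f in flows:
            bytes_per_dst[f["dst_ip"]] += f["octets"]
        total_bytes = sum(bytes_per_dst.values()) or 1

        sampled = []
        for f in flows:
            share = bytes_per_dst[f["dst_ip"]] / total_bytes          # in (0, 1]
            p_keep = min(1.0, sample_fraction * share * len(bytes_per_dst))
            if rng.random() < p_keep:
                sampled.append(f)
        return sampled

    flows = [{"dst_ip": "10.0.0.1", "octets": 5000},
             {"dst_ip": "10.0.0.2", "octets": 40}]
    print(len(sample_flows(flows, sample_fraction=0.5)))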

A new structural reliability analysis method based on PC-Kriging and adaptive sampling region

  • Yu, Zhenliang;Sun, Zhili;Guo, Fanyi;Cao, Runan;Wang, Jian
    • Structural Engineering and Mechanics / v.82 no.3 / pp.271-282 / 2022
  • The active learning surrogate model based on an adaptive sampling strategy is increasingly popular in reliability analysis. However, most existing sampling strategies adopt a trial-and-error approach to determine the size of the Monte Carlo (MC) candidate sample pool that satisfies the requirement on the coefficient of variation of the failure probability, which reduces the computational efficiency of the reliability analysis. To avoid this defect, a new method for determining the optimal size of the MC candidate sample pool is proposed, together with a new structural reliability analysis method combining the polynomial chaos-based Kriging model (PC-Kriging) with an adaptive sampling region (PCK-ASR). Firstly, based on the lower limit of the confidence interval, a new method for estimating the optimal size of the MC candidate sample pool is proposed. Secondly, based on the upper limit of the confidence interval, an adaptive sampling region strategy similar to the radial centralized sampling method is developed. Then, the k-means++ clustering technique and the learning function LIF are used to complete the adaptive design of experiments (DoE). Finally, the effectiveness and accuracy of the PCK-ASR method are verified by three numerical examples and one practical engineering example.
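
For context, the size of an MC candidate pool is usually tied to the target coefficient of variation of the estimated failure probability through cov(Pf_hat) ≈ sqrt((1 - Pf)/(N·Pf)). The sketch below only evaluates this standard relation; it does not reproduce the paper's confidence-interval-based rule.

    # Standard pool-size / coefficient-of-variation relation, not the paper's rule.
    import math

    def required_mc_pool_size(pf_estimate, target_cov=0.05):
        """Smallest N with cov(Pf_hat) = sqrt((1 - Pf) / (N * Pf)) <= target_cov."""
        return math.ceil((1.0 - pf_estimate) / (pf_estimate * target_cov ** 2))

    print(required_mc_pool_size(1e-3))   # roughly 4.0e5 samples for a 5 % cov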

A Comparison of Collection Concentrations Based on Airborne Toluene Diisocyanates Measurement Methods (공기 중 Toluene diisocyanates 측정방법에 따른 포집농도 비교)

  • Park, Hyung-Sung;Won, Jong-Uk;Kim, Chi-Nyon;Roh, Jaehoon
    • Journal of Korean Society of Occupational and Environmental Hygiene / v.23 no.4 / pp.341-347 / 2013
  • Objectives: The aim of this study is to investigate the differences in airborne TDI concentrations between the filter collection method and the liquid collection method, and to compare airborne TDI concentrations when the sampling method is changed while using the filter collection method in the spray-painting process. Methods: For sample measurement, the filter collection method (OSHA #42) and the liquid collection method (NIOSH #5522) were used; for sampling, the full-period single sampling and full-period consecutive sampling methods were used. The samples were collected at spray-painting and drying process locations. Results: In all samples collected from the spray-painting and drying process locations with the filter collection and liquid collection methods, greater amounts of 2,6-TDI than 2,4-TDI were detected. When the TDI collection concentrations were compared across sampling methods, the concentrations of 2,4-TDI and 2,6-TDI collected by the consecutive sampling method were higher than those collected by the single sampling method for both the filter collection method and the liquid collection method used in the spray-painting process, and these differences were statistically significant. Conclusions: When the TDI collection concentrations were compared across sample measurement methods, the concentrations of 2,4-TDI and 2,6-TDI collected by the liquid collection method were higher than those collected by the filter collection method, and the differences were statistically significant. In the drying process, no difference in the collection concentrations of 2,4-TDI and 2,6-TDI was found between the two measurement methods.

Optimal SVM learning method based on adaptive sparse sampling and granularity shift factor

  • Wen, Hui;Jia, Dongshun;Liu, Zhiqiang;Xu, Hang;Hao, Guangtao
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.4 / pp.1110-1127 / 2022
  • To improve the training efficiency and generalization performance of a support vector machine (SVM) on large-scale datasets, an optimal SVM learning method based on adaptive sparse sampling and a granularity shift factor is presented. The proposed method combines sampling optimization with learner optimization. First, an adaptive sparse sampling method based on potential-function density clustering is designed to adaptively obtain sparse samples, which reduces the training sample set while effectively approximating the spatial structure of the original sample set. A granularity shift factor method is then constructed to optimize the SVM decision hyperplane, fully considering the neighborhood information of each granularity region in the sparse sampling set. Experiments on an artificial dataset and three benchmark datasets show that the proposed method achieves relatively high training efficiency while ensuring good generalization performance of the learner. Finally, the effectiveness of the proposed method is verified.
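
A minimal sketch of the general idea, thinning a large training set to cluster representatives before fitting an SVM, is given below. k-means stands in for the paper's potential-function density clustering, and the granularity shift factor is not implemented.

    # k-means is a stand-in for the paper's potential-function density clustering.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    def sparse_sample(X, y, n_representatives=200, seed=0):
        """Thin a large training set down to one point per cluster."""
        km = KMeans(n_clusters=n_representatives, n_init=10, random_state=seed).fit(X)
        idx = np.unique([np.argmin(np.linalg.norm(X - c, axis=1))
                         for c in km.cluster_centers_])
        return X[idx], y[idx]

    rng = np.random.default_rng(0)
    X = rng.normal(size=(5000, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    Xs, ys = sparse_sample(X, y)
    print(SVC(kernel="rbf").fit(Xs, ys).score(X, y))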

Low-discrepancy sampling for structural reliability sensitivity analysis

  • Cao, Zhenggang;Dai, Hongzhe;Wang, Wei
    • Structural Engineering and Mechanics / v.38 no.1 / pp.125-140 / 2011
  • This study presents an innovative method to estimate reliability sensitivity based on low-discrepancy sampling, a new technique for structural reliability analysis. The method has two advantages: first, by developing a general importance sampling procedure for reliability sensitivity analysis, the partial derivative of the failure probability with respect to a distribution parameter can be obtained directly, with typically insignificant additional computation beyond the structural reliability analysis itself; second, by combining various low-discrepancy sequences with this importance sampling procedure, the proposed method is far more efficient than the classical Monte Carlo method in estimating reliability sensitivity, especially for problems with small failure probabilities or problems that require a large number of costly finite element analyses. Examples involving both numerical and structural problems illustrate the application and effectiveness of the developed method and indicate that it provides accurate and computationally efficient estimates of reliability sensitivity.
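
The sketch below illustrates the two ingredients named in the abstract on a toy limit state: a low-discrepancy (scrambled Sobol) sample and a score-function estimate of the derivative of the failure probability with respect to a distribution parameter. It is not the paper's importance sampling procedure, and the limit state is a made-up example.

    # Toy limit state g(x) = 3 - x with X ~ N(mu, sigma); sensitivity via the
    # score-function identity dPf/dmu = E[ I(g(X) <= 0) * (X - mu) / sigma^2 ].
    import numpy as np
    from scipy.stats import norm, qmc

    mu, sigma = 0.0, 1.0
    g = lambda x: 3.0 - x                         # failure when x >= 3

    u = qmc.Sobol(d=1, scramble=True, seed=0).random(2**14).ravel()
    x = norm.ppf(u, loc=mu, scale=sigma)          # map uniforms to the input law
    fail = g(x) <= 0.0

    print("Pf     :", fail.mean())                                  # exact: 1 - Phi(3)
    print("dPf/dmu:", np.mean(fail * (x - mu) / sigma**2))          # exact: phi(3)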

Structural reliability estimation based on quasi ideal importance sampling simulation

  • Yonezawa, Masaaki;Okuda, Shoya;Kobayashi, Hiroaki
    • Structural Engineering and Mechanics / v.32 no.1 / pp.55-69 / 2009
  • A quasi-ideal importance sampling simulation method combined with the conditional expectation is proposed for structural reliability estimation. The quasi-ideal importance sampling joint probability density function (p.d.f.) is constructed, following the ideal importance sampling concept, to be proportional to the conditional failure probability multiplied by the p.d.f. of the sampling variables. The respective marginal p.d.f.s of the ideal importance sampling joint p.d.f. are determined numerically by simulation and partly by piecewise integration. The quasi-ideal importance sampling simulations combined with the conditional expectation are executed to estimate the failure probabilities of structures with multiple failure surfaces, and it is shown that the proposed method gives accurate estimates efficiently.
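
The conditional-expectation step the method builds on can be sketched on a toy problem as follows; plain Monte Carlo is used for the outer expectation instead of the quasi-ideal importance sampling density constructed in the paper, and the limit state and threshold are illustrative choices.

    # Toy problem: failure when X1 + X2 > b with independent standard normals,
    # b assumed to be 4.  X1 is integrated out exactly (conditional expectation).
    import numpy as np
    from scipy.stats import norm

    b = 4.0
    rng = np.random.default_rng(0)
    x2 = rng.standard_normal(100_000)

    # P(failure | X2 = x2) = P(X1 > b - x2) = 1 - Phi(b - x2), exact in X1.
    pf = np.mean(norm.sf(b - x2))
    print(pf)          # exact value: 1 - Phi(b / sqrt(2)) ~ 2.34e-3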

An Estimator of Population Mean Based on Balanced Systematic Sampling When Both the Sample Size and the Reciprocal of the Sampling Fraction are Odd Numbers

  • Kim, Hyuk-Joo
    • Communications for Statistical Applications and Methods / v.14 no.3 / pp.667-677 / 2007
  • In this paper, we propose a method for estimating the mean of a population which has a linear trend, when both n, the sample size, and k, the reciprocal of the sampling fraction, are odd numbers. The proposed method, not having the drawbacks of centered systematic sampling, centered modified sampling and centered balanced sampling, consists of selecting a sample by balanced systematic sampling and estimating the population mean by using interpolation. We compare the efficiency of the proposed method and existing methods under the criterion of the expected mean square error based on the infinite superpopulation model.
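
For reference, the classical balanced systematic sampling scheme (which requires an even sample size, the case the paper moves beyond) can be sketched as follows; the paper's handling of odd n and odd k and its interpolation-based estimator are not reproduced here.

    # Classical balanced systematic sampling: one symmetric pair per block of 2k units.
    import random

    def balanced_systematic_sample(N, n, seed=0):
        k = N // n                                   # reciprocal of the sampling fraction
        assert n % 2 == 0, "the classical scheme needs an even sample size"
        r = random.Random(seed).randint(1, k)        # random start, 1 <= r <= k
        sample = []
        for j in range(n // 2):
            base = 2 * k * j
            sample += [base + r, base + 2 * k - r + 1]
        return sample                                # 1-based unit labels

    print(balanced_systematic_sample(N=40, n=4))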

Reliability-Based Design Optimization Using Kriging Metamodel with Sequential Sampling Technique (순차적 샘플링과 크리깅 메타모델을 이용한 신뢰도 기반 최적설계)

  • Choi, Kyu-Seon;Lee, Gab-Seong;Choi, Dong-Hoon
    • Transactions of the Korean Society of Mechanical Engineers A / v.33 no.12 / pp.1464-1470 / 2009
  • An RBDO approach is proposed that is based on a sampling method with the Kriging metamodel and Constraint Boundary Sampling (CBS), a sequential sampling method for generating metamodels. The major advantage of the proposed RBDO approach is that it does not require the Most Probable failure Point (MPP), which is essential for the First-Order Reliability Method (FORM)-based RBDO approach. Monte Carlo Sampling (MCS), the best-known sampling method for reliability analysis, is used to assess the reliability of the constraints. In addition, the Cumulative Distribution Function (CDF) of each constraint is approximated from the empirical distribution function using the Moving Least Squares (MLS) method, which makes it possible to obtain the probability of failure and its analytic sensitivities from the approximate CDF of the constraints. Moreover, a concept of inactive design is adopted to improve the numerical efficiency of the proposed approach. The computational accuracy and efficiency of the proposed RBDO approach are demonstrated on numerical and engineering problems.
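
Two of the ingredients named in the abstract, a Kriging (Gaussian process) surrogate of a constraint and a Monte Carlo estimate of its failure probability, can be sketched in isolation as below; CBS, the MLS-smoothed CDF and the surrounding RBDO loop are omitted, and the constraint function is a made-up toy.

    # Toy constraint g(x) = x1^2 + x2 - 3 (fails when g > 0), surrogate via GP.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def g(x):
        return x[:, 0] ** 2 + x[:, 1] - 3.0

    rng = np.random.default_rng(0)
    X_doe = rng.uniform(-3.0, 3.0, size=(40, 2))          # initial design of experiments
    kriging = GaussianProcessRegressor(kernel=RBF(length_scale=1.0)).fit(X_doe, g(X_doe))

    X_mcs = rng.standard_normal((100_000, 2))             # MCS at a candidate design point
    pf = np.mean(kriging.predict(X_mcs) > 0.0)
    print("estimated probability of failure:", pf)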

Global sensitivity analysis improvement of rotor-bearing system based on the Genetic Based Latine Hypercube Sampling (GBLHS) method

  • Fatehi, Mohammad Reza;Ghanbarzadeh, Afshin;Moradi, Shapour;Hajnayeb, Ali
    • Structural Engineering and Mechanics / v.68 no.5 / pp.549-561 / 2018
  • The Sobol method is applied as a powerful variance decomposition technique in the field of global sensitivity analysis (GSA). The paper is devoted to increasing the convergence speed of the extracted Sobol indices using a newly proposed sampling technique called genetic-based Latin hypercube sampling (GBLHS). This technique is an improved version of restricted Latin hypercube sampling (LHS), with an optimization algorithm inspired by the genetic algorithm: the minimax value of the LHS arrays is optimized by manipulating the array indices as chromosomes. The improved Sobol method is implemented to perform factor prioritization and fixing for an uncertain, comprehensive high-speed rotor-bearing system. The finite element method is employed for rotor-bearing modeling, considering the Eshleman-Eubanks assumption and the interaction of axial force with the rotor whirling behavior. The performance of the GBLHS technique is compared with Monte Carlo Simulation (MCS), LHS and optimized LHS (minimax criterion). The comparison demonstrates the capability of GBLHS to increase the convergence speed of the sensitivity indices and to improve the computational time of the GSA.
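
The kind of space-filling quantity such a genetic search improves can be illustrated as follows: generate a Latin hypercube design and evaluate its minimum pairwise distance. The minimum pairwise distance (the maximin criterion) is used here only as a simple stand-in for the minimax value optimized in the paper, and the genetic manipulation of array indices is not implemented.

    # Generate one LHS design and report a space-filling measure for it.
    import numpy as np
    from scipy.stats import qmc

    X = qmc.LatinHypercube(d=3, seed=0).random(20)        # 20 points in [0, 1)^3

    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)
    print("min pairwise distance:", dists.min())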

Bootstrap Confidence Intervals for a One Parameter Model using Multinomial Sampling

  • Jeong, Hyeong-Chul;Kim, Dae-Hak
    • Journal of the Korean Data and Information Science Society / v.10 no.2 / pp.465-472 / 1999
  • We consider a bootstrap method for constructing confidence intervals for a one-parameter model using multinomial sampling. The convergence rates of the proposed bootstrap method are calculated for model-based maximum likelihood estimators (MLE) under multinomial sampling. Monte Carlo simulation is used to compare the performance of the bootstrap methods with normal approximations in terms of the average coverage probability criterion.
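
A percentile bootstrap interval under multinomial sampling can be sketched as follows; the one-parameter Hardy-Weinberg trinomial model (p^2, 2p(1-p), (1-p)^2), the cell counts and the 95 % level are illustrative choices, not necessarily those of the paper.

    # Parametric (model-based) bootstrap percentile interval for a one-parameter
    # multinomial model; the Hardy-Weinberg trinomial is used purely as an example.
    import numpy as np

    rng = np.random.default_rng(0)
    counts = np.array([36, 48, 16])                  # observed cell counts, n = 100
    n = counts.sum()

    def mle(c):                                      # closed-form MLE of p
        return (2 * c[0] + c[1]) / (2 * c.sum())

    p_hat = mle(counts)
    probs = np.array([p_hat**2, 2 * p_hat * (1 - p_hat), (1 - p_hat)**2])
    boot = np.array([mle(rng.multinomial(n, probs)) for _ in range(2000)])
    print(p_hat, np.percentile(boot, [2.5, 97.5]))   # percentile bootstrap interval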
