• Title/Summary/Keyword: Sampling-Based Algorithm


The Bayesian Analysis for Software Reliability Models Based on NHPP (비동질적 포아송과정을 사용한 소프트웨어 신뢰 성장모형에 대한 베이지안 신뢰성 분석에 관한 연구)

  • Lee, Sang-Sik;Kim, Hee-Cheul;Kim, Yong-Jae
    • The KIPS Transactions:PartD
    • /
    • v.10D no.5
    • /
    • pp.805-812
    • /
    • 2003
  • This paper presents a stochastic model for the software failure phenomenon based on a nonhomogeneous Poisson process (NHPP) and performs Bayesian inference using prior information. The failure process is analyzed to develop a suitable mean value function for the NHPP, and expressions are given for several performance measures. Parametric inference for the Logarithmic Poisson, Crow, and Rayleigh models is discussed, with Bayesian computation and model selection based on the sum of squared errors. The models are applied to real software failure data; Gibbs sampling and the Metropolis algorithm are used for parameter inference. A numerical example using Musa's T1 data is illustrated.
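As a minimal sketch of the kind of sampler the abstract mentions, the following random-walk Metropolis routine fits an NHPP with the Goel-Okumoto mean value function m(t) = a(1 - e^(-bt)) under flat priors. The mean value function, step sizes, and data are illustrative assumptions, not the paper's actual models or T1 data.

```python
import math
import random

def nhpp_loglik(a, b, times, horizon):
    """Log-likelihood of NHPP failure times under the Goel-Okumoto
    mean value function m(t) = a * (1 - exp(-b t))."""
    if a <= 0 or b <= 0:
        return float("-inf")
    # sum of log-intensities lambda(t_i) = a*b*exp(-b*t_i), minus m(T)
    ll = sum(math.log(a * b) - b * t for t in times)
    return ll - a * (1.0 - math.exp(-b * horizon))

def metropolis(times, horizon, n_iter=4000, seed=1):
    """Random-walk Metropolis over (a, b) with flat priors on (0, inf)."""
    rng = random.Random(seed)
    a, b = float(len(times)), 0.01
    cur = nhpp_loglik(a, b, times, horizon)
    samples = []
    for _ in range(n_iter):
        a_new = a + rng.gauss(0.0, 1.0)
        b_new = b + rng.gauss(0.0, 0.002)
        prop = nhpp_loglik(a_new, b_new, times, horizon)
        # accept with probability min(1, exp(prop - cur))
        if math.log(rng.random()) < prop - cur:
            a, b, cur = a_new, b_new, prop
        samples.append((a, b))
    return samples
```

Posterior means taken over the second half of the chain give point estimates of (a, b).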

Sparsity Adaptive Expectation Maximization Algorithm for Estimating Channels in MIMO Cooperation systems

  • Zhang, Aihua;Yang, Shouyi;Li, Jianjun;Li, Chunlei;Liu, Zhoufeng
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.8
    • /
    • pp.3498-3511
    • /
    • 2016
  • We investigate the channel state information (CSI) in multi-input multi-output (MIMO) cooperative networks that employ the amplify-and-forward transmission scheme. Least squares and expectation conditional maximization have been proposed for such systems, but neither approach takes advantage of channel sparsity, which causes a loss in estimation performance. Unlike linear channel estimation methods, several compressed channel estimation methods are proposed in this study to exploit the sparsity of MIMO cooperative channels based on the theory of compressed sensing. First, the channel estimation problem is formulated as a compressed sensing problem using sparse decomposition theory. Second, a lower bound is derived for the estimation, and the MIMO relay channel is reconstructed via the compressive sampling matching pursuit algorithm. Finally, based on this model, we propose a novel algorithm called sparsity adaptive expectation maximization (SAEM), which combines a Kalman filter with the expectation maximization algorithm so that it can both exploit channel sparsity and track the true support set of the time-varying channel. The Kalman filter provides soft information about the transmitted signals to the EM-based algorithm. Various numerical simulation results indicate that the proposed sparse channel estimation technique outperforms the previous estimation schemes.
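A bare-bones sketch of the compressive sampling matching pursuit (CoSaMP) step the abstract relies on, using a synthetic Gaussian measurement matrix rather than the paper's MIMO relay channel model. The dimensions and sparsity level are illustrative assumptions.

```python
import numpy as np

def cosamp(Phi, y, k, n_iter=20):
    """Compressive sampling matching pursuit: recover a k-sparse x
    from measurements y = Phi @ x (noiseless sketch)."""
    m, n = Phi.shape
    x = np.zeros(n)
    for _ in range(n_iter):
        residual = y - Phi @ x
        # signal proxy: correlate the residual with the columns of Phi
        proxy = np.abs(Phi.T @ residual)
        omega = np.argsort(proxy)[-2 * k:]
        support = np.union1d(omega, np.flatnonzero(x)).astype(int)
        # least squares restricted to the merged support
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        b = np.zeros(n)
        b[support] = coef
        # prune: keep only the k largest entries
        x = np.zeros(n)
        keep = np.argsort(np.abs(b))[-k:]
        x[keep] = b[keep]
        if np.linalg.norm(y - Phi @ x) < 1e-10:
            break
    return x
```

With enough measurements relative to the sparsity level, the support is identified exactly and the restricted least-squares solve recovers the coefficients.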

A Multiple Branching Algorithm of Contour Triangulation by Cascading Double Branching Method (이중분기 확장을 통한 등치선 삼각화의 다중분기 알고리즘)

  • Choi, Young-Kyu
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.27 no.2
    • /
    • pp.123-134
    • /
    • 2000
  • This paper addresses a new triangulation method for constructing a surface model from a set of wire-frame contours. The most important problem in contour triangulation is the branching problem, and we provide a new solution for the double branching problem, which occurs frequently in real data. The multiple branching problem is treated as a set of double branchings, and an algorithm based on contour merging is developed. Our double branching algorithm is based on partitioning the root contour with Toussaint's polygon triangulation algorithm [14], and it produces quite natural surface models even when the branch contours have very complicated shapes. We treat the multiple branching problem as a problem of coarse section sampling in the z-direction and provide a new multiple branching algorithm that iteratively merges pairs of branch contours using imaginary interpolating contours. Our method is a natural and systematic solution for the general branching problem of contour triangulation. The results show that our method works well even when the object contains many complicated branches.
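The non-branching core of contour triangulation, stitching a band of triangles between two adjacent contours, can be sketched with the classic shortest-diagonal heuristic. This is a simplified illustration on open polylines, not the paper's branching algorithm.

```python
import math

def stitch(c1, c2):
    """Greedily triangulate the band between two open contour polylines
    (lists of (x, y, z) points), always advancing along the shorter diagonal."""
    i = j = 0
    triangles = []
    while i < len(c1) - 1 or j < len(c2) - 1:
        adv1 = i < len(c1) - 1
        adv2 = j < len(c2) - 1
        if adv1 and adv2:
            # pick whichever advance creates the shorter new diagonal
            if math.dist(c1[i + 1], c2[j]) <= math.dist(c1[i], c2[j + 1]):
                adv2 = False
            else:
                adv1 = False
        if adv1:
            triangles.append((c1[i], c1[i + 1], c2[j]))
            i += 1
        else:
            triangles.append((c1[i], c2[j], c2[j + 1]))
            j += 1
    return triangles
```

For open polylines of m and n points the band always contains (m - 1) + (n - 1) triangles, regardless of which diagonals are chosen.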


Lamb wave-based damage imaging method for damage detection of rectangular composite plates

  • Qiao, Pizhong;Fan, Wei
    • Structural Monitoring and Maintenance
    • /
    • v.1 no.4
    • /
    • pp.411-425
    • /
    • 2014
  • A relatively low frequency Lamb wave-based damage identification method, called the damage imaging method, for rectangular composite plates is presented. A damage index (DI) is generated from the delay matrix of the Lamb wave response signals and is used to indicate the location and approximate area of the damage. The viability of this method is demonstrated by analyzing numerical and experimental Lamb wave response signals from rectangular composite plates. The technique requires only the response signals from the plate after damage, and it is capable of performing near real-time damage identification. This study sheds some light on the application of Lamb wave-based damage detection algorithms to plate-type structures by using a relatively low frequency Lamb wave response (e.g., in the neighborhood of 100 kHz, which is better suited to the capability of the existing fiber optic sensor interrogator system with its sampling frequency of 500 kHz) and a reference-free damage detection technique.
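One plausible ingredient of the delay matrix the abstract describes is a per-sensor-pair delay estimate; a common way to obtain it is the peak of the cross-correlation between two response signals. This is an assumed illustration, not the paper's actual DI construction.

```python
import numpy as np

def estimate_delay(reference, signal, fs):
    """Estimate the arrival delay of `signal` relative to `reference`,
    in seconds, from the peak of their cross-correlation."""
    corr = np.correlate(signal, reference, mode="full")
    lag = int(np.argmax(corr)) - (len(reference) - 1)
    return lag / fs
```

Repeating this for every sensor pair fills a delay matrix from which a damage index can be mapped over the plate.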

Clustering Algorithm by Grid-based Sampling

  • Park, Hee-Chang;Ryu, Jee-Hyun
    • Korean Data and Information Science Society: Conference Proceedings
    • /
    • 2003.05a
    • /
    • pp.97-108
    • /
    • 2003
  • Cluster analysis has been widely used in many applications, such as pattern analysis and recognition, data analysis, image processing, and on-line and off-line market research. Clustering can identify dense and sparse regions among data attributes or object attributes. However, obtaining the desired clusters can take many hours, because clustering is exploratory in nature and is often applied to very large data sets. In this paper we propose a new clustering method that uses grid-based sampling. It is faster than traditional clustering methods while maintaining their accuracy, reducing running time by clustering a grid-based sample. Other clustering applications can also be made more effective by combining this method with their original algorithms.
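The grid-based sampling idea can be sketched as follows: bin the points into grid cells, then keep one representative per sufficiently populated cell and cluster only the representatives. The centroid-per-cell choice and the parameters are assumptions for illustration.

```python
from collections import defaultdict

def grid_sample(points, cell_size, min_count=1):
    """Partition 2-D points into square grid cells and return one
    representative (the cell centroid) per sufficiently populated cell."""
    cells = defaultdict(list)
    for x, y in points:
        cells[(int(x // cell_size), int(y // cell_size))].append((x, y))
    reps = []
    for members in cells.values():
        if len(members) >= min_count:
            xs = [p[0] for p in members]
            ys = [p[1] for p in members]
            reps.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return reps
```

Any standard clusterer can then be run on the (much smaller) representative set; raising `min_count` also discards sparse cells as noise.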


Scene-Based Video Watermarking Using Temporal Spread Spectrum in Compressed Domain (압축 영역에서 시간축 확산 스펙트럼을 이용한 장면단위의 비디오 워터마킹)

  • 최윤희;강경표;최태선
    • Proceedings of the IEEK Conference
    • /
    • 2002.06d
    • /
    • pp.93-96
    • /
    • 2002
  • This paper presents a robust and efficient scene-based video watermarking method using the visual rhythm (spatio-temporal slice) in the compressed domain. Scene changes can be detected easily using the visual rhythm, and video sequences are conveniently edited at scene boundaries; a scene-based watermark embedding process is therefore a natural choice. Temporal spread spectrum is achieved by applying spread spectrum methods to the visual rhythm. Additive Gaussian noise, low-pass filtering, median filtering, and histogram equalization attacks are simulated for all frames, and frame sub-sampling is also simulated as a typical video attack. Simulation results show that the proposed algorithm is robust and efficient in the presence of such attacks.
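The temporal spread spectrum principle can be sketched minimally: one watermark bit is spread over a sequence of per-frame values with a pseudo-noise chip sequence and detected by correlation. The additive embedding, non-blind detector, and parameters are illustrative assumptions, not the paper's compressed-domain scheme.

```python
import random

def embed(frame_vals, bit, alpha=1.0, seed=7):
    """Spread one watermark bit (+1 or -1) over per-frame values using a
    pseudo-noise chip sequence, embedded additively with strength alpha."""
    rng = random.Random(seed)
    pn = [rng.choice((-1.0, 1.0)) for _ in frame_vals]
    marked = [v + alpha * bit * c for v, c in zip(frame_vals, pn)]
    return marked, pn

def detect(marked, original, pn):
    """Non-blind detection: correlate the embedding residual with the chips."""
    corr = sum((m - o) * c for m, o, c in zip(marked, original, pn))
    return 1 if corr >= 0 else -1
```

Because the bit is spread across many frames, the correlation sum survives per-frame distortions that would destroy a single-frame mark.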


A Robust Optimization Using the Statistics Based on Kriging Metamodel

  • Lee Kwon-Hee;Kang Dong-Heon
    • Journal of Mechanical Science and Technology
    • /
    • v.20 no.8
    • /
    • pp.1169-1182
    • /
    • 2006
  • Robust design technology has been applied to versatile engineering problems to ensure consistency in product performance. Since the 1980s, the concept of robust design has been introduced to the numerical optimization field, where it is called robust optimization. Robustness in robust optimization is determined by a measure of insensitivity to variation in a response. However, there are significant difficulties associated with calculating these variations, represented by the mean and variance. To overcome this limitation, this research presents an implementation of the approximate statistical moment method based on a kriging metamodel. Two sampling methods are utilized simultaneously to obtain a sequential surrogate model of the response. Statistics such as the mean and variance are obtained from the reliable kriging model and the second-order statistical approximation method. Then the simulated annealing algorithm, a global optimization method, is adopted to find the global robust optimum. A mathematical problem and the two-bar design problem are investigated to show the validity of the proposed method.
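The second-order statistical approximation mentioned above amounts to a Taylor expansion of the response about the design mean. A sketch for a one-dimensional response, using finite differences in place of the paper's kriging surrogate derivatives:

```python
def moment_approx(f, mu, sigma, h=1e-4):
    """Second-order Taylor approximation of the mean, and first-order
    approximation of the variance, of f(X) for X with mean mu, std sigma:
        E[f] ~ f(mu) + 0.5 * f''(mu) * sigma^2
        Var[f] ~ f'(mu)^2 * sigma^2"""
    f0 = f(mu)
    d1 = (f(mu + h) - f(mu - h)) / (2.0 * h)        # central first derivative
    d2 = (f(mu + h) - 2.0 * f0 + f(mu - h)) / h**2  # central second derivative
    mean = f0 + 0.5 * d2 * sigma**2
    var = d1**2 * sigma**2
    return mean, var
```

For f(x) = x^2 with mu = 2 and sigma = 0.1 this gives a mean of about 4.01 and a variance of about 0.16, matching the analytic expansion.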

An Algorithm for Optimized Accuracy Calculation of Hull Block Assembly (선박 블록 조립 후 최적 정도 계산을 위한 알고리즘 연구)

  • Noh, Jac-Kyou
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.19 no.5
    • /
    • pp.552-560
    • /
    • 2013
  • In this paper, an optimization algorithm for block assembly accuracy control assessment is proposed, taking into account the current block assembly process and accuracy control procedure used at the shipbuilding site. The objective function of the proposed algorithm is the root mean square error of the distances between design and measured data of the control points, taken with respect to a specific point among the whole set of control points. The control points are divided into two groups, points on the control line and the remaining points, and the grouped data are used as criteria for determining the combination of six degrees of freedom in the registration process when constituting constraints and calculating the objective function. The optimization algorithm combines a sampling method with a point-to-point modified ICP algorithm that includes an allowable-error check, which ensures that the error between each design and measured point stays under the allowable error. The proposed algorithm was applied to the design and measured data of two blocks, verified and validated by an expert at the shipbuilding site. The results imply that choosing the whole set of control points as targets for the accuracy calculation gives better results than using only the control points on the control line, and that the best optimized result is obtained when a fixed point on the control line is used as the reference point of the registration.
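The RMS-error objective described above can be sketched directly: for a chosen reference control point, compare each point's distance to that reference in the design data against the same distance in the measured data. This is a simplified reading of the objective, without the registration and grouping machinery.

```python
import math

def accuracy_rmse(design, measured, ref_idx=0):
    """RMS error of control-point distances: for each point, compare its
    distance to a chosen reference point in the design data with the
    corresponding distance in the measured data."""
    d_ref = design[ref_idx]
    m_ref = measured[ref_idx]
    sq = []
    for d, m in zip(design, measured):
        sq.append((math.dist(d, d_ref) - math.dist(m, m_ref)) ** 2)
    return math.sqrt(sum(sq) / len(sq))
```

Because the objective compares relative distances, a pure translation of the measured block scores zero; genuine assembly deviation shows up as a positive RMSE.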

Large Scale Protein Side-chain Packing Based on Maximum Edge-weight Clique Finding Algorithm

  • K.C., Dukka Bahadur;Brown, J.B.;Tomita, Etsuji;Suzuki, Jun'ichi;Akutsu, Tatsuya
    • Proceedings of the Korean Society for Bioinformatics Conference
    • /
    • 2005.09a
    • /
    • pp.228-233
    • /
    • 2005
  • The protein side-chain packing problem (SCPP) is known to be NP-complete. Various graph-theoretic side-chain packing algorithms have been proposed. However, as the size of the protein becomes larger, the sampling space increases exponentially; hence, one approach to coping with the time complexity is to decompose the graph of the protein into smaller subgraphs. Some existing approaches decompose the graph into biconnected components at an articulation point (resulting in at most 21-residue subgraphs) or solve the SCPP by tree decomposition (4- to 5-residue subgraphs). In this regard, we previously presented a deterministic approach called SPWCQ based on the notion of the maximum edge-weight clique, in which we reduce the SCPP to a graph and then obtain the maximum edge-weight clique of that graph. This algorithm performs well for proteins of fewer than 500 residues but fails to produce a feasible solution for larger proteins because of the size of the search space. In this paper, we present a new heuristic approach to the side-chain packing problem based on the maximum edge-weight clique finding algorithm that enables us to compute the side-chain packing of much larger proteins. Our new approach can compute the side-chain packing of a protein of 874 residues with an RMSD of 1.423 Å.
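To make the underlying combinatorial object concrete, here is an exhaustive maximum edge-weight clique search for small graphs. It is an illustrative baseline only; the problem is NP-hard, which is exactly why the paper needs specialized algorithms for large proteins.

```python
from itertools import combinations

def max_edge_weight_clique(nodes, weights):
    """Exhaustive search for the clique maximizing total edge weight.
    `weights` maps frozenset({u, v}) -> weight; absent pairs are non-edges.
    Exponential in len(nodes): only usable on tiny illustrative graphs."""
    best, best_w = set(), float("-inf")
    for r in range(2, len(nodes) + 1):
        for sub in combinations(nodes, r):
            pairs = list(combinations(sub, 2))
            # a clique requires every pair to be an edge
            if all(frozenset(p) in weights for p in pairs):
                total = sum(weights[frozenset(p)] for p in pairs)
                if total > best_w:
                    best, best_w = set(sub), total
    return best, best_w
```

Note that the optimum maximizes summed edge weight, not clique size: a single heavy edge can beat a larger but lighter triangle.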


Implementation of Improved Shape Descriptor based on Size Function (Size Function에 기반한 개선된 모양 표기자 구현)

  • 임헌선;안광일;안재형
    • Journal of Korea Multimedia Society
    • /
    • v.4 no.3
    • /
    • pp.215-221
    • /
    • 2001
  • In this paper, we propose an algorithm that applies different weight-sampling values according to the direction of the contour, in order to reduce the errors that can arise when extracting the features of a contoured object. In particular, it is designed to be invariant under rotation, translation, and scaling. The output feature matrix is produced by the size function of the proposed algorithm, and the Euclidean distance between this matrix and that of the original image is calculated. Experimental results show that the proposed algorithm reduces the Euclidean distance between the original image and the changed image by 57% under rotation and by 91% under scaling.
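The final comparison step, the Euclidean distance between two descriptor matrices, is just the Frobenius distance. A minimal sketch, assuming the descriptors are plain nested lists of equal shape:

```python
import math

def descriptor_distance(mat_a, mat_b):
    """Euclidean (Frobenius) distance between two equally shaped
    feature matrices produced by a shape descriptor."""
    total = 0.0
    for row_a, row_b in zip(mat_a, mat_b):
        for a, b in zip(row_a, row_b):
            total += (a - b) ** 2
    return math.sqrt(total)
```

Identical descriptors score zero; a smaller distance between the original and a transformed image indicates a more invariant descriptor, which is the quantity the experiments report.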
