• Title/Summary/Keyword: Sampling-Based Algorithm


Monte Carlo Estimation of Multivariate Normal Probabilities

  • Oh, Man-Suk; Kim, Seung-Whan
    • Journal of the Korean Statistical Society, v.28 no.4, pp.443-455, 1999
  • A simulation-based approach to estimating the probability of an arbitrary region under a multivariate normal distribution is developed. Specifically, the probability is expressed as the ratio of the unrestricted and the restricted multivariate normal density functions, where the restriction is given by the region whose probability is of interest. The density function of the restricted distribution is then estimated using a sample generated by the Gibbs sampling algorithm. (A sketch of this Gibbs step appears after this entry.)

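The core computational step here is drawing from the restricted (truncated) multivariate normal via Gibbs sampling. Below is a minimal sketch of that step for a standard bivariate normal restricted to a rectangle; the correlation, bounds, and function name are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.stats import truncnorm

def gibbs_truncated_bvn(rho, lo, hi, n_iter=10_000, seed=0):
    """Gibbs sampler for a standard bivariate normal with correlation rho,
    restricted to the rectangle [lo[0], hi[0]] x [lo[1], hi[1]]."""
    rng = np.random.default_rng(seed)
    x = np.array([(lo[0] + hi[0]) / 2, (lo[1] + hi[1]) / 2])  # start inside region
    s = np.sqrt(1 - rho**2)                 # std dev of each full conditional
    out = np.empty((n_iter, 2))
    for t in range(n_iter):
        for i in range(2):
            m = rho * x[1 - i]              # conditional mean given other coordinate
            a, b = (lo[i] - m) / s, (hi[i] - m) / s   # standardized truncation bounds
            x[i] = truncnorm.rvs(a, b, loc=m, scale=s, random_state=rng)
        out[t] = x
    return out

samples = gibbs_truncated_bvn(rho=0.5, lo=[0.0, 0.0], hi=[2.0, 2.0])
```

The paper's estimator would then evaluate the restricted density from such samples and take its ratio against the unrestricted density.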

Hexagon-shape Line Search Algorithm for Fast Motion Estimation on Media Processor (미디어프로세서 상의 고속 움직임 탐색을 위한 Hexagon 모양 라인 탐색 알고리즘)

  • Jung Bong-Soo; Jeon Byeung-Woo
    • Journal of the Institute of Electronics Engineers of Korea SP, v.43 no.4 s.310, pp.55-65, 2006
  • Most fast block motion estimation algorithms reported in the literature aim to reduce computation in terms of the number of search points, and thus do not fit well with multimedia processors because of their irregular data flow. For multimedia processors, proper reuse of data matters more than reducing the number of absolute-difference operations, because execution-cycle performance depends strongly on the number of off-chip memory accesses. Therefore, in this paper we propose a Hexagon-shape line search (HEXSLS) algorithm that uses a line search pattern to increase data reuse from the on-chip local buffer, and checks sub-sampled points within the line search pattern to avoid unnecessary SAD operations. Our experimental results show that the prediction error (MAE) of the proposed HEXSLS is similar to that of the full search block matching algorithm (FSBMA), and that HEXSLS outperforms the hexagon-based search (HEXBS). The proposed HEXSLS also requires far fewer off-chip memory accesses than conventional fast motion estimation algorithms such as the hexagon-based search (HEXBS) and the predictive line search (PLS). As a result, the proposed HEXSLS algorithm requires fewer execution cycles on a media processor.
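
The sub-sampled SAD check mentioned in the abstract is easy to illustrate. The sketch below is a generic rendition, not HEXSLS itself: the function names and the 2:1 sub-sampling factor are assumptions. It shows SAD evaluated on every second pixel, plus a row-wise candidate sweep of the kind that keeps the same reference lines in an on-chip buffer.

```python
import numpy as np

def sad_subsampled(cur, ref, mv, step=2):
    """SAD on a sub-sampled pixel grid: evaluating every `step`-th pixel
    cuts the absolute-difference work roughly by step**2."""
    h, w = cur.shape
    y, x = mv
    cand = ref[y:y + h, x:x + w].astype(np.int32)
    return np.abs(cur[::step, ::step].astype(np.int32)
                  - cand[::step, ::step]).sum()

def line_search(cur, ref, y, x0, span=8):
    """Sweep candidates along one row so the same reference lines stay in
    the on-chip buffer (the data-reuse argument made in the abstract)."""
    return min((sad_subsampled(cur, ref, (y, x)), (y, x))
               for x in range(x0 - span, x0 + span + 1))
```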

Stereo Matching Algorithm Based on Fast Guided Image Filtering for 3-Dimensional Video Service (3차원 비디오 서비스를 위한 고속 유도 영상 필터링 기반 스테레오 매칭 알고리즘)

  • Hong, Gwang-Soo; Kim, Byung-Gyu
    • Journal of Digital Contents Society, v.17 no.6, pp.523-529, 2016
  • Stereo matching is an essential part of computer vision and computational photography; its main challenges are accuracy and computational complexity. Much research has been devoted to stereo matching based on cost-volume filtering of matching costs. Local stereo matching based on guided image filtering (GIF) has a computational complexity of $O(N)$, but this is still not fast enough for real-time 3-dimensional (3-D) video services. The proposed algorithm concentrates on reducing computational complexity using the fast guided image filter, which increases the speed to $O(N/s^2)$ for a sub-sampling ratio $s$. Experimental results indicate that the proposed algorithm achieves effective local stereo matching as well as a fast execution time for 3-D video service.
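
The $O(N/s^2)$ figure comes from running the guided filter's box-filter statistics on images down-sampled by $s$ and upsampling only the linear coefficients, following the general fast-guided-filter idea. A minimal sketch for single-channel float images is below; it is a generic OpenCV rendition under those assumptions, not the authors' code.

```python
import cv2
import numpy as np

def fast_guided_filter(I, p, r=8, eps=1e-3, s=4):
    """Guided filtering with statistics computed at 1/s resolution.
    I: guidance image, p: filtering input (both float32, single channel)."""
    rs = max(1, r // s)                      # box radius at the small scale
    small = lambda img: cv2.resize(img, None, fx=1/s, fy=1/s,
                                   interpolation=cv2.INTER_NEAREST)
    box = lambda img: cv2.boxFilter(img, -1, (2 * rs + 1, 2 * rs + 1))
    I_s, p_s = small(I), small(p)
    mean_I, mean_p = box(I_s), box(p_s)
    cov_Ip = box(I_s * p_s) - mean_I * mean_p
    var_I = box(I_s * I_s) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)               # per-pixel linear coefficients
    b = mean_p - a * mean_I
    up = lambda img: cv2.resize(img, (I.shape[1], I.shape[0]),
                                interpolation=cv2.INTER_LINEAR)
    return up(box(a)) * I + up(box(b))       # q = mean_a * I + mean_b
```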

The Comparison of Parameter Estimation for Nonhomogeneous Poisson Process Software Reliability Model (NHPP 소프트웨어 신뢰도 모형에 대한 모수 추정 비교)

  • Kim, Hee-Cheul; Lee, Sang-Sik; Song, Young-Jae
    • The KIPS Transactions: Part D, v.11D no.6, pp.1269-1276, 2004
  • Parameter estimation for the existing software reliability models of Goel-Okumoto and Yamada-Ohba-Osaki was reviewed, and a Rayleigh model based on the Rayleigh distribution was studied. This paper compares parameter estimation using the maximum likelihood estimator with Bayesian estimation based on Gibbs sampling, in order to analyze the behavior of the estimators. Model selection based on the sum of squared errors and the Braun statistic was employed to choose an efficient model. A numerical example is illustrated using real data. Superposition and mixture models are also discussed as areas for future development.
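
For the maximum-likelihood side of such a comparison, the Goel-Okumoto model has mean value function $m(t) = a(1 - e^{-bt})$ and intensity $\lambda(t) = ab\,e^{-bt}$, so the log-likelihood of failure times observed on $[0, T]$ is $\sum_i \log \lambda(t_i) - m(T)$. A minimal numerical MLE sketch follows; the failure times are made-up illustrative data, and the Gibbs-sampling counterpart would need the full conditionals developed in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def go_neg_loglik(params, times, T):
    """Negative log-likelihood of the Goel-Okumoto NHPP with
    lambda(t) = a*b*exp(-b*t) and m(t) = a*(1 - exp(-b*t))."""
    a, b = params
    if a <= 0 or b <= 0:
        return np.inf                         # keep the search in the valid region
    return -(np.sum(np.log(a * b) - b * times) - a * (1 - np.exp(-b * T)))

times = np.array([10., 25., 47., 80., 120., 190., 310., 490.])  # illustrative data
res = minimize(go_neg_loglik, x0=[10.0, 0.01], args=(times, times[-1]),
               method="Nelder-Mead")
a_hat, b_hat = res.x
```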

Traffic Flow Estimation based Channel Assignment for Wireless Mesh Networks

  • Pak, Woo-Guil; Bahk, Sae-Woong
    • KSII Transactions on Internet and Information Systems (TIIS), v.5 no.1, pp.68-82, 2011
  • Wireless mesh networks (WMNs) provide high-speed backbone networks without any wired cabling. Many researchers have tried to increase network throughput by using multiple channels and multiple radio interfaces. A multi-radio multi-channel WMN requires a channel assignment algorithm to decide the number of channels needed for each link. Since channel assignment directly affects routing and interference, it is a critical component for enhancing network performance. However, optimal channel assignment is known to be NP-complete. For high performance, most previous works assign channels in a centralized manner, but these are of limited use in dynamic network environments. In this paper, we propose a simple flow estimation algorithm and a hybrid channel assignment algorithm. Our flow estimation algorithm obtains aggregated flow-rate information between routers by packet sampling, thereby achieving high scalability. Our hybrid channel assignment algorithm first assigns channels in a centralized manner, then runs in a distributed manner to adjust the assignment when notable traffic changes are detected. This approach provides higher scalability and performance than existing algorithms, as confirmed through extensive performance evaluations.
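
The scalability of a sampling-based flow estimator rests on a standard argument: count sampled bytes per router pair and scale by the inverse sampling rate. The sketch below is a generic illustration under that assumption; the tuple fields and the sampling rate are invented for the example, not taken from the paper.

```python
import random
from collections import Counter

def estimate_flows(packets, rate=0.01, seed=1):
    """1-in-N packet sampling: accumulate sampled bytes per
    (src_router, dst_router) pair and scale by 1/rate to estimate the
    aggregated flow rate between routers."""
    rng = random.Random(seed)
    sampled = Counter()
    for src, dst, size in packets:            # packets: iterable of (src, dst, bytes)
        if rng.random() < rate:               # sample each packet with prob. `rate`
            sampled[(src, dst)] += size
    return {pair: nbytes / rate for pair, nbytes in sampled.items()}
```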

A Fast Volume Rendering Algorithm for Virtual Endoscopy

  • Ra Jong Beom; Kim Sang Hun; Kwon Sung Min
    • Journal of Biomedical Engineering Research, v.26 no.1, pp.23-30, 2005
  • 3D virtual endoscopy has been used as a non-invasive alternative for visualizing hollow organs. However, due to its computational complexity, it is a time-consuming procedure. In this paper, we propose a fast volume rendering algorithm based on perspective ray casting for virtual endoscopy. As a pre-processing step, the algorithm divides a volume into hierarchical blocks and classifies them as opaque or transparent. Then, in the first step, we perform ray casting only for sub-sampled pixels on the image plane and determine their pixel values and depth information. In the next step, by halving the sub-sampling factor, we repeat ray casting for the newly added pixels and determine their pixel values and depth information; the previously obtained depth information is used to reduce processing time. This step is performed recursively until a full-size rendered image is acquired. Experiments conducted on a PC show that the proposed algorithm reduces rendering time by 70-80% for bronchus and colon endoscopy compared with the brute-force ray casting scheme. With the proposed algorithm, interactive volume rendering becomes more realizable in a PC environment without any special hardware.
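
The coarse-to-fine loop in the abstract can be sketched as follows. `cast_ray` is an assumed user-supplied perspective ray caster accepting an optional depth hint (None at the coarsest level), and the neighbour-lookup rule for reusing depth is a plausible simplification, not the paper's exact method.

```python
def progressive_render(cast_ray, width, height, s0=8):
    """Render at every s-th pixel, then halve s and fill in new pixels,
    warm-starting each ray at the depth of an already-rendered neighbour."""
    color, depth = {}, {}
    s = s0
    while s >= 1:
        for y in range(0, height, s):
            for x in range(0, width, s):
                if (x, y) in color:
                    continue                   # already done at a coarser level
                # nearest pixel on the coarser (2s) grid, if any
                hint = depth.get((x - x % (2 * s), y - y % (2 * s)))
                color[(x, y)], depth[(x, y)] = cast_ray(x, y, hint)
        s //= 2                                # halve the sub-sampling factor
    return color
```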

Realtime Wireless Monitoring of Abnormal ST in ECG Using PC Based System

  • Jeong, Gu-Young; Yu, Kee-Ho; Kim, Nam-Gyun; Inooka, Hikaru
    • Institute of Control, Robotics and Systems: Conference Proceedings, 2004.08a, pp.176-180, 2004
  • The ST segment, which extends to the onset of the T wave, is an important diagnostic parameter for detecting myocardial ischemia. Abnormal ST appears in two forms: a level change and a pattern change. In this paper, we describe the monitoring of abnormal ST using a PC-based system. The hardware consists of a transmitter, a receiver, and a PC. The transmitter measures the ECG on three manually selected channels and sends the data to the receiver over a digital radio link. The receiver is connected to the PC via RS-232C, and the received data are analyzed automatically by an ECG analysis algorithm and saved to a file. In the algorithm for detecting abnormal ST, the ST segment is approximated by a polynomial. This method can detect both the deviation and the pattern change of the ST segment regardless of changes in heart rate or sampling rate. To improve reliability, the method rejects distorted polynomial approximations by calculating the difference between the approximated and original ST segments. In pre-processing, a wavelet transform separates the high-frequency bands containing the QRS complex from the original ECG, which improves the detection of each feature point. (A sketch of the fit-and-reject step appears after this entry.)

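The fit-and-reject step described in the abstract reduces to fitting a low-order polynomial over a normalized time axis (which makes the fit independent of sampling rate) and discarding beats whose approximation error is too large. The sketch below is illustrative; the polynomial degree and error threshold are assumptions, not the paper's values.

```python
import numpy as np

def st_segment_features(st, degree=3, max_err=0.05):
    """Fit a polynomial to the extracted ST-segment samples and reject the
    beat if the approximation error is too large (distorted fit)."""
    t = np.linspace(0.0, 1.0, len(st))    # normalized time axis: the fit is
    coef = np.polyfit(t, st, degree)      # independent of the sampling rate
    approx = np.polyval(coef, t)
    err = np.sqrt(np.mean((approx - st) ** 2))
    if err > max_err:
        return None                       # distorted approximation -> discard beat
    return coef, approx
```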

Combined Use of Data Imbalance Resolution Techniques Using a Genetic Algorithm (유전자 알고리즘을 활용한 데이터 불균형 해소 기법의 조합적 활용)

  • Jang, Yeong-Sik; Kim, Jong-U; Heo, Jun
    • Proceedings of the Korea Intelligent Information System Society Conference, 2007.05a, pp.309-320, 2007
  • The data imbalance problem, often encountered in data mining classification tasks, means that some classes have many more instances than others. It lowers the prediction accuracy of the minority class, because classifiers tend to assign instances to the majority classes and to ignore the minority class in order to reduce the overall misclassification rate. To address the data imbalance problem, a number of techniques have been proposed based on resampling with replacement, adjusting decision thresholds, and adjusting the costs of the different classes. In this paper, we study the feasibility of combining these previously proposed techniques, and suggest a method that uses a genetic algorithm to find the optimal combination ratio of the techniques. To improve the prediction accuracy of the minority class, we determine the combination ratio by using the F-value of the minority class as the fitness function of the genetic algorithm. To compare performance against single techniques and against grid combinations of random percentages, we performed experiments on four public datasets that have been widely used to benchmark methods for the data imbalance problem. The experimental results confirm the usefulness of the proposed method. (A minimal GA sketch appears after this entry.)

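A minimal real-coded genetic algorithm for the combination-ratio search might look like the sketch below. `eval_f1(ratios)` stands in for the paper's fitness, i.e., train a classifier with the given mixture of imbalance-handling techniques and return the minority-class F-value; the crossover, mutation, and population settings here are generic assumptions, not the paper's.

```python
import numpy as np

def ga_optimize_ratios(eval_f1, n_techniques, pop=20, gens=30, seed=0):
    """Search for the combination ratio (a point on the simplex) that
    maximizes the minority-class F-value returned by eval_f1."""
    rng = np.random.default_rng(seed)
    P = rng.dirichlet(np.ones(n_techniques), size=pop)   # ratios sum to 1
    for _ in range(gens):
        fit = np.array([eval_f1(ind) for ind in P])
        P = P[np.argsort(fit)[::-1]]                     # best first (elitist sort)
        children = []
        while len(children) < pop // 2:
            a, b = P[rng.integers(0, pop // 2, size=2)]  # parents from top half
            child = (a + b) / 2                          # arithmetic crossover
            child = np.clip(child + rng.normal(0, 0.05, n_techniques), 1e-6, None)
            children.append(child / child.sum())         # renormalize onto simplex
        P = np.vstack([P[:pop - len(children)], children])
    return P[0]                                          # best elite found
```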

Multihazard capacity optimization of an NPP using a multi-objective genetic algorithm and sampling-based PSA

  • Eujeong Choi; Shinyoung Kwag; Daegi Hahm
    • Nuclear Engineering and Technology, v.56 no.2, pp.644-654, 2024
  • After the Tohoku earthquake and tsunami (Japan, 2011), regulatory efforts to mitigate external hazards have increased both the safety requirements and the total capital cost of nuclear power plants (NPPs). In these circumstances, achieving not only disaster robustness but also cost-effective capacity settings has become one of the most important tasks for the nuclear power industry. A few studies have been performed on reallocating the seismic capacity of NPPs, yet the effects of multiple hazards have not been accounted for in NPP capacity optimization. The major challenges in extending this problem to the multihazard dimension are (1) the high computational cost of both multihazard risk quantification and system-level optimization and (2) the lack of capital-cost databases for NPPs. To resolve these issues, this paper proposes an effective method that identifies the optimal multihazard capacity of NPPs using a multi-objective genetic algorithm and the two-stage direct quantification of fault trees with Monte Carlo simulation (two-stage DQFM). A capacity-based indirect capital-cost measure is also proposed. The proposed method enables an NPP to achieve safety and cost-effectiveness against multiple hazards simultaneously on a computationally efficient platform. The framework is demonstrated and tested with an earthquake-tsunami example.
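
The Monte Carlo fault-tree quantification at the heart of DQFM-style methods can be sketched as follows: for each sampled hazard intensity, draw component failures from lognormal fragility curves and evaluate the system's Boolean top event. The fragility parameters, hazard samples, and example top event below are illustrative; the paper's two-stage structure and earthquake-tsunami coupling are not reproduced here.

```python
import numpy as np
from scipy.stats import norm

def mc_fault_tree_prob(medians, betas, hazards, top_event, n=100_000, seed=0):
    """Monte Carlo quantification of a fault tree under hazard uncertainty.
    medians/betas: lognormal fragility parameters per component;
    top_event: Boolean system function over component failure states."""
    rng = np.random.default_rng(seed)
    fails = 0
    for _ in range(n):
        a = hazards[rng.integers(len(hazards))]           # sampled intensity
        p_fail = norm.cdf(np.log(a / medians) / betas)    # fragility curves
        state = rng.random(len(p_fail)) < p_fail          # component failures
        fails += top_event(state)
    return fails / n

# e.g. top event: component 0 fails OR (components 1 AND 2 both fail)
top = lambda s: s[0] or (s[1] and s[2])
p = mc_fault_tree_prob(np.array([0.9, 1.2, 1.0]), np.array([0.4, 0.35, 0.5]),
                       hazards=np.array([0.3, 0.5, 0.8, 1.1]), top_event=top)
```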

Development of Unmatched System Model for Iterative Image Reconstruction for Pinhole Collimator of Imaging Systems in Nuclear Medicine (핀홀콜리메이터를 사용한 핵의학영상기기의 순환적 영상 재구성을 위한 비동일 시스템 모델 개발)

  • Bae, Jae-Keon; Bae, Seung-Bin; Lee, Ki-Sung; Kim, Yong-Kwon; Joung, Jin-Hun
    • Journal of radiological science and technology, v.35 no.4, pp.353-360, 2012
  • Diverse collimator designs are applied in single photon emission computed tomography (SPECT) according to the purpose of acquisition, so successful image reconstruction must reflect the geometric characteristics of each collimator. This study develops a reconstruction algorithm for nuclear medicine imaging systems with a pinhole collimator; in particular, we address the sampling problem that arises in the system model of the pinhole collimator. A system model for maximum likelihood expectation maximization (MLEM) was developed based on the geometry of the collimator. The projector and back-projector were implemented separately, using ray-driven and voxel-driven methods respectively, to overcome the sparse sampling problem. We performed a phantom study of the pinhole collimator using the Geant4 Application for Tomographic Emission (GATE) simulation tool. The reconstructed images show promising results: the iterative reconstruction algorithm with the unmatched system model effectively removes sampling artifacts. The proposed algorithm can be applied not only to pinhole collimators but also to various other collimator systems in nuclear medicine imaging.
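
The reconstruction loop itself is the standard MLEM update; what the paper changes is the use of an unmatched pair, a ray-driven projector and a voxel-driven back-projector that is not the projector's exact transpose. A minimal sketch with the two operators left as assumed user-supplied callables:

```python
import numpy as np

def mlem_unmatched(forward, back, y, x0, n_iter=20, eps=1e-12):
    """MLEM with an unmatched projector pair: `forward` is the ray-driven
    projector A, `back` the voxel-driven back-projector B (B need not be
    A's transpose). y: measured projections, x0: initial image estimate."""
    x = x0.copy()
    sens = back(np.ones_like(y))              # sensitivity image B(1)
    for _ in range(n_iter):
        yhat = forward(x)                     # ray-driven forward projection
        ratio = y / np.maximum(yhat, eps)     # measured / estimated projections
        x *= back(ratio) / np.maximum(sens, eps)   # multiplicative EM update
    return x
```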