• Title/Summary/Keyword: zero-truncation


FAST BDD TRUNCATION METHOD FOR EFFICIENT TOP EVENT PROBABILITY CALCULATION

  • Jung, Woo-Sik;Han, Sang-Hoon;Yang, Joon-Eon
    • Nuclear Engineering and Technology
    • /
    • v.40 no.7
    • /
    • pp.571-580
    • /
    • 2008
  • A Binary Decision Diagram (BDD) is a graph-based data structure that calculates an exact top event probability (TEP). Because BDDs are highly memory-consuming, it has been very difficult to develop an efficient BDD algorithm that can solve large problems. To solve large reliability problems within limited computational resources, many attempts have been made to minimize BDD size, such as static and dynamic variable ordering schemes. A further effort was the development of a ZBDD (zero-suppressed BDD) algorithm to calculate an approximate TEP. The present method is the first successful application of BDD truncation: it maintains a small BDD size by truncating the BDD during the calculation. Benchmark tests demonstrate the efficiency of the developed method, and the TEP rapidly converges to the exact value as the truncation limit is lowered.
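A minimal sketch of the truncation-limit idea, assuming a simple rare-event cut-set sum rather than the paper's actual BDD algorithm: cut sets whose probability falls below the limit are dropped, and the approximate TEP approaches the untruncated value as the limit is lowered.

```python
def approx_tep(cut_sets, basic_probs, limit):
    """Sum the probabilities of cut sets at or above the truncation limit."""
    tep = 0.0
    for cs in cut_sets:
        p = 1.0
        for event in cs:
            p *= basic_probs[event]
        if p >= limit:
            tep += p  # rare-event approximation: sum of cut-set probabilities
    return tep

# Illustrative basic-event probabilities and minimal cut sets (assumed values).
basic_probs = {"A": 1e-2, "B": 5e-3, "C": 1e-3, "D": 2e-4}
cut_sets = [("A", "B"), ("A", "C"), ("B", "C", "D")]

for limit in (1e-4, 1e-6, 0.0):
    print(f"limit={limit:g}  TEP~{approx_tep(cut_sets, basic_probs, limit):.6e}")
```

Lowering the limit admits more cut sets, so the approximation increases monotonically toward the untruncated sum, mirroring the convergence behavior the abstract describes.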

Fast Cardiac CINE MRI by Iterative Truncation of Small Transformed Coefficients

  • Park, Jinho;Hong, Hye-Jin;Yang, Young-Joong;Ahn, Chang-Beom
    • Investigative Magnetic Resonance Imaging
    • /
    • v.19 no.1
    • /
    • pp.19-30
    • /
    • 2015
  • Purpose: A new compressed sensing technique based on iterative truncation of small transformed coefficients (ITSC) is proposed for fast cardiac CINE MRI. Materials and Methods: The proposed reconstruction consists of two processes: truncation of the small transformed coefficients in the r-f domain, and restoration of the measured data in the k-t domain. The two processes are applied sequentially and iteratively until the reconstructed images converge, under the assumption that cardiac CINE images are inherently sparse in the r-f domain. A novel sampling strategy that reduces the normalized mean square error of the reconstructed images is also proposed. Results: The technique shows the lowest normalized mean square error among the four methods compared (zero filling, view sharing, k-t FOCUSS, and ITSC). ITSC was applied to multi-slice cardiac CINE imaging with 2 to 8 slices in a single breath-hold to demonstrate its clinical usefulness. Conclusion: Images reconstructed with compression factors of 3-4 appear very close to the uncompressed images. Furthermore, the proposed algorithm is computationally efficient and stable, requiring no matrix inversion during reconstruction.
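The two alternating steps can be sketched in one dimension, assuming (as an illustration only, not the authors' implementation) a signal sparse in the Fourier domain and randomly undersampled in time:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 128
t = np.arange(n)
# A toy signal that is sparse in the transform (Fourier) domain.
x_true = (np.cos(2 * np.pi * 5 * t / n)
          + 0.6 * np.cos(2 * np.pi * 17 * t / n)
          + 0.3 * np.sin(2 * np.pi * 30 * t / n))

mask = rng.random(n) < 0.5            # randomly measured sample locations

x = np.where(mask, x_true, 0.0)       # zero-filled starting point
for _ in range(50):
    X = np.fft.fft(x)
    X[np.abs(X) < 0.1 * np.abs(X).max()] = 0   # truncate small coefficients
    x = np.fft.ifft(X).real
    x[mask] = x_true[mask]            # restore the measured data

nmse = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"normalized error after ITSC-style iteration: {nmse:.4f}")
```

The threshold level and iteration count here are arbitrary choices for the toy problem; the point is only the alternation between coefficient truncation and data restoration.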

Model Reduction Considering Both Resonances and Antiresonances (공진과 반공진 특성을 동시고려한 모델 축소)

  • 허진석;이시복;이창일
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference
    • /
    • 2001.05a
    • /
    • pp.985-990
    • /
    • 2001
  • The frequency response functions (FRFs) of an FE model reduced by the SEREP method accurately match the full model at resonance frequencies; however, these FRFs are not accurate at antiresonance frequencies. In addition, truncation errors may be significant in the reduced model. This paper therefore examines, through a numerical study, whether the SEREP method can preserve the dynamic behavior at antiresonances, and applies static or dynamic compensation methods to the reduced model to correct the truncation errors. The compensated reduced model is then redesigned by pole-zero cancellation with the objective of reducing a resonance frequency.
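The SEREP reduction itself can be sketched on a toy system. The example below (illustrative only, without the paper's antiresonance or compensation analysis) shows the defining property that the reduced model reproduces the retained natural frequencies exactly:

```python
import numpy as np

# 4-DOF fixed-free spring-mass chain with unit masses and unit springs.
n = 4
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
K[-1, -1] = 1.0                      # free end
M = np.eye(n)

w2, Phi = np.linalg.eigh(K)          # M = I, so a standard eigenproblem
kept = Phi[:, :2]                    # retain the two lowest modes
masters = [0, 2]                     # master DOFs kept in the reduced model

# SEREP transformation: T = Phi_kept @ pinv(Phi_kept at the master DOFs)
T = kept @ np.linalg.pinv(kept[masters, :])
Kr = T.T @ K @ T
Mr = T.T @ M @ T

w2_reduced = np.sort(np.linalg.eigvals(np.linalg.solve(Mr, Kr)).real)
print("full model (lowest two):", np.round(w2[:2], 6))
print("SEREP-reduced model:    ", np.round(w2_reduced, 6))
```

Because the transformation is built from the retained eigenvectors, the reduced eigenvalues coincide with the retained ones; the mismatch the abstract discusses appears away from the resonances, at the antiresonances of the FRFs.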


The Effects of Dispersion Parameters and Test for Equality of Dispersion Parameters in Zero-Truncated Bivariate Generalized Poisson Models (제로절단된 이변량 일반화 포아송 분포에서 산포모수의 효과 및 산포의 동일성에 대한 검정)

  • Lee, Dong-Hee;Jung, Byoung-Cheol
    • The Korean Journal of Applied Statistics
    • /
    • v.23 no.3
    • /
    • pp.585-594
    • /
    • 2010
  • This study investigates the effects of the dispersion parameters of the two response variables in zero-truncated bivariate generalized Poisson distributions. A Monte Carlo study shows that the zero-truncated bivariate Poisson and negative binomial models fit poorly when the zero-truncated bivariate count data have heterogeneous dispersion parameters across the dependent variables. In addition, we derive a score test for the equality of the dispersion parameters and compare its efficiency with the likelihood ratio test.
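For readers unfamiliar with zero truncation, the univariate building block can be sketched directly (a generic illustration, not the paper's bivariate generalized model): the Poisson pmf is renormalized over the positive integers.

```python
from math import exp, factorial

def zt_poisson_pmf(k, lam):
    """P(X = k | X > 0) for X ~ Poisson(lam); zero is excluded, so k >= 1."""
    if k < 1:
        return 0.0
    return exp(-lam) * lam**k / (factorial(k) * (1.0 - exp(-lam)))

lam = 1.5
total = sum(zt_poisson_pmf(k, lam) for k in range(1, 60))
mean = sum(k * zt_poisson_pmf(k, lam) for k in range(1, 60))
print(f"pmf sums to {total:.6f}, mean = {mean:.6f}")
# The mean of a zero-truncated Poisson is lam / (1 - exp(-lam)),
# strictly larger than lam because the zeros are removed.
```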

A Truncated Booth Multiplier Architecture for Low Power Design (저전력 설계를 위한 전달된 Booth 곱셈기 구조)

  • Lee, Kwang-Hyun;Park, Chong-Suck
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.37 no.9
    • /
    • pp.55-65
    • /
    • 2000
  • In this paper, we propose a hardware-reduced multiplier for DSP applications. In many DSP applications, not all bits of the product are used; only the upper bits are needed. Kidambi proposed a truncated unsigned multiplier based on this idea. In this paper, we adapt this scheme to a Booth multiplier, which can be used in real DSP systems. Our design also guarantees that a zero input produces a zero output, a property not provided in the previous work. In addition, we propose a bit-extension scheme to further reduce the truncation error, and we apply the multiplier to FIR filters for a more efficient design.
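The source of the truncation error can be sketched in software (a behavioral model of dropping the low partial-product columns, not the paper's Booth hardware):

```python
def truncated_mult(a, b, n=8):
    """Fixed-width product model: each partial product loses its bits below
    column n before summation, and only the upper n bits are returned."""
    low_mask = (1 << n) - 1
    total = 0
    for i in range(n):
        if (b >> i) & 1:
            total += (a << i) & ~low_mask   # drop the low-column bits
    return total >> n

# Zero input guarantees zero output, and the result never exceeds the
# exact upper half of the full product.
a, b = 200, 99
exact_upper = (a * b) >> 8
print(exact_upper, truncated_mult(a, b))   # prints 77 76
```

Dropping the low columns saves the adder hardware that would sum them, at the cost of a bounded underestimate; a bit-extension scheme like the one the abstract mentions keeps a few extra columns to shrink that error.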


A Study on Online Catalog Search Behavior: Focused on Transaction Log Analysis of the LINNET System

  • 윤구호;심병규
    • Journal of Korean Library and Information Science Society
    • /
    • v.21
    • /
    • pp.253-289
    • /
    • 1994
  • The purpose of this study is to examine the search patterns of LINNET (Library Information Network System) OPAC users through the transaction logs maintained by the POSTECH (Pohang University of Science and Technology) Central Library, and to provide feedback for OPAC system design. The results are as follows. First, during the analysis period there were 11,218 log-ins and 40,627 transaction logs, or 3.62 retrievals per log-in. Title keyword was the most frequently used search key, while accession number, bibliographic control number, and call number were used very infrequently. Second, 47.02% of OPAC searches retrieved nothing, and bibliographic control number was the least successful search key. Users displayed full information in 2.01% of searches, and local information in 64.27% of the full-information displays. Third, special or advanced retrieval features were used very infrequently: only 22.67% of searches used right truncation, only 0.71% used a qualifier, and a Boolean operator appeared only once in every 22 retrievals. The most frequently used operator was 'and (&)' with title keywords, while bibliographic control number (N) and accession number (R) were never used with any operator. The causes of search failure were: (1) the item was not in the database (15,764 times, 79.42%); (2) a wrong search key was used (3,761 times, 18.95%); (3) a meaningless string (garbage) was entered (324 times, 1.63%). On the basis of these results, several recommendations are suggested to improve the search success rate. First, appropriate user education and an online help function would let users search the LINNET OPAC more efficiently. Second, several corrections to the retrieval software would decrease the search failure rate. Third, the system could apply right truncation to every search term by default; this would increase the success rate but should be considered carefully, since it could inflate the number of hits and cause system overhead. Fourth, the system could apply a Boolean operator by default to every keyword retrieval when a user enters two or more words at a time. Fifth, the system could help searchers overcome mistyped search keys by automatic Korean/English mode switching.


A New Tailored Sinc Pulse and Its Use for Multiband Pulse Design

  • Park, Jinil;Park, Jang-Yeon
    • Investigative Magnetic Resonance Imaging
    • /
    • v.20 no.1
    • /
    • pp.27-35
    • /
    • 2016
  • Purpose: Among RF pulses, the sinc pulse is typically used for slice selection because of its frequency selectivity. When implemented in practice, a sinc pulse must be apodized to avoid truncation artifacts, at the expense of broadening the transition region of the excited-band profile. Here we propose a sinc pulse tailored by a new apodization function that produces a sharper transition region with good suppression of truncation artifacts compared with conventional tailored sinc pulses. A multiband pulse designed from this newly apodized sinc pulse is also presented, inheriting its improved performance. Materials and Methods: A new apodization function is introduced to taper the sinc pulse; it slightly shifts the first zero-crossing of the tailored sinc pulse away from the peak of the main lobe, thereby producing a narrower bandwidth and a sharper pass-band in the excitation profile. The newly apodized sinc pulse was also used to design a multiband pulse that inherits the performance of its constituent. The performance of the proposed sinc pulse and the multiband pulse generated from it was demonstrated by Bloch simulation and phantom imaging. Results: In both simulations and experiments, the newly apodized sinc pulse yielded a narrower bandwidth and a sharper pass-band transition, with a desirable degree of side-lobe suppression, compared with the commonly used Hanning-windowed sinc pulse. The multiband pulse designed from the newly apodized sinc pulse likewise outperformed the one designed with the Hanning-windowed sinc pulse in multi-slice excitation. Conclusion: The new tailored sinc pulse proposed here provides better slice (or slab) selection than conventional tailored sinc pulses. Thanks to the availability of an analytical expression, it can also be used for multiband pulse design with great flexibility and ease of implementation, transferring its better performance.
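The trade-off described above can be sketched with the conventional baseline (a Hanning-windowed sinc; the paper's new apodization function is not reproduced here):

```python
import numpy as np

n_lobes, n_pts = 3, 257                  # 3 zero-crossings per side
t = np.linspace(-n_lobes, n_lobes, n_pts)
sinc = np.sinc(t)                        # sin(pi t) / (pi t)
hanning = 0.5 * (1.0 + np.cos(np.pi * t / n_lobes))
pulse = sinc * hanning                   # apodized RF envelope

side = np.abs(t) > 1.0                   # outside the main lobe
print("peak side lobe, bare sinc:", np.abs(sinc[side]).max())
print("peak side lobe, apodized: ", np.abs(pulse[side]).max())
```

Apodization tapers the truncated sinc to zero at its ends, suppressing the side lobes (the truncation artifacts) at the cost of a wider transition band; the abstract's contribution is a window that relaxes this trade-off.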

How to incorporate human failure event recovery into minimal cut set generation stage for efficient probabilistic safety assessments of nuclear power plants

  • Jung, Woo Sik;Park, Seong Kyu;Weglian, John E.;Riley, Jeff
    • Nuclear Engineering and Technology
    • /
    • v.54 no.1
    • /
    • pp.110-116
    • /
    • 2022
  • Human failure event (HFE) dependency analysis is a part of human reliability analysis (HRA). For efficient HFE dependency analysis, a maximum number of minimal cut sets (MCSs) that have HFE combinations are generated from the fault trees for the probabilistic safety assessment (PSA) of nuclear power plants (NPPs). After collecting potential HFE combinations, dependency levels of subsequent HFEs on the preceding HFEs in each MCS are analyzed and assigned as conditional probabilities. Then, HFE recovery is performed to reflect these conditional probabilities in MCSs by modifying MCSs. Inappropriate HFE dependency analysis and HFE recovery might lead to an inaccurate core damage frequency (CDF). Using the above process, HFE recovery is performed on MCSs that are generated with a non-zero truncation limit, where many MCSs that have HFE combinations are truncated. As a result, the resultant CDF might be underestimated. In this paper, a new method is suggested to incorporate HFE recovery into the MCS generation stage. Compared to the current approach with a separate HFE recovery after MCS generation, this new method can (1) reduce the total time and burden for MCS generation and HFE recovery, (2) prevent the truncation of MCSs that have dependent HFEs, and (3) avoid CDF underestimation. This new method is a simple but very effective means of performing MCS generation and HFE recovery simultaneously and improving CDF accuracy. The effectiveness and strength of the new method are clearly demonstrated and discussed with fault trees and HFE combinations that have joint probabilities.
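The underestimation mechanism can be sketched with toy numbers (illustrative only; the probabilities, event names, and truncation limit below are assumptions, not values from the paper): a cut set whose independent HFE product falls below the truncation limit survives once the joint probability of the dependent HFEs is applied.

```python
probs = {"H1": 1e-3, "H2": 1e-3, "P1": 1e-2}     # HFEs H1, H2; hardware P1
joint = {("H1", "H2"): 5e-4}                      # dependent HFE pair

def cutset_prob(cs, hfes=("H1", "H2")):
    """Cut-set probability using the joint probability for dependent HFEs."""
    key = tuple(sorted(e for e in cs if e in hfes))
    p = joint.get(key)
    if p is None:
        p, rest = 1.0, cs                          # no dependent combination
    else:
        rest = [e for e in cs if e not in hfes]    # HFEs already accounted for
    for e in rest:
        p *= probs[e]
    return p

cs = ("H1", "H2", "P1")
independent = probs["H1"] * probs["H2"] * probs["P1"]   # ~1e-8
dependent = cutset_prob(cs)                              # ~5e-6
truncation_limit = 1e-7
print(independent < truncation_limit < dependent)        # prints True
```

With independent probabilities the cut set would be truncated and its contribution lost; evaluating the joint probability during generation, as the paper proposes, keeps it in the CDF.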

Linear Unequal Error Protection Codes based on Terminated Convolutional Codes

  • Bredtmann, Oliver;Czylwik, Andreas
    • Journal of Communications and Networks
    • /
    • v.17 no.1
    • /
    • pp.12-20
    • /
    • 2015
  • Convolutional codes which are terminated by direct truncation (DT) and zero tail termination provide unequal error protection. When DT terminated convolutional codes are used to encode short messages, they have interesting error protection properties. Such codes match the significance of the output bits of common quantizers and therefore lead to a low mean square error (MSE) when they are used to encode quantizer outputs which are transmitted via a noisy digital communication system. A code construction method that allows adapting the code to the channel is introduced, which is based on time-varying convolutional codes. We can show by simulations that DT terminated convolutional codes lead to a lower MSE than standard block codes for all channel conditions. Furthermore, we develop an MSE approximation which is based on an upper bound on the error probability per information bit. By means of this MSE approximation, we compare the convolutional codes to linear unequal error protection code construction methods from the literature for code dimensions which are relevant in analog to digital conversion systems. In numerous situations, the DT terminated convolutional codes have the lowest MSE among all codes.
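The two termination schemes can be sketched with a small rate-1/2 encoder (generators (7, 5) in octal, chosen as a common textbook example; this is not the paper's time-varying construction):

```python
G = (0b111, 0b101)    # generator polynomials, constraint length 3

def conv_encode(bits, zero_tail=False):
    """Rate-1/2 convolutional encoding; direct truncation (DT) stops at the
    last message bit, zero-tail termination appends two zeros to flush
    the encoder memory back to the all-zero state."""
    msg = list(bits) + ([0, 0] if zero_tail else [])
    reg, out = 0, []
    for b in msg:
        reg = ((reg << 1) | b) & 0b111        # shift the new bit in
        out.extend(bin(reg & g).count("1") % 2 for g in G)
    return out

msg = [1, 0, 1, 1]
dt = conv_encode(msg)                  # direct truncation: 8 coded bits
zt = conv_encode(msg, zero_tail=True)  # zero tail: 12 coded bits
print(len(dt), len(zt), dt == zt[:len(dt)])   # prints 8 12 True
```

DT saves the tail overhead but leaves the final message bits with fewer parity checks than the earlier ones, which is exactly the unequal error protection the abstract exploits.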

A reversible data hiding scheme in JPEG bitstreams using DCT coefficients truncation

  • Zhang, Mingming;Zhou, Quan;Hu, Yanlang
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.1
    • /
    • pp.404-421
    • /
    • 2020
  • A reversible data hiding scheme for JPEG compressed bitstreams is proposed that avoids decoding failure and file expansion by removing the bitstream segments corresponding to high-frequency coefficients and embedding the secret data in the file header as a comment segment. The original JPEG image is decoded to quantized 8×8 DCT blocks, and a high frequency is sought as an optimal termination point, beyond which the coefficients are set to zero. The blocks are divided into two parts, with a slightly smaller termination point in the latter part so that all blocks remain available for substitution. Spare space is then reserved after the comment marker for the secret data, so that data extraction at the receiver is independent of image recovery. Marked images display normally, and the deviation is difficult to perceive by eye. The termination point adapts to the secret size: a payload below 500 bits produces negligible distortion and a PSNR of approximately 50 dB, while the PSNR mostly remains above 30 dB for payloads up to 25,000 bits. The experimental results show that the proposed technique has significant advantages in computational complexity and file-size preservation for small hiding capacities, compared with previous methods.
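The truncation step can be sketched as follows (an illustration ordering coefficients by a simple i+j frequency proxy; the actual scheme operates on the JPEG zigzag scan of the entropy-coded blocks):

```python
import numpy as np

def truncate_block(block, term):
    """Keep the `term` lowest-frequency coefficients of an 8x8 quantized
    DCT block and zero everything beyond the termination point."""
    order = sorted(((i, j) for i in range(8) for j in range(8)),
                   key=lambda ij: (ij[0] + ij[1], ij[0]))
    out = block.copy()
    for i, j in order[term:]:
        out[i, j] = 0
    return out

rng = np.random.default_rng(1)
block = rng.integers(-50, 50, size=(8, 8))   # stand-in quantized DCT block
trunc = truncate_block(block, 10)
print("nonzero coefficients kept:", np.count_nonzero(trunc))
```

Zeroing the tail shortens the entropy-coded run for each block, and the bytes saved are what the scheme reuses to carry the secret payload in the header comment.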