• Title/Summary/Keyword: Check sum

126 search results

Combined Normalized and Offset Min-Sum Algorithm for Low-Density Parity-Check Codes (LDPC 부호의 복호를 위한 정규화와 오프셋이 조합된 최소-합 알고리즘)

  • Lee, Hee-ran; Yun, In-Woo; Kim, Joon Tae
    • Journal of Broadcast Engineering, v.25 no.1, pp.36-47, 2020
  • The improved belief-propagation-based algorithms, such as the normalized min-sum algorithm (NMSA) and the offset min-sum algorithm (OMSA), are widely used to decode LDPC (low-density parity-check) codes because they are less computationally complex and work well even at low SNR (signal-to-noise ratio). However, these algorithms perform well only when an appropriate normalization factor or offset value is used. A recently proposed method that uses a CMD (check node message distribution) chart and the least-squares method has a computational-complexity advantage over other approaches to obtaining optimal coefficients, and it can also derive coefficients for each iteration. In this paper, we apply this method and propose an algorithm that derives a combination of normalization factor and offset value for a combined normalized and offset min-sum algorithm, to further improve the decoding of LDPC codes. Simulations on the LDPC codes of the next-generation broadcasting standard ATSC 3.0 show that a combined normalized and offset min-sum algorithm using the proposed correction coefficients achieves the best BER performance among the compared decoding algorithms.
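
The combined update described above reduces, at each check node, to taking the extrinsic minimum magnitude, multiplying it by a normalization factor, and subtracting an offset (clamped at zero). Below is a minimal sketch of that check-node update; the values of alpha and beta are placeholders, since the paper derives its correction coefficients per iteration from a CMD chart and least squares.

```python
# Hedged sketch: combined normalized and offset min-sum check-node update.
# alpha (normalization factor) and beta (offset) are assumed placeholder
# coefficients, not the per-iteration values derived in the paper.
import numpy as np

def check_node_update(llrs_in, alpha=0.8, beta=0.1):
    """Return check-to-variable messages for one check node.

    llrs_in: 1-D array of variable-to-check LLRs arriving at this check node.
    Each output message excludes the contribution of its own edge.
    """
    llrs_in = np.asarray(llrs_in, dtype=float)
    signs = np.sign(llrs_in)
    signs[signs == 0] = 1.0
    total_sign = np.prod(signs)
    mags = np.abs(llrs_in)

    # First and second minima of the incoming magnitudes.
    order = np.argsort(mags)
    min1_idx, min1, min2 = order[0], mags[order[0]], mags[order[1]]

    out = np.empty_like(mags)
    for i in range(len(mags)):
        m = min2 if i == min1_idx else min1     # extrinsic minimum (own edge excluded)
        m = max(alpha * m - beta, 0.0)          # normalize, then offset, clamp at zero
        out[i] = total_sign * signs[i] * m      # extrinsic sign product
    return out

print(check_node_update([1.5, -0.4, 2.3, 0.9]))
```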

New Min-sum LDPC Decoding Algorithm Using SNR-Considered Adaptive Scaling Factors

  • Jung, Yongmin; Jung, Yunho; Lee, Seongjoo; Kim, Jaeseok
    • ETRI Journal, v.36 no.4, pp.591-598, 2014
  • This paper proposes a new min-sum algorithm for low-density parity-check decoding. In this paper, we first define the negative and positive effects of the received signal-to-noise ratio (SNR) in the min-sum decoding algorithm. To improve the performance of error correction by considering the negative and positive effects of the received SNR, the proposed algorithm applies adaptive scaling factors not only to extrinsic information but also to a received log-likelihood ratio. We also propose a combined variable and check node architecture to realize the proposed algorithm with low complexity. The simulation results show that the proposed algorithm achieves up to 0.4 dB coding gain with low complexity compared to existing min-sum-based algorithms.
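
As a rough illustration of the idea in this abstract, the sketch below applies separate SNR-dependent scaling factors to the channel LLR and to the summed extrinsic messages at a variable node. The factor values and SNR thresholds are hypothetical placeholders, not the ones derived in the paper.

```python
# Hedged sketch: SNR-adaptive scaling applied both to the received channel
# LLR and to the extrinsic check-to-variable messages. The lookup table
# below is a hypothetical placeholder.
def scaling_factors(snr_db):
    # Assumed rule: lower SNR -> damp extrinsic information more heavily.
    if snr_db < 1.0:
        return 0.75, 0.9   # (extrinsic scale, channel-LLR scale)
    elif snr_db < 2.5:
        return 0.8, 1.0
    return 0.875, 1.0

def variable_node_update(channel_llr, extrinsic_msgs, snr_db):
    """Total LLR at a variable node with SNR-adaptive scaling."""
    s_ext, s_ch = scaling_factors(snr_db)
    return s_ch * channel_llr + s_ext * sum(extrinsic_msgs)

print(variable_node_update(-1.2, [0.6, -0.3, 1.1], snr_db=2.0))
```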

Selection-based Low-cost Check Node Operation for Extended Min-Sum Algorithm

  • Park, Kyeongbin; Chung, Ki-Seok
    • KSII Transactions on Internet and Information Systems (TIIS), v.15 no.2, pp.485-499, 2021
  • Although non-binary low-density parity-check (NB-LDPC) codes have better error-correction capability than binary LDPC codes, their decoding complexity is significantly higher. Therefore, it is crucial to reduce the decoding complexity of NB-LDPC codes while maintaining their error-correction capability so that they can be adopted for various applications. The extended min-sum (EMS) algorithm is widely used for decoding NB-LDPC codes; it reduces the complexity of check node (CN) operations via message truncation. Herein, we propose a low-cost CN processing method to reduce the complexity of CN operations, which take most of the decoding time. Unlike existing studies on low-complexity CN operations, the proposed method employs a quick-selection algorithm, thereby reducing both the hardware complexity and the CN operation time. The experimental results show that the proposed selection-based CN operation is more than three times faster and achieves better error-correction performance than the conventional EMS algorithm.
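
The truncation step that EMS applies to each non-binary message amounts to keeping only the n_m most reliable entries, which is a selection rather than a sorting problem. The sketch below illustrates this with Python's heapq.nsmallest standing in for the hardware quick-selection unit described in the paper; the value of n_m and the GF(8) example message are illustrative.

```python
# Hedged sketch: EMS-style message truncation via a selection algorithm
# instead of a full sort. heapq.nsmallest is a stand-in for the paper's
# hardware quick-selection unit; n_m is a design parameter.
import heapq

def truncate_message(llr_vector, n_m):
    """Keep the n_m most reliable (smallest-cost) entries of an NB-LDPC
    message, returned as (symbol index, cost) pairs sorted by cost."""
    return heapq.nsmallest(n_m, enumerate(llr_vector), key=lambda kv: kv[1])

# Example: a GF(8) message truncated to its 4 best entries.
msg = [0.0, 3.2, 1.1, 4.5, 0.7, 2.9, 5.0, 1.8]
print(truncate_message(msg, n_m=4))
```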

Simplified 2-Dimensional Scaled Min-Sum Algorithm for LDPC Decoder

  • Cho, Keol; Lee, Wang-Heon; Chung, Ki-Seok
    • Journal of Electrical Engineering and Technology, v.12 no.3, pp.1262-1270, 2017
  • Among the various decoding algorithms for low-density parity-check (LDPC) codes, the min-sum (MS) algorithm and its modified versions are widely adopted because of their computational simplicity compared to the sum-product (SP) algorithm, at a slight loss of decoding performance. In the MS algorithm, the magnitude of the output message from a check node (CN) processing unit is determined by either the smallest or the next-smallest input message, denoted as min1 and min2, respectively. It has been shown that multiplying the CN output message by a scaling factor improves the decoding performance. Furthermore, Zhong et al. have shown that multiplying min1 and min2 by different scaling factors (called 2-dimensional scaling) further improves the performance of the LDPC decoder. In this paper, the simplified 2-dimensional scaled (S2DS) MS algorithm is proposed. In the proposed algorithm, we find a pair of efficient scaling factors whose multiplications can be replaced by combinations of addition and shift operations. Furthermore, one scaling operation is approximated by the difference between min1 and min2. The simulation results show that S2DS achieves error-correction performance close to or better than that of the SP algorithm regardless of the coding rate, and its computational complexity is the lowest among the modified MS algorithms.
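
The key implementation point, replacing the two scaling multiplications with shifts and additions on fixed-point magnitudes, can be illustrated as follows. The factor pair 0.75 and 0.875 is chosen only because it is shift-friendly; it is not claimed to be the pair derived in the paper.

```python
# Hedged sketch of 2-D scaling with shift-and-add arithmetic: min1 and min2
# receive different scaling factors, and each multiplication is replaced by
# additions and bit shifts on integer fixed-point magnitudes. The factors
# 0.75 (= 1/2 + 1/4) and 0.875 (= 1 - 1/8) are illustrative.
def scale_min1(m):                      # m is an integer fixed-point magnitude
    return (m >> 1) + (m >> 2)          # approx. 0.75 * m

def scale_min2(m):
    return m - (m >> 3)                 # approx. 0.875 * m

min1, min2 = 12, 20                     # first two minima from the CN inputs
print(scale_min1(min1), scale_min2(min2))   # 9, 18 (integer truncation)
```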

Self-Adaptive Termination Check of Min-Sum Algorithm for LDPC Decoders Using the First Two Minima

  • Cho, Keol; Chung, Ki-Seok
    • KSII Transactions on Internet and Information Systems (TIIS), v.11 no.4, pp.1987-2001, 2017
  • Low-density parity-check (LDPC) codes have attracted great attention because of their excellent error-correction capability with reasonably low decoding complexity. Among the decoding algorithms for LDPC codes, the min-sum (MS) algorithm and its modified versions have been widely adopted due to their high efficiency in hardware implementation. In this paper, a self-adaptive MS algorithm using the difference between the first two minima is proposed for faster decoding and lower power consumption. Finding the first two minima is an important operation when MS-based LDPC decoders are implemented in hardware, and the found minima are often compressed as the difference between the two values to reduce interconnection complexity and memory usage. It is found that decoding does not terminate successfully when these difference values are bounded. Thus, the proposed method dynamically decides whether the termination-checking step is carried out, based on the difference between the two found minima. The simulation results show that the decoding speed is improved by 7% and the power consumption is reduced by 16.34% by skipping unnecessary steps in unsuccessful iterations, without any loss in error-correction performance. In addition, the synthesis results show that the hardware overhead of the proposed method is negligible.
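
A rough sketch of the control idea follows: gather the min2 - min1 differences produced during check-node processing and use them to decide whether running the syndrome-based termination check is worthwhile in the current iteration. The threshold rule is an assumption standing in for the paper's criterion.

```python
# Hedged sketch: decide per iteration whether the (relatively costly)
# syndrome-based termination check should run, based on the min2 - min1
# differences from all check nodes. The threshold rule is an assumption.
import numpy as np

def should_check_termination(min1s, min2s, threshold=0.5):
    """Return True when the termination (syndrome) check is worth running."""
    diffs = np.asarray(min2s, dtype=float) - np.asarray(min1s, dtype=float)
    return bool(np.min(diffs) > threshold)   # assumed rule: every check looks reliable

def syndrome_is_zero(H, hard_bits):
    return not np.any(H.dot(hard_bits) % 2)

# Usage inside the decoding loop (min1s, min2s, H, hard_bits come from the
# current iteration's check-node processing and hard decisions):
# if should_check_termination(min1s, min2s) and syndrome_is_zero(H, hard_bits):
#     break
```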

Code Rate 1/2, 2304-b LDPC Decoder for IEEE 802.16e WiMAX (IEEE 802.16e WiMAX용 부호율 1/2, 2304-비트 LDPC 복호기)

  • Kim, Hae-Ju; Shin, Kyung-Wook
    • The Journal of Korean Institute of Communications and Information Sciences, v.36 no.4A, pp.414-422, 2011
  • This paper describes the design of a low-density parity-check (LDPC) decoder supporting the 2,304-bit block length and 1/2 code rate of the IEEE 802.16e mobile WiMAX standard. The designed LDPC decoder employs the min-sum algorithm and a partially parallel layered-decoding architecture that processes a 96×96 sub-matrix in parallel. By exploiting the properties of the min-sum algorithm, a new memory-reduction technique is proposed, which reduces the check node memory by 46% compared to the conventional method. Functional verification results show an average bit error rate (BER) of 4.34×10^-5 for an AWGN channel at Eb/No = 2.1 dB. The LDPC decoder synthesized with a 0.18-μm CMOS cell library has 174,181 gates and 52,992 bits of memory, and the estimated throughput is about 417 Mbps at 100 MHz and 1.8 V.
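
The memory reduction exploits a standard property of min-sum check-node processing: every outgoing message of a check node can be regenerated from min1, min2, the index of min1, and the per-edge signs, so only these need to be stored. The sketch below shows that regeneration; the 46% figure comes from the paper's specific layered architecture, not from this sketch.

```python
# Hedged sketch of compressed check-node storage under the min-sum
# algorithm: keep only min1, min2, the index of min1, and the per-edge
# signs, then regenerate each outgoing message on demand.
from dataclasses import dataclass
from typing import List

@dataclass
class CompressedCNState:
    min1: float
    min2: float
    min1_index: int
    signs: List[int]            # +1 / -1 per edge

    def message(self, edge: int) -> float:
        """Regenerate the check-to-variable message on a given edge."""
        mag = self.min2 if edge == self.min1_index else self.min1
        total_sign = 1
        for s in self.signs:
            total_sign *= s
        return total_sign * self.signs[edge] * mag

state = CompressedCNState(min1=0.4, min2=0.9, min1_index=1,
                          signs=[+1, -1, +1, +1])
print([state.message(e) for e in range(4)])
```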

An analysis of BER performance of LDPC decoder for WiMAX (WiMAX용 LDPC 복호기의 비트오율 성능 분석)

  • Kim, Hae-Ju; Shin, Kyung-Wook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2010.05a, pp.771-774, 2010
  • In this paper, the BER performance of an LDPC (low-density parity-check) decoder for WiMAX is analyzed, and optimal design conditions for the LDPC decoder are derived. The min-sum LDPC decoding algorithm, which is based on an approximation of the LLR sum-product algorithm, is modeled and simulated in MATLAB, and the effects of the LLR approximation bit-width and the maximum number of iterations on the bit error rate (BER) performance of the LDPC decoder are analyzed. The parity-check matrix of the IEEE 802.16e standard, with a block length of 2,304 and a code rate of 1/2, is used, and an AWGN channel with QPSK modulation is assumed. The simulation results show that optimal BER performance is achieved with 7 iterations and an LLR bit-width of (8,6).
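
A bit-accurate BER model of this kind needs a fixed-point quantizer for the LLRs. The sketch below assumes the (8,6) notation means 8 total bits with 6 fractional bits (a signed format saturating near ±2); if the paper's convention differs, only the two parameters change.

```python
# Hedged sketch: fixed-point LLR quantization for a bit-accurate BER model,
# assuming (8,6) = 8 total bits with 6 fractional bits.
import numpy as np

def quantize_llr(llr, total_bits=8, frac_bits=6):
    scale = 1 << frac_bits
    lo = -(1 << (total_bits - 1))
    hi = (1 << (total_bits - 1)) - 1
    q = np.clip(np.round(np.asarray(llr) * scale), lo, hi)   # round and saturate
    return q / scale            # back to real values on the quantized grid

print(quantize_llr([3.7, -0.013, -5.2]))   # saturates at about +/-2
```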

On Combining Chase-2 and Sum-Product Algorithms for LDPC Codes

  • Tong, Sheng; Zheng, Huijuan
    • ETRI Journal, v.34 no.4, pp.629-632, 2012
  • This letter investigates the combination of the Chase-2 and sum-product (SP) algorithms for low-density parity-check (LDPC) codes. A simple modification of the tanh rule for check node update is given, which incorporates test error patterns (TEPs) used in the Chase algorithm into SP decoding of LDPC codes. Moreover, a simple yet effective approach is proposed to construct TEPs for dealing with decoding failures with low-weight syndromes. Simulation results show that the proposed algorithm is effective in improving both the waterfall and error floor performance of LDPC codes.
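
For reference, the unmodified tanh rule for the check-node update in sum-product decoding is sketched below; the paper's modification, which incorporates Chase-2 test error patterns (TEPs) into this update, is not reproduced here.

```python
# Hedged sketch: standard tanh rule for the check-node update in
# sum-product decoding, L_out_i = 2*atanh( prod_{j != i} tanh(L_j / 2) ).
import numpy as np

def tanh_rule(llrs_in):
    """Check-to-variable messages computed with the tanh rule."""
    t = np.tanh(np.asarray(llrs_in, dtype=float) / 2.0)
    out = np.empty_like(t)
    for i in range(len(t)):
        prod = np.prod(np.delete(t, i))                       # exclude own edge
        out[i] = 2.0 * np.arctanh(np.clip(prod, -0.999999, 0.999999))
    return out

print(tanh_rule([1.5, -0.4, 2.3, 0.9]))
```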

LDPC Decoding by Failed Check Nodes for Serial Concatenated Code

  • Yu, Seog Kun; Joo, Eon Kyeong
    • ETRI Journal, v.37 no.1, pp.54-60, 2015
  • The use of serial concatenated codes is an effective technique for alleviating the error floor phenomenon of low-density parity-check (LDPC) codes. An enhanced sum-product algorithm (SPA) for LDPC codes, which is suitable for serial concatenated codes, is proposed in this paper. The proposed algorithm minimizes the number of errors by using the failed check nodes (FCNs) in LDPC decoding. Hence, the error-correcting capability of the serial concatenated code can be improved. The number of FCNs is simply obtained by the syndrome test, which is performed during the SPA. Hence, the decoding procedure of the proposed algorithm is similar to that of the conventional algorithm. The error performance of the proposed algorithm is analyzed and compared with that of the conventional algorithm. As a result, a gain of 1.4 dB can be obtained by the proposed algorithm at a bit error rate of 10^-8. In addition, the error performance of the proposed algorithm with just 30 iterations is shown to be superior to that of the conventional algorithm with 100 iterations.
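
The FCN count used by the proposed algorithm is simply the weight of the syndrome of the current hard decision, which an SPA decoder already computes for its stopping test. A minimal sketch of that count follows; how the algorithm then uses it to correct bits is not shown.

```python
# Hedged sketch: the number of failed check nodes (FCNs) equals the number
# of unsatisfied parity checks for the current hard decision.
import numpy as np

def failed_check_nodes(H, llr_total):
    """Count unsatisfied parity checks for the hard decision on the LLRs."""
    hard = (np.asarray(llr_total) < 0).astype(int)   # LLR < 0 -> bit 1
    syndrome = H.dot(hard) % 2
    return int(syndrome.sum())

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
print(failed_check_nodes(H, [2.1, -0.5, 1.8, 0.9, -1.2, 0.4]))
```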

Design of Asynchronous Nonvolatile Memory Module using Self-diagnosis Function (자기진단 기능을 이용한 비동기용 불휘발성 메모리 모듈의 설계)

  • Shin, Woohyeon; Yang, Oh; Yeon, Jun Sang
    • Journal of the Semiconductor & Display Technology, v.21 no.1, pp.85-90, 2022
  • In this paper, an asynchronous nonvolatile memory module using a self-diagnosis function is designed. A working system must input and output large amounts of data and therefore needs memory in which the data can be stored. Volatile memory is fast, but its data is lost without power; nonvolatile memory is slow, but it can hold data semi-permanently without power. Nonvolatile static random-access memory is designed to resolve this trade-off. However, nonvolatile static random-access memory is vulnerable to external noise and electrical shock, which can corrupt its data. To handle such data errors, self-diagnosis algorithms using an error-correcting code, CRC-32 (cyclic redundancy check), and a data checksum were applied to the nonvolatile static random-access memory to increase the reliability and accuracy of data retention. In addition, the applicability of the module to asynchronous nonvolatile storage systems requiring high reliability is suggested.
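
As a rough illustration of the self-diagnosis idea, the sketch below protects a stored record with both a CRC-32 and a simple additive checksum and verifies them on read-back. zlib.crc32 stands in for the module's hardware CRC, and the record layout (payload, 4-byte CRC, 2-byte checksum) is an assumption.

```python
# Hedged sketch: guard a stored record with a CRC-32 and an additive
# checksum, then verify both on read-back. The record layout is assumed.
import zlib

def protect(payload: bytes) -> bytes:
    crc = zlib.crc32(payload) & 0xFFFFFFFF
    csum = sum(payload) & 0xFFFF
    return payload + crc.to_bytes(4, "little") + csum.to_bytes(2, "little")

def verify(record: bytes) -> bool:
    payload, crc_b, csum_b = record[:-6], record[-6:-2], record[-2:]
    crc_ok = (zlib.crc32(payload) & 0xFFFFFFFF) == int.from_bytes(crc_b, "little")
    csum_ok = (sum(payload) & 0xFFFF) == int.from_bytes(csum_b, "little")
    return crc_ok and csum_ok

record = protect(b"sensor log 0001")
print(verify(record))                              # True
print(verify(record[:3] + b"\x00" + record[4:]))   # False after a corrupted byte
```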