• Title/Summary/Keyword: sum-product algorithm


Performance and Convergence Analysis of Tree-LDPC codes on the Min-Sum Iterative Decoding Algorithm (Min-Sum 반복 복호 알고리즘을 사용한 Tree-LDPC의 성능과 수렴 분석)

  • Noh Kwang-seok; Heo Jun; Chung Kyuhyuk
    • The Journal of Korean Institute of Communications and Information Sciences, v.31 no.1C, pp.20-25, 2006
  • In this paper, the performance of the Tree-LDPC code under the min-sum algorithm with scaling is presented, and the asymptotic performance in the waterfall region is shown by density evolution. We show that the Tree-LDPC code achieves a significant performance gain when scaled with the optimal scaling factor obtained by density evolution. We also show that the performance of min-sum decoding with scaling is as good as that of sum-product decoding, while the decoding complexity of the min-sum algorithm is much lower than that of the sum-product algorithm. The Tree-LDPC decoder is implemented on an FPGA chip with a small interleaver size.
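A minimal sketch (not from the paper) of the scaled min-sum check-node update the abstract refers to: each outgoing check-to-variable message takes the sign product and the minimum magnitude of the other incoming messages, multiplied by a scaling factor. The value 0.75 below is only an illustrative choice, not the density-evolution-optimized factor reported in the paper.

```python
import numpy as np

def scaled_min_sum_check_update(incoming, alpha=0.75):
    """Scaled (normalized) min-sum check-node update.

    incoming : 1-D array of variable-to-check LLR messages at one check node
    alpha    : scaling factor (0.75 here is illustrative, not the optimized value)
    returns  : array of check-to-variable messages, one per incoming edge
    """
    incoming = np.asarray(incoming, dtype=float)
    signs = np.where(incoming < 0, -1.0, 1.0)
    mags = np.abs(incoming)
    out = np.empty_like(incoming)
    for i in range(len(incoming)):
        others = np.delete(mags, i)                    # exclude the target edge
        sign_prod = np.prod(np.delete(signs, i))
        out[i] = alpha * sign_prod * others.min()
    return out

# Example: messages from four variable nodes
print(scaled_min_sum_check_update([1.2, -0.4, 2.5, -3.0]))
```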

Convergence of Min-Sum Decoding of LDPC codes under a Gaussian Approximation (MIN-SUM 복호화 알고리즘을 이용한 LDPC 오류정정부호의 성능분석)

  • Heo, Jun
    • The Journal of Korean Institute of Communications and Information Sciences, v.28 no.10C, pp.936-941, 2003
  • Density evolution was developed as a method for computing the capacity of low-density parity-check (LDPC) codes under the sum-product algorithm [1]. Based on the assumption that the messages passed on the belief-propagation model can be well approximated by Gaussian random variables, a modified and simplified version of the density evolution technique was introduced in [2]. Recently, the min-sum algorithm was applied to the density evolution of LDPC codes as an alternative decoding algorithm in [3]. The next question is how the min-sum algorithm can be combined with a Gaussian approximation. In this paper, the capacity of LDPC codes of various rates is obtained using the min-sum algorithm combined with the Gaussian approximation, which gives a very simple way to analyze LDPC codes. Unlike the sum-product algorithm, the symmetry condition [4] is not maintained in the min-sum algorithm. Therefore, both the variance and the mean of the Gaussian distribution are recursively computed in this analysis. It is also shown that the min-sum threshold under a Gaussian approximation matches the simulation results well.
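A hedged sketch of how the mean and variance of min-sum messages can be tracked under a Gaussian approximation; this is an interpretation by the editor, not the paper's exact recursion. It assumes BPSK over AWGN with the all-zero codeword convention (channel LLR mean 2/sigma^2, variance 4/sigma^2) and a regular (dv, dc) ensemble, and re-fits a Gaussian to the min-sum check-node output at every iteration, which is why both the mean and the variance must be updated.

```python
import numpy as np

def minsum_gaussian_de(sigma, dv=3, dc=6, iters=30, n_samples=200_000, seed=0):
    """Monte-Carlo density evolution for a regular (dv, dc) LDPC ensemble
    under min-sum decoding with a Gaussian fit of the message density.
    Assumes BPSK over AWGN and the all-zero codeword convention, so the
    channel LLRs have mean 2/sigma^2 and variance 4/sigma^2.
    """
    rng = np.random.default_rng(seed)
    mu_ch, var_ch = 2.0 / sigma**2, 4.0 / sigma**2
    mu_v, var_v = mu_ch, var_ch                      # variable-to-check statistics
    for _ in range(iters):
        # Check-node update: sample dc-1 incoming messages and apply min-sum.
        m = rng.normal(mu_v, np.sqrt(var_v), size=(n_samples, dc - 1))
        c_out = np.prod(np.sign(m), axis=1) * np.min(np.abs(m), axis=1)
        mu_c, var_c = c_out.mean(), c_out.var()      # refit mean AND variance
        # Variable-node update: channel LLR plus dv-1 independent check messages.
        mu_v = mu_ch + (dv - 1) * mu_c
        var_v = var_ch + (dv - 1) * var_c
    return mu_v, var_v

# Example: message statistics after 30 iterations at one noise level.
print(minsum_gaussian_de(sigma=0.83))
```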

New Simplified Sum-Product Algorithm for Low Complexity LDPC Decoding (복잡도를 줄인 LDPC 복호를 위한 새로운 Simplified Sum-Product 알고리즘)

  • Han, Jae-Hee; SunWoo, Myung-Hoon
    • The Journal of Korean Institute of Communications and Information Sciences, v.34 no.3C, pp.322-328, 2009
  • This paper proposes a new simplified sum-product (SSP) decoding algorithm to improve the BER performance of low-density parity-check codes. The proposed SSP algorithm replaces multiplications and divisions with additions and subtractions without extra computation. In addition, it simplifies both ln[tanh(x)] and tanh^-1[exp(x)] by using two quantization tables, which greatly reduces computational complexity. Moreover, simulation results show that the proposed SSP algorithm improves BER performance by about 0.3 to 0.8 dB compared with existing modified sum-product algorithms.
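A rough illustration of how a log-domain check-node update can trade the tanh rule for additions plus a small lookup table; the single table for phi(x) = -ln(tanh(x/2)) below is an assumption of this edit, not the paper's two SSP tables. Because phi is its own inverse, each update reduces to a lookup, a sum with one subtraction per edge, and a second lookup.

```python
import numpy as np

# Pre-quantized table for phi(x) = -ln(tanh(x/2)), which is its own inverse.
_X = np.linspace(1e-4, 12.0, 256)
_PHI = -np.log(np.tanh(_X / 2.0))

def _phi(x):
    """Table lookup replacing the direct ln[tanh(.)] computation."""
    return np.interp(np.clip(x, _X[0], _X[-1]), _X, _PHI)

def logdomain_check_update(incoming):
    """Log-domain sum-product check-node update using the phi table.

    incoming : variable-to-check LLR messages at one check node
    returns  : check-to-variable messages (one per edge)
    """
    incoming = np.asarray(incoming, dtype=float)
    signs = np.where(incoming < 0, -1.0, 1.0)
    phis = _phi(np.abs(incoming))
    total_phi = phis.sum()
    total_sign = np.prod(signs)
    # Exclude the target edge by subtraction instead of re-summing per edge.
    return (total_sign * signs) * _phi(total_phi - phis)

print(logdomain_check_update([1.2, -0.4, 2.5, -3.0]))
```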

A Modified Sum-Product Algorithm for Error Floor Reduction in LDPC Codes (저밀도 패리티 검사부호에서 오류마루 감소를 위한 수정 합-곱 알고리즘)

  • Yu, Seog-Kun; Kang, Seog-Geun; Joo, Eon-Kyeong
    • The Journal of Korean Institute of Communications and Information Sciences, v.35 no.5C, pp.423-431, 2010
  • In this paper, a modified sum-product algorithm is proposed to correct bit errors captured within trapping sets, which arise in the decoding of low-density parity-check (LDPC) codes. Unlike the original sum-product algorithm, the proposed decoding method consists of two stages. In the first stage, it is determined whether the main cause of decoding failure is a trapping set. In the second stage, the bit errors within the trapping sets are corrected. In the modified algorithm, the set of failed check nodes and the transition patterns of hard-decision bits are exploited to search for variable nodes in the trapping sets. After inverting the information of these variable nodes, the sum-product algorithm is carried out to correct the bit errors. Simulation results show that the proposed algorithm exhibits continuously improving error performance as the signal-to-noise ratio increases. It is therefore considered that the modified sum-product algorithm significantly reduces, or possibly eliminates, the error floor of LDPC codes.
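A schematic sketch of the two-stage idea, under assumptions of this edit: the decoder hook `sum_product_decode` and the simple neighbor-counting heuristic are hypothetical placeholders, not the paper's trapping-set search based on hard-decision transition patterns. The point it illustrates is the flow: decode, inspect failed check nodes, invert the suspected variable nodes' channel information, and decode again.

```python
import numpy as np

def failed_check_nodes(H, hard_bits):
    """Indices of parity checks that are not satisfied (the syndrome support)."""
    return np.flatnonzero((H @ hard_bits) % 2)

def two_stage_decode(H, channel_llr, sum_product_decode, max_flips=4):
    """Stage 1: ordinary SPA; Stage 2: invert the channel LLRs of the variable
    nodes most connected to failed checks, then run the SPA again.
    `sum_product_decode(H, llr) -> (hard_bits, success)` is an assumed hook.
    """
    bits, ok = sum_product_decode(H, channel_llr)
    if ok:
        return bits, True
    fcn = failed_check_nodes(H, bits)
    if len(fcn) == 0:
        return bits, False
    # Heuristic stand-in for the trapping-set search: rank variable nodes by
    # how many failed checks they participate in, and invert the top few.
    involvement = H[fcn, :].sum(axis=0)
    suspects = np.argsort(involvement)[::-1][:max_flips]
    llr2 = channel_llr.copy()
    llr2[suspects] = -llr2[suspects]          # "inverting information" of suspects
    return sum_product_decode(H, llr2)
```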

On Combining Chase-2 and Sum-Product Algorithms for LDPC Codes

  • Tong, Sheng; Zheng, Huijuan
    • ETRI Journal, v.34 no.4, pp.629-632, 2012
  • This letter investigates the combination of the Chase-2 and sum-product (SP) algorithms for low-density parity-check (LDPC) codes. A simple modification of the tanh rule for check node update is given, which incorporates test error patterns (TEPs) used in the Chase algorithm into SP decoding of LDPC codes. Moreover, a simple yet effective approach is proposed to construct TEPs for dealing with decoding failures with low-weight syndromes. Simulation results show that the proposed algorithm is effective in improving both the waterfall and error floor performance of LDPC codes.
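A simplified sketch of a Chase-2 style wrapper around SP decoding, reflecting the general idea in the abstract rather than the letter's modified tanh rule or its syndrome-based TEP construction: flip combinations of the least reliable bits (the test error patterns) in the channel LLRs and keep the first candidate whose parity checks are all satisfied. The decoder hook `sp_decode` is a hypothetical placeholder.

```python
import itertools
import numpy as np

def chase2_sp_decode(H, channel_llr, sp_decode, t=4):
    """Chase-2 style outer loop around sum-product (SP) decoding.

    Test error patterns (TEPs) are all sign-flip combinations over the t
    least reliable channel positions. Each TEP is applied to the channel
    LLRs, SP decoding is run, and the first candidate satisfying every
    parity check is returned. `sp_decode(H, llr) -> (hard_bits, success)`
    is an assumed decoder hook.
    """
    weak = np.argsort(np.abs(channel_llr))[:t]          # least reliable bits
    fallback = None
    for r in range(t + 1):                              # TEP weight 0..t
        for flips in itertools.combinations(weak, r):
            llr = channel_llr.copy()
            idx = list(flips)
            llr[idx] = -llr[idx]                        # apply the TEP
            bits, ok = sp_decode(H, llr)
            if fallback is None:
                fallback = bits                         # plain SP output (empty TEP)
            if ok and not ((H @ bits) % 2).any():
                return bits, True
    return fallback, False
```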

LDPC Decoding by Failed Check Nodes for Serial Concatenated Code

  • Yu, Seog Kun; Joo, Eon Kyeong
    • ETRI Journal, v.37 no.1, pp.54-60, 2015
  • The use of serial concatenated codes is an effective technique for alleviating the error floor phenomenon of low-density parity-check (LDPC) codes. An enhanced sum-product algorithm (SPA) for LDPC codes, which is suitable for serial concatenated codes, is proposed in this paper. The proposed algorithm minimizes the number of errors by using the failed check nodes (FCNs) in LDPC decoding. Hence, the error-correcting capability of the serial concatenated code can be improved. The number of FCNs is simply obtained by the syndrome test, which is performed during the SPA. Hence, the decoding procedure of the proposed algorithm is similar to that of the conventional algorithm. The error performance of the proposed algorithm is analyzed and compared with that of the conventional algorithm. As a result, a gain of 1.4 dB can be obtained by the proposed algorithm at a bit error rate of 10^-8. In addition, the error performance of the proposed algorithm with just 30 iterations is shown to be superior to that of the conventional algorithm with 100 iterations.
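A minimal sketch of the FCN bookkeeping described above, as interpreted here rather than taken from the paper: the syndrome test that the SPA already performs is reused to count failed check nodes each iteration, and the hard decision with the fewest FCNs seen so far is the one handed to the outer code. The per-iteration update `spa_iteration` is a hypothetical hook.

```python
import numpy as np

def decode_keep_min_fcn(H, channel_llr, spa_iteration, max_iter=30):
    """Run SPA iterations, reuse the syndrome test to count failed check
    nodes (FCNs), and return the hard decision that minimized the FCN count.
    `spa_iteration(H, channel_llr, state) -> (posterior_llr, state)` is an
    assumed per-iteration hook of an ordinary sum-product decoder.
    """
    state = None
    best_bits, best_fcn = None, np.inf
    for _ in range(max_iter):
        posterior, state = spa_iteration(H, channel_llr, state)
        bits = (posterior < 0).astype(int)            # hard decision
        n_fcn = int(((H @ bits) % 2).sum())           # syndrome test already in SPA
        if n_fcn < best_fcn:
            best_bits, best_fcn = bits, n_fcn
        if n_fcn == 0:                                # valid codeword: stop early
            break
    return best_bits, best_fcn == 0
```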

A High Speed LDPC Decoder Structure Based on the HSS (HSS 기반 초고속 LDPC 복호를 위한 구조)

  • Lee, In-Ki; Kim, Min-Hyuk; Oh, Deock-Gil; Jung, Ji-Won
    • The Journal of Korean Institute of Communications and Information Sciences, v.38B no.2, pp.140-145, 2013
  • This paper proposes a high-speed LDPC decoder structure based on DVB-S2. First, we study a solution to avoid memory conflicts: for high-speed decoding, the decoder adopts the HSS (horizontal shuffle scheduling) scheme. Second, the normalized min-sum algorithm is adopted instead of the sum-product algorithm. In addition, self-correction is a variant of LDPC decoding that sets the reliability of a message M(c→v) to 0 if there is an inconsistency between the signs of the current incoming messages M(v'→c) and the signs of the previous incoming messages M_old(v'→c). This self-corrected algorithm avoids the propagation of unreliable information in the Tanner graph and thus helps the convergence of the decoder. Lastly, this paper proposes an optimal hardware architecture supporting high-speed throughput.
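A compact sketch (an editorial illustration, not the paper's DVB-S2 design) of horizontal shuffle, i.e. layered, scheduling: check nodes are processed row by row, and the a-posteriori LLRs are updated immediately after each row, so later rows in the same iteration already see refreshed messages. A normalized min-sum update is used inside each layer; the scaling factor 0.8 is illustrative.

```python
import numpy as np

def layered_nms_decode(H, channel_llr, alpha=0.8, max_iter=20):
    """Horizontal-shuffle (layered) normalized min-sum decoding on a dense H.
    Each check row is processed in turn and the posterior LLRs are updated
    immediately, which is what speeds up convergence relative to flooding.
    """
    m, n = H.shape
    rows = [np.flatnonzero(H[i]) for i in range(m)]   # variable nodes per check
    c2v = [np.zeros(len(r)) for r in rows]            # stored check-to-variable msgs
    post = channel_llr.astype(float).copy()           # a-posteriori LLRs
    for _ in range(max_iter):
        for i, r in enumerate(rows):                  # one "layer" per check row
            v2c = post[r] - c2v[i]                    # remove own old contribution
            signs = np.where(v2c < 0, -1.0, 1.0)
            mags = np.abs(v2c)
            for k in range(len(r)):                   # normalized min-sum update
                c2v[i][k] = (alpha * np.prod(np.delete(signs, k))
                             * np.delete(mags, k).min())
            post[r] = v2c + c2v[i]                    # immediate posterior refresh
        bits = (post < 0).astype(int)
        if not ((H @ bits) % 2).any():                # all checks satisfied
            break
    return bits
```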

Simplified 2-Dimensional Scaled Min-Sum Algorithm for LDPC Decoder

  • Cho, Keol; Lee, Wang-Heon; Chung, Ki-Seok
    • Journal of Electrical Engineering and Technology, v.12 no.3, pp.1262-1270, 2017
  • Among the various decoding algorithms for low-density parity-check (LDPC) codes, the min-sum (MS) algorithm and its modified versions are widely adopted because of their computational simplicity compared to the sum-product (SP) algorithm, at the cost of a slight loss in decoding performance. In the MS algorithm, the magnitude of the output message from a check node (CN) processing unit is decided by either the smallest or the next smallest input message, denoted min1 and min2, respectively. It has been shown that multiplying the CN output message by a scaling factor improves the decoding performance. Further, Zhong et al. have shown that applying different scaling factors to min1 and min2 (called 2-dimensional scaling) further improves the performance of the LDPC decoder. In this paper, a simplified 2-dimensional scaled (S2DS) MS algorithm is proposed. In the proposed algorithm, we identify a pair of efficient scaling factors whose multiplications can be replaced with combinations of addition and shift operations. Furthermore, one scaling operation is approximated by the difference between min1 and min2. The simulation results show that S2DS achieves error-correcting performance close to or better than the SP algorithm regardless of the coding rate, and its computational complexity is the lowest among the modified versions of the MS algorithm.
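A small fixed-point style sketch of the 2-dimensional scaling idea, with illustrative factors chosen by this edit rather than the paper's optimized pair: min1 and min2 are scaled by different factors, and each multiplication is realized as shift-and-add (for example 0.75x = x - (x>>2) and 0.625x = (x>>1) + (x>>3)).

```python
def scale_075(x: int) -> int:
    """0.75*x via shift-and-add (illustrative factor for min1)."""
    return x - (x >> 2)

def scale_0625(x: int) -> int:
    """0.625*x via shift-and-add (illustrative factor for min2)."""
    return (x >> 1) + (x >> 3)

def two_dim_scaled_ms_cn(msgs):
    """2-D scaled min-sum check-node update on integer (fixed-point) LLRs.

    The edge that supplied min1 receives the scaled min2; every other edge
    receives the scaled min1. Signs follow the usual min-sum sign product.
    """
    signs = [-1 if m < 0 else 1 for m in msgs]
    mags = [abs(m) for m in msgs]
    order = sorted(range(len(mags)), key=lambda i: mags[i])
    i_min1 = order[0]
    min1, min2 = mags[order[0]], mags[order[1]]
    total_sign = 1
    for s in signs:
        total_sign *= s
    out = []
    for i in range(len(msgs)):
        mag = scale_0625(min2) if i == i_min1 else scale_075(min1)
        out.append(total_sign * signs[i] * mag)
    return out

print(two_dim_scaled_ms_cn([5, -2, 9, -12]))
```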

A Study on Efficient CNU Algorithm for High Speed LDPC decoding in DVB-S2 (DVB-S2 기반 고속 LDPC 복호를 위한 효율적인 CNU 계산방식에 관한 연구)

  • Lim, Byeong-Su; Kim, Min-Hyuk; Jung, Ji-Won
    • Journal of the Korea Institute of Information and Communication Engineering, v.16 no.9, pp.1892-1897, 2012
  • In this paper, efficient CNU (check node update) algorithms are analyzed for high-speed LDPC decoding in the DVB-S2 standard. Several CNU methods exist; among them, the MP (min product) method is quite often used in LDPC decoding. However, MP needs a LUT (look-up table), which is the critical path limiting LDPC decoding speed. A new SC-NMS (self-corrected normalized min-sum) method is proposed in this paper. NMS needs only a normalized scaling factor instead of a LUT and compensates for the overestimation of the MP approximation. In addition, the SC method gives faster convergence toward a decoded codeword: if a message changes its sign between two iterations, it is not reliable, and to avoid propagating noisy information its magnitude is set to 0. The performance of SC-NMS is degraded by only about 0.1 dB compared to MP; however, considering computational complexity and decoding speed, the SC-NMS algorithm is the optimal choice for the CNU.
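A brief sketch of the self-correction rule as described in the abstract; the surrounding decoder plumbing is assumed rather than taken from the paper. A variable-to-check message whose sign differs from its value in the previous iteration is treated as unreliable, and its magnitude is forced to 0 before the normalized min-sum CNU consumes it.

```python
import numpy as np

def self_correct(v2c_new, v2c_old):
    """Self-corrected message rule: a variable-to-check message whose sign
    flipped since the previous iteration is treated as unreliable and zeroed.
    The corrected messages then feed an ordinary normalized min-sum CNU.
    """
    flipped = np.sign(v2c_new) * np.sign(v2c_old) < 0
    return np.where(flipped, 0.0, v2c_new)

# Example: edges 1 and 3 changed sign since the last iteration and are erased.
prev = np.array([1.0, -0.5, 2.0, 0.7])
curr = np.array([1.2, 0.6, 1.8, -0.9])
print(self_correct(curr, prev))              # -> [1.2, 0.0, 1.8, 0.0]
```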

Study on Low Density Parity Check Coded OFDM on Fading channel (페이딩 채널에서 LDPC 부호화 OFDM에 대한 연구)

  • Kang, Hee-Hoon; Lee, Young-Jong; Han, Won-Ok
    • Journal of the Institute of Electronics Engineers of Korea TE, v.42 no.3, pp.51-56, 2005
  • To improve the BER of OFDM on a fading channel, a low-density parity-check coded OFDM system is proposed in this paper. LDPC codes are decoded with the sum-product or belief propagation algorithm, also known as a probability propagation algorithm. When LDPC codes are applied to an OFDM system, the BER performance depends on the number of decoding iterations. To improve spectral efficiency, multi-level modulations are used in mobile communication systems, but it is not clear how to decode an LDPC code used in OFDM with multi-level modulations. In this paper, a decoding algorithm is described for LDPC-coded OFDM with MPSK. With the proposed decoding algorithm, good BER performance is obtained on AWGN and fading channels. Simulation results confirm the proposed decoding algorithm for LDPC-coded OFDM with MPSK.
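A sketch of the step this abstract leaves implicit, as interpreted by this edit rather than the paper's specific method: per-bit LLRs for an M-PSK symbol on a known flat-fading subcarrier are computed by summing over the constellation points whose label bit is 0 or 1, and those LLRs are what the sum-product LDPC decoder consumes. Gray labelling and the parameter values are illustrative assumptions.

```python
import numpy as np

def mpsk_bit_llrs(y, h, M=8, noise_var=0.1):
    """Exact per-bit LLRs for one received M-PSK symbol over flat fading.

    y         : received complex sample
    h         : known complex channel gain for this subcarrier
    M         : constellation size (Gray labelling is assumed here)
    noise_var : complex noise variance
    Returns log2(M) LLRs, ready to be passed to a sum-product LDPC decoder.
    """
    m = int(np.log2(M))
    points = np.exp(2j * np.pi * np.arange(M) / M)              # unit-energy M-PSK
    labels = np.arange(M) ^ (np.arange(M) >> 1)                 # Gray labels
    bits = (labels[:, None] >> np.arange(m - 1, -1, -1)) & 1    # (M, m) bit map
    metric = -np.abs(y - h * points) ** 2 / noise_var           # log-likelihoods
    llrs = []
    for b in range(m):
        num = np.logaddexp.reduce(metric[bits[:, b] == 0])      # bit = 0 hypotheses
        den = np.logaddexp.reduce(metric[bits[:, b] == 1])      # bit = 1 hypotheses
        llrs.append(num - den)
    return np.array(llrs)

# Example: one faded, noisy 8-PSK symbol
rng = np.random.default_rng(0)
h = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)
y = h * np.exp(2j * np.pi * 3 / 8) + 0.1 * (rng.normal() + 1j * rng.normal())
print(mpsk_bit_llrs(y, h))
```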