• Title/Summary/Keyword: Decoding algorithm


Signal Detection with Sphere Decoding Algorithm at MIMO Channel (MIMO채널에서 Sphere Decoding 알고리즘을 이용한 신호검파)

  • An, Jin-Young;Kang, Yun-Jeong;Kim, Sang-Choon
    • Journal of the Korea Institute of Information and Communication Engineering / v.13 no.10 / pp.2197-2204 / 2009
  • In this paper, we analyze the performance of the sphere decoding (SD) algorithm in a MIMO system. The BER performance of this algorithm is the same as that of the ML receiver, but the computational complexity of the SD algorithm is much lower than that of the ML receiver. The independent signals from each transmit antenna are modulated using QPSK and 16QAM in a richly scattered Rayleigh flat-fading channel. The signal at each receive antenna is independently detected by the receiver using the Fincke & Pohst SD algorithm, and the resulting BER is compared with those of the ZF, MMSE, SIC, and ML receivers. We also investigate the Viterbo & Boutros SD algorithm, a modified SD algorithm, and comparatively study the BER performance and the number of floating-point operations of the two algorithms.
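To make the depth-first search described above concrete, here is a minimal sketch of a Fincke & Pohst style sphere decoder for a real-valued model y = Hs + n. The function name, the toy 2x2 channel, and the BPSK-like alphabet are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sphere_decode(H, y, alphabet, radius=np.inf):
    """Depth-first Fincke & Pohst style search for argmin_s ||y - H s||^2."""
    Q, R = np.linalg.qr(H)          # H = Q R with R upper triangular
    z = Q.T @ y                     # rotated observation
    n = H.shape[1]
    best_dist, best_s = radius, None

    def search(level, partial, dist):
        nonlocal best_dist, best_s
        if dist >= best_dist:
            return                  # prune: already outside the current sphere
        if level < 0:
            best_dist, best_s = dist, partial.copy()
            return
        for sym in alphabet:
            partial[level] = sym
            # residual of the level-th rotated equation given symbols already fixed
            r = z[level] - R[level, level:] @ partial[level:]
            search(level - 1, partial, dist + r * r)

    search(n - 1, np.zeros(n), 0.0)
    return best_s, best_dist

# Toy example: 2x2 real channel with a BPSK-like alphabet {-1, +1}
H = np.array([[1.0, 0.3], [0.2, 0.8]])
s = np.array([1.0, -1.0])
y = H @ s + 0.05 * np.random.randn(2)
s_hat, d = sphere_decode(H, y, alphabet=(-1.0, 1.0))
print(s_hat, d)
```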

An FPGA Implementation of High-Speed Adaptive Turbo Decoder

  • Kim, Min-Huyk;Jung, Ji-Won;Bae, Jong-Tae;Choi, Seok-Soon;Lee, In-Ki
    • The Journal of Korean Institute of Communications and Information Sciences / v.32 no.4C / pp.379-388 / 2007
  • In this paper, we propose an adaptive turbo decoding algorithm for high-order modulation schemes, combined with the original design of a standard rate-1/2 turbo decoder for B/QPSK modulation. A transformation applied to the incoming I-channel and Q-channel symbols allows the use of an off-the-shelf B/QPSK turbo decoder without any modifications. The adaptive turbo decoder processes the received symbols recursively to improve performance, and as the number of iterations increases, the execution time and power consumption also increase. The reduction in latency and power consumption comes from the combination of radix-4, dual-path processing, parallel decoding, and early-stop algorithms. We implemented the proposed scheme on a field-programmable gate array (FPGA) and compared its decoding speed with that of a conventional decoder. The implementation results confirm that the proposed adaptive decoder is 6.4 times faster than the conventional scheme.

An Improved Belief Propagation Decoding for LT Codes (LT 부호를 위한 개선된 BP 복호)

  • Cheong, Ho-Young
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.7 no.4 / pp.223-228 / 2014
  • A belief propagation (BP) algorithm is known to be a fast decoding scheme for LT codes, but it requires a large overhead, especially for short-block-length LT codes. In this paper, an improved BP decoding algorithm that uses a search method for degree-1 packets is proposed to reduce the overhead. The proposed decoding scheme shows better performance in terms of overhead while keeping the same computational complexity as the conventional BP decoding scheme.
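For reference, here is a minimal sketch of the conventional BP (peeling) decoding of LT codes that the paper improves on: decoding repeatedly finds degree-1 packets, recovers their source symbols, and cancels them from the remaining packets. The packet representation and the toy example are illustrative assumptions.

```python
def lt_bp_decode(encoded, k):
    """encoded: list of (source_index_list, xor_value). Returns {index: value}."""
    recovered = {}
    packets = [(set(idx), val) for idx, val in encoded]
    progress = True
    while progress and len(recovered) < k:
        progress = False
        for i, (neighbors, value) in enumerate(packets):
            if len(neighbors) == 1:              # a degree-1 (released) packet
                s = next(iter(neighbors))
                if s not in recovered:
                    recovered[s] = value
                # substitute the recovered symbol into every packet containing it
                for j, (nb, v) in enumerate(packets):
                    if s in nb:
                        nb.discard(s)
                        packets[j] = (nb, v ^ recovered[s])
                progress = True
    return recovered

# Toy example with k = 3 one-bit source symbols s0=1, s1=0, s2=1
enc = [([0], 1), ([0, 1], 1), ([1, 2], 1), ([0, 2], 0)]
print(lt_bp_decode(enc, 3))  # expect {0: 1, 1: 0, 2: 1}
```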

SISO-RLL Decoding Algorithm of 17PP Modulation Code for High Density Optical Recording Channel (고밀도 광 기록 채널에서 17PP 변조 부호의 연판정 입력 연판정 출력 런-길이 제한 복호 알고리즘)

  • Lee, Bong-Il;Lee, Jae-Jin
    • The Journal of Korean Institute of Communications and Information Sciences / v.34 no.2C / pp.175-180 / 2009
  • When we apply an LDPC code to a high-density optical storage channel, the modulation code decoder must feed the LDPC decoder soft-valued information, because the LDPC decoder exploits soft input values. Therefore, we propose a soft-input soft-output run-length limited (SISO-RLL) decoding algorithm for the 17PP modulation code and compare the performance of the LDPC codes. Consequently, we found that the proposed soft-input soft-output decoding algorithm using 17PP is 0.8 dB better than the soft-input soft-output decoding algorithm using the (1, 7) RLL code.

Efficient Multi-way Tree Search Algorithm for Huffman Decoder

  • Cha, Hyungtai;Woo, Kwanghee
    • International Journal of Fuzzy Logic and Intelligent Systems / v.4 no.1 / pp.34-39 / 2004
  • Huffman coding, which has been used in many data compression algorithms, is a popular technique for reducing the statistical redundancy of a signal. It has been proposed that Huffman codes can be decoded efficiently by using the characteristics of the Huffman tables and the patterns of the Huffman codewords. We propose a new Huffman decoding algorithm that uses a multi-way tree search, and we present an efficient hardware implementation method. This algorithm requires a small logic area and memory space and is optimized for high-speed decoding. The proposed Huffman decoding algorithm can be applied to many multimedia systems such as MPEG audio decoders.
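As a software illustration of consuming several bits per step, here is a generic table-driven Huffman decoder in the spirit of a multi-way tree search. The chunk size, codebook, and helper names are assumptions for the sketch, and it requires every codeword to fit within one chunk; it is not the authors' hardware design.

```python
def build_table(codebook, chunk):
    """codebook: {symbol: bitstring}. Returns {chunk-bit pattern: (symbol, code_len)}
    assuming every codeword is at most 'chunk' bits long."""
    table = {}
    for pattern in range(2 ** chunk):
        bits = format(pattern, f'0{chunk}b')
        for sym, code in codebook.items():
            if len(code) <= chunk and bits.startswith(code):
                table[bits] = (sym, len(code))
                break
    return table

def decode(bitstream, codebook, chunk=4):
    table = build_table(codebook, chunk)
    out, pos = [], 0
    while pos < len(bitstream):
        window = bitstream[pos:pos + chunk].ljust(chunk, '0')  # pad the tail
        sym, used = table[window]                              # one multi-bit lookup
        out.append(sym)
        pos += used
    return out

codebook = {'a': '0', 'b': '10', 'c': '110', 'd': '111'}
print(decode('0101100111', codebook))   # ['a', 'b', 'c', 'a', 'd']
```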

A fast fractal decoding algorithm using averaged-image estimation (평균 영상 추정을 이용한 고속 플랙탈 영상 복원 알고리즘)

  • 문용호;박태희;김재호
    • The Journal of Korean Institute of Communications and Information Sciences / v.23 no.9A / pp.2355-2364 / 1998
  • In the conventional fractal decoding procedure, the reconstructed image is obtained by a predefined number of iterations starting from an arbitrary initial image. The convergence speed depends on the selection of the initial image, which must be addressed to achieve fast convergence. In this paper, we show theoretically that the conventional method can be approximately decomposed into the decoding of the DC and AC components. Based on this fact, we propose a novel fast fractal decoding algorithm made up of two steps. In the first step, the averaged image, regarded as an optimal initial image, is estimated. In the second step, the reconstructed image is generated from the output image obtained in the first step. Simulations show that the output image of the first step approximately converges to the averaged image with only 15% of the computation of one iteration of the conventional method. The proposed method is faster than various decoding methods and is nearly equal to conventional decoding initialized with the averaged image. In addition, the proposed method can be applied to compressed data produced by various encoding methods, because it imposes no constraints on the encoding procedure in order to obtain high decoding speed.
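Fractal decoding amounts to iterating a contractive affine map until it settles at its fixed point (the decoded image), so the choice of initial image only affects how many iterations are needed. The toy sketch below illustrates that role of the initial image; the random operator, the sizes, and the way the averaged start is formed here are illustrative assumptions, not the paper's estimation step.

```python
import numpy as np

def fractal_decode(A, b, x0, tol=1e-6, max_iter=1000):
    """Iterate x <- A x + b (one 'decoding iteration') until the change is small."""
    x = x0.copy()
    for it in range(1, max_iter + 1):
        x_next = A @ x + b
        if np.max(np.abs(x_next - x)) < tol:
            return x_next, it
        x = x_next
    return x, max_iter

rng = np.random.default_rng(0)
n = 64
A = 0.5 * rng.random((n, n)) / n          # contractive stand-in for shrink + scale
b = rng.random(n)                         # stand-in for the block offsets
fixed_point = np.linalg.solve(np.eye(n) - A, b)

_, iters_zero = fractal_decode(A, b, np.zeros(n))                    # arbitrary start
_, iters_avg = fractal_decode(A, b, np.full(n, fixed_point.mean()))  # averaged start
print(iters_zero, iters_avg)              # compare iteration counts for the two starts
```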


Performance Analysis of MAP Algorithm by Robust Equalization Techniques in Nongaussian Noise Channel (비가우시안 잡음 채널에서 Robust 등화기법을 이용한 터보 부호의 MAP 알고리즘 성능분석)

  • 소성열
    • The Journal of Korean Institute of Communications and Information Sciences / v.25 no.9A / pp.1290-1298 / 2000
  • The turbo code decoder is an iterative decoding technique that extracts extrinsic information for the bit to be decoded by calculating both forward and backward metrics, and uses this information in the next decoding step. Turbo codes show excellent BER performance, approaching the Shannon limit, when the interleaver size is large and enough decoding iterations are run. However, they suffer from increased complexity and delay and from the difficulty of real-time processing caused by the interleaver and the iterative decoding. In this paper, we analyze the MAP (maximum a posteriori) algorithm, which is one of the turbo code decoding algorithms, and the factors that determine its performance. The MAP algorithm carries out iterative decoding by determining soft decision values from the channel conditions and the transition probabilities between all adjacent bits and received symbols. Therefore, to improve the performance of the MAP algorithm, the reliability of adjacent received symbols must be ensured. However, the MAP algorithm itself cannot ensure this, so an additional algorithm is needed, which in turn reduces the number of decoding iterations. Consequently, we analyze the MAP algorithm and turbo code performance in a non-Gaussian noise channel with a robust equalization technique applied, in order to feed the MAP algorithm more reliable information about the received symbols.
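For reference, the forward metric, backward metric, and soft-decision log-likelihood ratio that the abstract refers to can be written in standard BCJR notation (the symbols below are generic, not taken from the paper):

```latex
% Standard BCJR quantities for a trellis with states s, s' and branch metrics gamma_k.
\begin{align}
  \alpha_k(s)     &= \sum_{s'} \alpha_{k-1}(s')\,\gamma_k(s',s) && \text{(forward metric)} \\
  \beta_{k-1}(s') &= \sum_{s}  \beta_k(s)\,\gamma_k(s',s)       && \text{(backward metric)} \\
  L(u_k) &= \ln\frac{\sum_{(s',s):\,u_k=+1} \alpha_{k-1}(s')\,\gamma_k(s',s)\,\beta_k(s)}
                    {\sum_{(s',s):\,u_k=-1} \alpha_{k-1}(s')\,\gamma_k(s',s)\,\beta_k(s)} && \text{(soft decision / LLR)}
\end{align}
```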


CDMA/TDD system using improved sequential decoding algorithm (개선된 순차적 복호 기법을 적용한 CDMA/TDD 시스템의 성능 분석)

  • Jo, Seong-Cheol;Gwon, Dong-Seung;Jo, Gyeong-Rok
    • Journal of the Institute of Electronics Engineers of Korea TC / v.39 no.8 / pp.1-6 / 2002
  • In this paper, we consider a CDMA/TDD system suitable for high-speed packet data transmission such as Internet and multimedia services, together with a sequential decoding scheme that enables fast decoding and retransmission requests. In addition, we propose an improved Fano algorithm, which adopts a competition path in order to reduce the number of revisited nodes, a drawback of the conventional Fano algorithm. Furthermore, we analyze the performance of the CDMA/TDD system with the proposed sequential decoding scheme over a multipath channel.

A Noble Decoding Algorithm Using MLLR Adaptation for Speaker Verification (MLLR 화자적응 기법을 이용한 새로운 화자확인 디코딩 알고리듬)

  • 김강열;김지운;정재호
    • The Journal of the Acoustical Society of Korea / v.21 no.2 / pp.190-198 / 2002
  • In general, the Viterbi algorithm used in speech recognition is also used for decoding, but a decoder for speaker verification has to recognize the same word differently for every speaker. In this paper, we propose a novel decoding algorithm that can replace the typical Viterbi algorithm in a speaker verification system. For the proposed algorithm, we utilize speaker adaptation techniques that transform feature vectors into the region of the client's characteristics in speech recognition. Among the many adaptation algorithms, we adopt the MLLR (Maximum Likelihood Linear Regression) and MAP (Maximum A Posteriori) adaptation algorithms. Using the proposed algorithm instead of the typical Viterbi algorithm, we achieved a performance improvement of about 30% in EER (Equal Error Rate).
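A minimal sketch of the MLLR mean transformation underlying the adaptation step: each Gaussian mean mu is mapped to A mu + b, which can be written as W [1; mu] with a regression matrix W. The dimensions and the randomly perturbed W below are illustrative assumptions only.

```python
import numpy as np

def apply_mllr(means, W):
    """means: (n_gaussians, d) array of Gaussian means. W: (d, d+1) matrix [b | A]."""
    extended = np.hstack([np.ones((means.shape[0], 1)), means])  # xi = [1, mu]
    return extended @ W.T                                        # mu' = W xi

d = 3
means = np.random.randn(8, d)                        # speaker-independent means
W = np.hstack([np.zeros((d, 1)), np.eye(d)]) + 0.05 * np.random.randn(d, d + 1)
adapted = apply_mllr(means, W)                       # (8, 3) client-adapted means
print(adapted.shape)
```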

Simplified 2-Dimensional Scaled Min-Sum Algorithm for LDPC Decoder

  • Cho, Keol;Lee, Wang-Heon;Chung, Ki-Seok
    • Journal of Electrical Engineering and Technology / v.12 no.3 / pp.1262-1270 / 2017
  • Among the various decoding algorithms for low-density parity-check (LDPC) codes, the min-sum (MS) algorithm and its modified versions are widely adopted because of their computational simplicity compared to the sum-product (SP) algorithm, at a slight loss in decoding performance. In the MS algorithm, the magnitude of the output message from a check node (CN) processing unit is decided by either the smallest or the next smallest input message, denoted min1 and min2, respectively. It has been shown that multiplying the CN output message by a scaling factor improves the decoding performance. Further, Zhong et al. have shown that multiplying min1 and min2 by different scaling factors (called 2-dimensional scaling) increases the performance of the LDPC decoder even more. In this paper, a simplified 2-dimensional scaled (S2DS) MS algorithm is proposed. In the proposed algorithm, we identify a pair of the most efficient scaling factors whose multiplications can be replaced with combinations of addition and shift operations. Furthermore, one scaling operation is approximated by the difference between min1 and min2. The simulation results show that S2DS achieves error-correcting performance that is close to or better than that of the SP algorithm regardless of coding rate, and its computational complexity is the lowest among the modified versions of the MS algorithm.
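A minimal sketch of a check-node update with separate scaling of min1 and min2, i.e. the 2-dimensional scaling described above. The paper's specific factors and its shift/add and min1/min2-difference approximations are not reproduced here; the 0.75 values are just shift-friendly placeholders (1/2 + 1/4).

```python
import numpy as np

def check_node_update(msgs, alpha1=0.75, alpha2=0.75):
    """msgs: incoming variable-to-check LLRs. Returns outgoing check-to-variable LLRs."""
    mags = np.abs(msgs)
    signs = np.sign(msgs)
    total_sign = np.prod(signs)
    order = np.argsort(mags)
    i_min, min1, min2 = order[0], mags[order[0]], mags[order[1]]

    out = np.empty_like(msgs, dtype=float)
    for j in range(len(msgs)):
        # magnitude: min2 for the edge that holds min1, min1 for every other edge
        mag = alpha2 * min2 if j == i_min else alpha1 * min1
        out[j] = total_sign * signs[j] * mag   # removes the edge's own sign
    return out

print(check_node_update(np.array([-1.2, 0.4, 2.5, -0.9])))
```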