• Title/Summary/Keyword: decoding order

Effective Decoding Algorithm for a Three-dimensional Turbo Product Code with a Single Parity Check Code (Single Parity Check 부호를 적용한 3차원 Turbo Product 부호의 효율적인 복호 알고리즘)

  • Ha, Sang-chul;Ahn, Byung-kyu;Oh, Ji-myung;Kim, Do-kyoung;Heo, Jun
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.41 no.9
    • /
    • pp.1095-1102
    • /
    • 2016
  • In this paper, we propose a decoding scheme for a three-dimensional turbo product code (TPC) with a single parity check (SPC) code. In general, the SPC code is applied along the axis with the shortest code length in order to maximize the code rate of the TPC. However, the SPC code has no error-correcting capability of its own, so the three-dimensional TPC shows little improvement in error-correcting capability over the two-dimensional TPC. We propose two schemes to improve the performance of the three-dimensional TPC decoder. One is a $min^*$-sum algorithm, which admits a lower-complexity implementation than the Chase-Pyndiah algorithm. The other is a modified serial iterative decoding scheme for higher performance. In addition, simulation results for the proposed scheme are shown and compared with the conventional scheme. Finally, we introduce some practical considerations for hardware implementation.
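
The abstract does not give the algorithm itself, so the following is a minimal sketch, assuming the standard LLR-domain decoding of a single parity check code with the min*/max* correction term that min*-sum-style algorithms keep in place of Chase-Pyndiah list decoding; all names are illustrative.

```python
import math

def boxplus(a, b):
    # Exact combination of two LLRs under a parity constraint; the
    # log1p terms are the min*/max* corrections that min*-sum keeps
    # (plain min-sum drops them).
    return (math.copysign(1.0, a) * math.copysign(1.0, b) * min(abs(a), abs(b))
            + math.log1p(math.exp(-abs(a + b)))
            - math.log1p(math.exp(-abs(a - b))))

def spc_extrinsic(llrs):
    # Extrinsic LLR for each bit of a single parity check (SPC) code:
    # combine the LLRs of all *other* bits with boxplus.
    ext = []
    for i in range(len(llrs)):
        acc = None
        for j, l in enumerate(llrs):
            if j != i:
                acc = l if acc is None else boxplus(acc, l)
        ext.append(acc)
    return ext

print(spc_extrinsic([1.2, -0.4, 2.0, 0.7]))
```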

A Program Code Compression Method with Very Fast Decoding for Mobile Devices (휴대장치를 위한 고속복원의 프로그램 코드 압축기법)

  • Kim, Yong-Kwan;Wee, Young-Cheul
    • Journal of KIISE:Software and Applications
    • /
    • v.37 no.11
    • /
    • pp.851-858
    • /
    • 2010
  • Most mobile devices use a NAND flash memory as their secondary memory. The firmware is stored compressed in the NAND flash memory in order to reduce its size and the time to load it from the NAND flash memory to main memory. For demand paging to work properly, the compressed code must be decompressed very quickly. This paper introduces a new dictionary-based compression algorithm for fast decompression. Unlike conventional LZ methods, the introduced algorithm stores the exclusive-or of the two instructions when the instruction to be compressed does not exactly match the referenced instruction. The paper also introduces a new compression format that minimizes bit operations in order to improve decompression speed. The experimental results show that the decoding time is reduced by up to a factor of 5 and the compression ratio is improved by up to 4% compared to zlib. Moreover, the proposed compression method's fast decoding leads to a 10-20% speed-up in booting time compared to booting uncompressed code.
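
As a rough illustration of the XOR-differencing idea (the paper's actual dictionary construction and bit-level format are not given in the abstract), the sketch below stores a dictionary index plus the exclusive-or of the two words whenever an instruction does not exactly match its reference, so decompression needs only a lookup and at most one XOR; everything here is hypothetical.

```python
def popcount(x):
    return bin(x).count("1")

def compress(words, dictionary):
    out = []
    for w in words:
        # pick the reference whose XOR difference is sparsest
        i = min(range(len(dictionary)), key=lambda k: popcount(w ^ dictionary[k]))
        diff = w ^ dictionary[i]
        out.append(("REF", i) if diff == 0 else ("XOR", i, diff))
    return out

def decompress(tokens, dictionary):
    # one lookup and at most one XOR per word: this is what keeps
    # demand-paging decompression fast
    return [dictionary[t[1]] if t[0] == "REF" else dictionary[t[1]] ^ t[2]
            for t in tokens]

dictionary = [0xE1A00000, 0xE3A00001]   # e.g. two common ARM instruction words
code = [0xE1A00000, 0xE3A00005]
assert decompress(compress(code, dictionary), dictionary) == code
```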

Layered Receivers for System Combined Layered Space-Time Processing and Space-Time Trellis Codes (계층화 시공간 구조와 시공간 트렐리스 부호를 결합한 시스템에 적합한 계층화 수신기)

  • 임은정;김동구
    • Journal of the Institute of Electronics Engineers of Korea TC
    • /
    • v.41 no.3
    • /
    • pp.9-14
    • /
    • 2004
  • A system combining layered space-time processing and space-time trellis codes (STTC) provides a high transmission rate as well as diversity and coding gain without bandwidth expansion. In this paper, two layered receiver structures are proposed. One is the LSTT-MMSE receiver, in which the received bit streams are decoupled by interference nulling and then decoded by separate STTC decoders; the decoded outputs are cancelled from the received signal before advancing to the detection of the next layer. The other is the LSTT-Whitening receiver, which employs whitening rather than nulling. The receiver employing whitening shows advantages in diversity gain and in the required number of receive antennas compared to convolutionally coded space-time processing. The proposed receivers use different decoding-order schemes according to the interference suppression employed. The (4, 3) LSTT-Whitening receiver still achieves a 1 dB gain over the (4, 4) LSTT-MMSE receiver and the (4, 4) coded layered space-time processing.
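
As a loose sketch of the layered detect-decode-cancel loop the abstract describes (with the per-layer STTC decoder replaced by a hard slicer, real-valued BPSK assumed, and the decoding order chosen by a simple reliability rule, all of which are assumptions):

```python
import numpy as np

def layered_detect(H, y, noise_var):
    # Nulling-and-cancelling over layers: MMSE-null the remaining layers,
    # decode the most reliable one, cancel it from y, repeat.
    H, y = H.copy(), y.copy()
    remaining = list(range(H.shape[1]))
    x_hat = np.zeros(H.shape[1])
    while remaining:
        Hs = H[:, remaining]
        W = np.linalg.inv(Hs.T @ Hs + noise_var * np.eye(len(remaining))) @ Hs.T
        z = W @ y                       # MMSE nulling (interference suppression)
        k = int(np.argmax(np.abs(z)))   # decoding order: strongest output first
        sym = np.sign(z[k])             # stands in for the STTC decoder
        col = remaining.pop(k)
        x_hat[col] = sym
        y -= H[:, col] * sym            # cancel the decoded layer
    return x_hat

rng = np.random.default_rng(0)
H = rng.standard_normal((4, 3)); x = np.sign(rng.standard_normal(3))
print(layered_detect(H, H @ x + 0.1 * rng.standard_normal(4), 0.01), x)
```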

The Decoding Approaches of Genetic Algorithm for Job Shop Scheduling Problem (Job Shop 일정계획 문제 풀이를 위한 유전 알고리즘의 복호화 방법)

  • Kim, Jun Woo
    • The Journal of Information Systems
    • /
    • v.25 no.4
    • /
    • pp.105-119
    • /
    • 2016
  • Purpose: The conventional solution methods for production scheduling problems typically focus on active schedules, which result in short makespans. However, active schedules are more difficult to generate than semi-active schedules; in other words, a semi-active schedule based search strategy may help to reduce the computational cost of production scheduling. In this context, this paper aims to compare the performance of active schedule based and semi-active schedule based search methods for production scheduling problems. Design/methodology/approach: Two decoding approaches, active schedule decoding and semi-active schedule decoding, are introduced and used to implement genetic algorithms for the classical job shop scheduling problem. The genetic algorithms adopt the permutation representation, and the decoding approaches are used to obtain a feasible schedule from a given sequence of operations. Findings: The semi-active schedule based genetic algorithm requires slightly more iterations to find the optimal schedule, while its execution time is considerably shorter than that of the active schedule based genetic algorithm. Moreover, semi-active schedule decoding is easy to understand and implement. Consequently, this paper concludes that semi-active schedule based search methods can also be useful when effective search strategies are available.
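
Semi-active decoding itself is simple enough to show concretely. A minimal sketch, assuming an operation-based permutation representation (the k-th occurrence of job j in the chromosome denotes job j's k-th operation) and a jobs[j] list of (machine, duration) pairs in technological order:

```python
def semi_active_decode(sequence, jobs):
    # Semi-active decoding: every operation starts at the earliest time
    # both its machine and its job predecessor are free; no left-shifting
    # into earlier idle slots (that extra search is what active decoding adds).
    machine_ready, schedule = {}, []
    job_ready = [0] * len(jobs)
    next_op = [0] * len(jobs)
    for j in sequence:
        machine, dur = jobs[j][next_op[j]]
        start = max(machine_ready.get(machine, 0), job_ready[j])
        schedule.append((j, next_op[j], machine, start, start + dur))
        machine_ready[machine] = job_ready[j] = start + dur
        next_op[j] += 1
    return schedule, max(end for *_, end in schedule)  # schedule, makespan

jobs = [[(0, 3), (1, 2)], [(1, 4), (0, 1)]]   # toy 2-job, 2-machine instance
print(semi_active_decode([0, 1, 0, 1], jobs))
```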

Upper Bounds for the Performance of Turbo-Like Codes and Low Density Parity Check Codes

  • Chung, Kyu-Hyuk;Heo, Jun
    • Journal of Communications and Networks
    • /
    • v.10 no.1
    • /
    • pp.5-9
    • /
    • 2008
  • Researchers have investigated many upper bound techniques for the error probability of maximum likelihood (ML) decoding of turbo-like codes and low density parity check (LDPC) codes in recent years for long codeword block sizes; the problem is trivial for short block sizes. Previous efforts, such as the recently proposed simple bound technique [20], developed upper bounds for LDPC codes and turbo-like codes using ensemble codes or the uniformly interleaved assumption, which bounds the performance averaged over all ensemble codes or all interleavers. Another effort [21] obtained an upper bound for a turbo-like code with a particular interleaver using a truncated union bound, which requires the minimum Hamming distance and the number of codewords at that distance; however, it gives a reliable bound only in the error-floor region where the minimum Hamming distance is dominant, i.e., at high signal-to-noise ratios. An upper bound on the ML decoding performance of a turbo-like code with a particular interleaver, or of an LDPC code with a particular parity check matrix, has so far been too complex to calculate, so only average bounds for ensemble codes could be obtained under the uniform interleaver assumption. In this paper, we propose a new bound technique for these cases using ML-estimated weight distributions, and we also show that practical iterative decoding is suboptimal in the ML sense: the simulated iterative decoding performance is worse than the proposed upper bound and hence worse than the ML decoding performance. To show this, we compare simulation results with the proposed upper bound and previous bounds. The proposed technique is based on the simple bound with an approximate weight distribution that includes several exact smallest-distance terms, rather than the ensemble distribution or the uniform interleaver assumption, and it yields a tighter upper bound than previous techniques for a turbo-like code with a particular interleaver and an LDPC code with a particular parity check matrix.
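
The truncated union bound mentioned for [21] is compact enough to state concretely. A minimal sketch for BPSK over AWGN under ML decoding, keeping only the smallest-distance terms of a (here hypothetical) weight distribution:

```python
import math

def q_func(x):
    # Gaussian tail probability Q(x)
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def truncated_union_bound(spectrum, rate, ebno_db):
    # P_word <= sum_d A_d * Q(sqrt(2 * d * R * Eb/N0));
    # truncating the sum to the terms near d_min gives the error-floor
    # estimate, reliable only at high SNR as the abstract notes.
    ebno = 10.0 ** (ebno_db / 10.0)
    return sum(a_d * q_func(math.sqrt(2.0 * d * rate * ebno))
               for d, a_d in spectrum.items())

# hypothetical spectrum: A_6 = 3 codewords at d_min = 6, A_8 = 12
print(truncated_union_bound({6: 3, 8: 12}, rate=0.5, ebno_db=3.0))
```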

Performance Analysis of MAP Algorithm by Robust Equalization Techniques in Nongaussian Noise Channel (비가우시안 잡음 채널에서 Robust 등화기법을 이용한 터보 부호의 MAP 알고리즘 성능분석)

  • 소성열
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.25 no.9A
    • /
    • pp.1290-1298
    • /
    • 2000
  • A Turbo Code decoder uses iterative decoding: it extracts extrinsic information for the bit being decoded by calculating both forward and backward metrics, and passes this information to the next decoding step. Turbo Codes show excellent BER performance, approaching the Shannon limit, when the interleaver is large and enough decoding iterations are run. However, the interleaver and the iterative decoding increase complexity and delay and make real-time processing difficult. In this paper, we analyze the MAP (maximum a posteriori) algorithm, one of the Turbo Code decoding algorithms, and the factors that determine its performance. The MAP algorithm iterates by computing soft decision values from the transition probabilities between all adjacent bits and the received symbols. Therefore, to improve the performance of the MAP algorithm, the reliability of adjacent received symbols must be ensured. The MAP algorithm itself cannot ensure this, so an additional technique is needed to reduce the number of decoding iterations. Consequently, we analyze the performance of the MAP algorithm and Turbo Codes in a non-Gaussian channel with a robust equalization technique applied, so that more reliable received-symbol information is fed into the MAP algorithm.
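
The forward/backward metric computation the abstract refers to is the log-domain BCJR recursion. A generic sketch (trellis structure and branch metrics supplied by the caller; the termination handling is an assumption):

```python
import math

def max_star(a, b):
    # Jacobian logarithm used throughout log-MAP:
    # log(e^a + e^b) = max(a, b) + log(1 + e^-|a-b|)
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def forward_backward(gamma, n_states):
    # gamma[k][(s, s2)]: log branch metric at step k from state s to s2.
    # Returns forward (alpha) and backward (beta) log state metrics; the
    # bit LLRs then come from alpha + gamma + beta summed (with max_star)
    # over input-1 branches versus input-0 branches.
    n = len(gamma)
    NEG = -1e9
    alpha = [[NEG] * n_states for _ in range(n + 1)]
    beta = [[NEG] * n_states for _ in range(n + 1)]
    alpha[0][0] = 0.0                 # assume the trellis starts in state 0
    beta[n] = [0.0] * n_states        # open (unterminated) trellis assumed
    for k in range(n):
        for (s, s2), g in gamma[k].items():
            alpha[k + 1][s2] = max_star(alpha[k + 1][s2], alpha[k][s] + g)
    for k in range(n - 1, -1, -1):
        for (s, s2), g in gamma[k].items():
            beta[k][s] = max_star(beta[k][s], beta[k + 1][s2] + g)
    return alpha, beta

# toy 2-state trellis with hand-made log branch metrics
gamma = [{(0, 0): -0.1, (0, 1): -2.0},
         {(0, 0): -0.3, (0, 1): -2.2, (1, 0): -1.5, (1, 1): -0.2}]
print(forward_backward(gamma, n_states=2))
```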

Two New Types of Candidate Symbol Sorting Schemes for Complexity Reduction of a Sphere Decoder

  • Jeon, Eun-Sung;Kim, Yo-Han;Kim, Dong-Ku
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.32 no.9C
    • /
    • pp.888-894
    • /
    • 2007
  • The computational complexity of a sphere decoder (SD) is conventionally reduced by a decoding-order scheme that sorts candidate symbols in ascending order of Euclidean distance from the output of a zero-forcing (ZF) receiver. However, since the ZF output may not be a reliable sorting reference, we propose two sorting schemes that allow faster decoding. The first uses the lattice points newly found in the previous search round instead of the ZF output (Type I). Since these lattice points are closer to the received signal than the ZF output, they serve as a more reliable sorting reference for finding the maximum likelihood (ML) solution. The second scheme sorts candidate symbols in descending order of the number of candidate symbols in the following layer, called child symbols (Type II). Both proposed sorting schemes can be combined with layer sorting for further complexity reduction. Through simulation, the Type I and Type II sorting schemes were found to provide 12% and 20% complexity reductions, respectively, over the conventional sorting scheme. When combined with layer sorting, Type I and Type II provide an additional 10-15% complexity reduction while maintaining detection performance.
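
The two proposed orderings are easy to state in code. A toy sketch for one layer of the search (4-PAM, illustrative names; the real decoder works layer by layer on the QR-decomposed channel):

```python
def sort_by_reference(candidates, reference):
    # Type I flavour: order candidates by closeness to the sorting
    # reference, which is the ZF output initially and the best lattice
    # point found in the previous search round thereafter.
    return sorted(candidates, key=lambda s: abs(s - reference) ** 2)

def sort_by_child_count(candidates, child_count):
    # Type II: order candidates by the number of admissible child
    # symbols in the following layer, descending.
    return sorted(candidates, key=child_count, reverse=True)

cands = [-3, -1, 1, 3]                          # 4-PAM alphabet
print(sort_by_reference(cands, 0.7))            # -> [1, -1, 3, -3]
children = {-3: 1, -1: 3, 1: 4, 3: 2}           # hypothetical child counts
print(sort_by_child_count(cands, lambda s: children[s]))  # -> [1, -1, 3, -3]
```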

A Design of Pipelined-parallel CABAC Decoder Adaptive to HEVC Syntax Elements (HEVC 구문요소에 적응적인 파이프라인-병렬 CABAC 복호화기 설계)

  • Bae, Bong-Hee;Kong, Jin-Hyeung
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.52 no.5
    • /
    • pp.155-164
    • /
    • 2015
  • This paper describes the design and implementation of a CABAC decoder that handles HEVC syntax elements with adaptive pipelined-parallel computation. Although CABAC offers a high compression rate, its decoding performance is limited by context-based sequential computation, strong data dependency between context models, and the bin-by-bin decoding procedure. In order to enhance HEVC CABAC decoding, flag-type syntax elements are adaptively pipelined by precomputing consecutive flag-type elements, and multi-bin syntax elements are decoded by processing up to three bins in parallel. Further, to accelerate the binary arithmetic decoder by reducing the critical path delay, the context model update and renormalization are precomputed in parallel for both the LPS and MPS cases, and the context model renewal is then selected by the preceding decoding result. Simulations show that the new HEVC CABAC architecture achieves a maximum performance of 1.01 bins/cycle, twice as fast as the conventional approach. In an ASIC design with a 65nm library, the CABAC architecture handles 224 Mbins/sec, which can decode QFHD HEVC video data in real time.
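
The MPS/LPS precompute-and-select step can be sketched in a few lines. This is a heavily simplified stand-in (a toy LPS range table, no bitstream refill during renormalization), not the HEVC-spec state machine:

```python
def renorm(rng, off):
    # keep the range in [256, 512); a real decoder also shifts fresh
    # bitstream bits into the offset here
    while rng < 256:
        rng, off = rng << 1, off << 1
    return rng, off

def decode_bin(rng, off, state, mps, lps_table):
    # Speculatively compute BOTH outcomes (range update, state update,
    # renormalization), then select one; only the final comparison stays
    # on the critical path, as in the paper's parallel precompute.
    r_lps = lps_table[state]
    r_mps = rng - r_lps
    mps_path = (mps, *renorm(r_mps, off), min(state + 1, len(lps_table) - 1), mps)
    new_mps = 1 - mps if state == 0 else mps    # MPS flips at the weakest state
    lps_path = (1 - mps, *renorm(r_lps, off - r_mps), max(state - 1, 0), new_mps)
    return lps_path if off >= r_mps else mps_path   # (bin, rng, off, state, mps)

lps_table = [240, 160, 96, 48]                  # toy LPS range table
print(decode_bin(rng=510, off=300, state=1, mps=0, lps_table=lps_table))
```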

A Leakage-Based Solution for Interference Alignment in MIMO Interference Channel Networks

  • Shrestha, Robin;Bae, Insan;Kim, Jae Moung
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.8 no.2
    • /
    • pp.424-442
    • /
    • 2014
  • Most recent research on iterative solutions for interference alignment (IA) assumes channel reciprocity and suppresses interference from undesired sources using an appropriate decoding matrix, also known as a receiver combining matrix, for multiple input multiple output (MIMO) interference channel networks and their reciprocal networks. In this paper, we present an alternative IA solution that designs the precoding and decoding matrices based on signal leakage (the signal power that leaks to unintended users) on each transmit side. We propose an iterative algorithm based on maximizing the signal-to-leakage-and-noise ratio (SLNR) of the signal transmitted from each transmitter. To remove the requirement of channel reciprocity, we use signal-to-interference-and-noise ratio (SINR) maximization in the design of the decoding matrices. We show through simulation that minimizing the leakage of each transmission helps achieve enhanced aggregate sum capacity in the system.
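
The per-transmitter SLNR maximization has a well-known closed form: the dominant generalized eigenvector of the desired channel's Gram matrix against the leakage-plus-noise matrix. A minimal single-stream sketch (shapes, names, and the use of scipy are assumptions):

```python
import numpy as np
from scipy.linalg import eigh

def slnr_precoder(H_des, H_leak, noise_var):
    # Maximize w^H A w / w^H B w with A = H_des^H H_des (desired signal)
    # and B = noise_var*I + sum_j H_j^H H_j (leakage plus noise):
    # the optimum is the dominant generalized eigenvector of (A, B).
    A = H_des.conj().T @ H_des
    B = noise_var * np.eye(H_des.shape[1], dtype=complex)
    for H in H_leak:
        B += H.conj().T @ H
    _, vecs = eigh(A, B)            # generalized eigenvalues, ascending
    w = vecs[:, -1]
    return w / np.linalg.norm(w)

rng = np.random.default_rng(1)
ch = lambda: rng.standard_normal((2, 4)) + 1j * rng.standard_normal((2, 4))
print(np.round(slnr_precoder(ch(), [ch(), ch()], noise_var=0.1), 3))
```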