• Title/Summary/Keyword: Run-time decoding


Fano Decoding with Timeout: Queuing Analysis

  • Pan, W. David;Yoo, Seong-Moo
    • ETRI Journal
    • /
    • v.28 no.3
    • /
    • pp.301-310
    • /
    • 2006
  • In mobile communications, a class of variable-complexity algorithms for convolutional decoding known as sequential decoding algorithms is of interest, since their computational time varies with changing channel conditions. The Fano algorithm is one well-known sequential decoding algorithm. Since the decoding time of a Fano decoder follows the Pareto distribution, a heavy-tailed distribution parameterized by the channel signal-to-noise ratio (SNR), buffers are required to absorb the variable decoding delays of Fano decoders. Furthermore, since the decoding time drawn from a given Pareto distribution can become unbounded, a maximum limit is often employed by a practical decoder to bound the worst-case decoding time. In this paper, we investigate the relations between buffer occupancy, decoding time, and channel conditions in a system where the Fano decoder is not allowed to run with unbounded decoding time. A timeout limit is thus imposed, so that decoding is terminated if the decoding time reaches the limit. We use discrete-time semi-Markov models to describe such a Fano decoding system with timeout limits. Our queuing analysis provides expressions characterizing the average buffer occupancy as a function of channel conditions and timeout limits. Both numerical and simulation results are provided to validate the analytical results. (A small Monte-Carlo sketch follows this entry.)

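As a rough illustration of the relationship described above, the following Python sketch simulates a single Fano decoder fed by Bernoulli packet arrivals, with per-packet decoding times drawn from a Pareto distribution and truncated at a timeout limit. The arrival probability, shape values, and timeout are arbitrary assumptions; this is a Monte-Carlo stand-in for the paper's semi-Markov analysis, not a reproduction of it.

```python
# Monte-Carlo stand-in for the queuing behavior (not the paper's semi-Markov model).
import random

def simulate(alpha, timeout, arrival_prob=0.15, n_slots=200_000, seed=1):
    """alpha: Pareto shape (larger at higher SNR); timeout: decoding-time limit in slots."""
    random.seed(seed)
    queue = 0          # packets buffered (waiting or in service)
    remaining = 0      # slots left for the packet currently being decoded
    occupancy_sum = 0
    for _ in range(n_slots):
        if random.random() < arrival_prob:        # Bernoulli packet arrivals (assumed)
            queue += 1
        if queue and remaining == 0:              # start decoding the next packet
            remaining = min(int(random.paretovariate(alpha)), timeout)
        if remaining > 0:
            remaining -= 1
            if remaining == 0:                    # finished decoding, or hit the timeout
                queue -= 1
        occupancy_sum += queue
    return occupancy_sum / n_slots

for shape in (1.2, 1.5, 2.0):                     # heavier tail (worse channel) first
    print(shape, simulate(shape, timeout=50))
```

Raising the timeout lets more heavy-tail decoding times run to completion, so the average occupancy grows fastest at low shape values (poor SNR), which is the regime the paper's analysis targets.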

Tiled Image Compression Method to Reduce the Amount of Memory Needed for Image Processing in Mobile Devices (모바일 단말기에서 이미지 처리에 필요한 메모리 사용량을 줄이기 위한 타일화 이미지 압축 기법)

  • Oh, Hwang-Seok
    • Journal of Korea Game Society
    • /
    • v.13 no.6
    • /
    • pp.35-42
    • /
    • 2013
  • A new compressed image format is proposed for using large images in mobile games without being constrained by hardware specifications such as memory capacity and processing power; the format encodes each block of a large image in scan-line order. Experiments show the effectiveness of the proposed method compared with general PNG in terms of compression ratio and the memory required during decoding. In addition, loading delay can be reduced by decoding only the displayed area of a large image at run time. (A decode-only-visible-tiles sketch follows this entry.)
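
A minimal sketch of the decode-only-what-is-displayed idea, using zlib-compressed fixed-size tiles as a stand-in for the paper's block format; the tile size and image dimensions are assumptions for illustration.

```python
# Toy tiled format: each tile is compressed independently (zlib here, not the paper's codec),
# so only tiles intersecting the displayed rectangle need to be decompressed.
import zlib

TILE = 64  # tile edge length in pixels (assumed)

def compress_tiles(pixels, width, height):
    """pixels: flat bytes of a width x height grayscale image -> dict keyed by tile (tx, ty)."""
    tiles = {}
    for ty in range(0, height, TILE):
        for tx in range(0, width, TILE):
            rows = []
            for y in range(ty, min(ty + TILE, height)):
                rows.append(pixels[y * width + tx : y * width + min(tx + TILE, width)])
            tiles[(tx, ty)] = zlib.compress(b"".join(rows))
    return tiles

def decode_viewport(tiles, x0, y0, x1, y1):
    """Decompress only the tiles overlapping the viewport [x0, x1) x [y0, y1)."""
    visible = {}
    for (tx, ty), blob in tiles.items():
        if tx < x1 and tx + TILE > x0 and ty < y1 and ty + TILE > y0:
            visible[(tx, ty)] = zlib.decompress(blob)
    return visible

img = bytes(range(256)) * 256                 # a dummy 256x256 grayscale image
tiles = compress_tiles(img, 256, 256)
print(len(decode_viewport(tiles, 0, 0, 128, 128)), "of", len(tiles), "tiles decoded")
```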

A study on the implementation of an ASN.1 tool set for various macro processing (다양한 마크로 처리를 위한 ASN.1 도구 세트의 구현에 관한연구)

  • 김홍렬;임제탁
    • Journal of the Korean Institute of Telematics and Electronics A
    • /
    • v.33A no.6
    • /
    • pp.1-10
    • /
    • 1996
  • Protocol specifications and service definitions for distributed open system applications are defined using ASN.1. Therefore, to implement an open system application such as MHS, well-defined encoding/decoding modules are needed that translate ASN.1 protocol specifications into their transfer syntaxes. Writing such modules by hand, however, is usually tedious, time-consuming, and error-prone. In this paper, we designed and implemented a new ASN.1 tool set that includes a new ASN.1 run-time library, called HY BER/DER, and an enhanced ASN.1-to-C compiler, called HYASNC+. HYASNC+ automatically generates C-language encoder/decoder stub files and header files for the basic ASN.1 types and subtypes defined in the X.208 recommendation and for all X.400 MHS system macro definitions. We evaluated the performance of the HYASNC+ compiler and the HY BER/DER run-time library, and tested the interoperability of the ASN.1 run-time library. (A toy DER encoding sketch follows this entry.)

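To make the role of such a run-time library concrete, here is a toy DER encoder for INTEGER and SEQUENCE values. It illustrates BER/DER-style tag-length-value encoding in general and is not the HYASNC+ or HY BER/DER code described in the paper.

```python
# Toy DER (distinguished encoding rules) encoder: INTEGER (tag 0x02), SEQUENCE (tag 0x30).
# Illustrative only; a generated encoder would cover all X.208 types and subtypes.

def der_length(n):
    """Definite-length octets: short form below 128, long form otherwise."""
    if n < 0x80:
        return bytes([n])
    body = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return bytes([0x80 | len(body)]) + body

def der_integer(value):
    """INTEGER with a non-negative value: minimal big-endian content octets."""
    assert value >= 0, "this sketch only handles non-negative INTEGERs"
    body = value.to_bytes(max(1, (value.bit_length() + 7) // 8), "big")
    if body[0] & 0x80:                 # pad so the value is not read back as negative
        body = b"\x00" + body
    return b"\x02" + der_length(len(body)) + body

def der_sequence(*components):
    body = b"".join(components)
    return b"\x30" + der_length(len(body)) + body

print(der_sequence(der_integer(5), der_integer(300)).hex())
```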

Efficient Hardware Implementation of Real-time Rectification using Adaptively Compressed LUT

  • Kim, Jong-hak;Kim, Jae-gon;Oh, Jung-kyun;Kang, Seong-muk;Cho, Jun-Dong
    • JSTS:Journal of Semiconductor Technology and Science
    • /
    • v.16 no.1
    • /
    • pp.44-57
    • /
    • 2016
  • Rectification is used as a preprocessing step to reduce the computational complexity of disparity estimation. However, rectification itself also requires complex computation. To minimize this complexity, rectification using a lookup table (R-LUT) has been introduced; since the R-LUT consumes a large amount of memory, rectification with a compressed LUT (R-CLUT) was proposed in turn. The more the memory consumption is reduced, however, the larger the decoding overhead becomes, so an acceptable trade-off between LUT size and decoding overhead is needed. In this paper, we present such a trade-off by adaptively combining simple coding methods: differential coding, modified run-length coding (MRLE), and Huffman coding. Differential coding transforms the coordinate data into differential form to improve coding efficiency, Huffman coding is applied for better stability, and MRLE for better performance. Our experimental results verified that the proposed coding scheme yields high performance while maintaining robustness. The method showed a 1 % to 16 % lower average inverse compression ratio than existing methods, and it maintains low latency with tolerable hardware overhead for real-time implementation. (A differential-plus-run-length sketch follows this entry.)
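
The following sketch shows two of the stages the paper combines, differential coding followed by a simple run-length pass over the deltas, on a made-up LUT row; the adaptive stage selection, the MRLE details, and the Huffman coder are omitted.

```python
# Differential coding + a plain run-length pass (the Huffman stage and the adaptive
# selection described in the paper are omitted; the LUT row below is made up).

def diff_rle_encode(coords):
    """coords: sequence of integer LUT entries -> (first value, [[delta, run_length], ...])."""
    deltas = [b - a for a, b in zip(coords, coords[1:])]
    runs = []
    for d in deltas:
        if runs and runs[-1][0] == d:
            runs[-1][1] += 1
        else:
            runs.append([d, 1])
    return coords[0], runs

def diff_rle_decode(first, runs):
    out = [first]
    for delta, count in runs:
        for _ in range(count):
            out.append(out[-1] + delta)
    return out

# Rectification LUT rows tend to change slowly, so deltas repeat and runs are long.
row = [100, 101, 102, 103, 103, 103, 104, 105, 106]
first, runs = diff_rle_encode(row)
assert diff_rle_decode(first, runs) == row
print(runs)   # [[1, 3], [0, 2], [1, 3]]
```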

Performance Improvement of Virtualization Sensitive Instruction Emulation by Instruction Decoding at Compile Time (컴파일 시간 명령어 디코딩을 통한 가상화 민감 명령어 에뮬레이션 성능 개선)

  • Shin, Dong-Ha;Yun, Kyung-Un
    • Journal of the Korea Society of Computer and Information
    • /
    • v.17 no.2
    • /
    • pp.1-11
    • /
    • 2012
  • Recently, several implementations that virtualize the ARM architecture have appeared. Since the current ARM architecture cannot be virtualized with the traditional "trap-and-emulation" technique, all virtualization-sensitive instructions are usually detected during the run time of a guest kernel and emulated rather than executed directly. The emulation for virtualization is usually implemented either by binary translation or by interpretation. Our research concerns improving the performance of interpretation-based emulation. Interpretation requires several steps: instruction fetching, instruction decoding, and instruction executing. In this paper, we propose a method that decodes all virtualization-sensitive instructions during the compilation of a guest kernel, reducing the time required for interpretation during the run time of the guest kernel. Our method provides both implementation simplicity and improved performance for interpretation-based emulation. (A toy pre-decoding sketch follows this entry.)
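
A toy sketch of the compile-time pre-decoding idea, using an invented 16-bit instruction format rather than real ARM encodings: sensitive instructions are decoded once into a table so the run-time interpreter only performs a lookup.

```python
# Toy encoding (hypothetical, not ARM): top 4 bits = opcode, next 4 = rd, low 8 = imm.
SENSITIVE_OPCODES = {0xF}            # pretend opcode 0xF is virtualization sensitive

def predecode(program):
    """Compile-time pass: map address -> (opcode, rd, imm) for sensitive instructions."""
    table = {}
    for addr, word in enumerate(program):
        opcode = word >> 12
        if opcode in SENSITIVE_OPCODES:
            table[addr] = (opcode, (word >> 8) & 0xF, word & 0xFF)
    return table

def emulate(program, table, regs):
    for addr, word in enumerate(program):
        if addr in table:                     # pre-decoded: no bit slicing at run time
            _opcode, rd, imm = table[addr]
            regs[rd] = imm                    # emulate the sensitive instruction
        else:
            pass                              # non-sensitive instructions run natively
    return regs

program = [0x1234, 0xF305, 0x2222, 0xF1FF]    # two sensitive instructions
regs = emulate(program, predecode(program), [0] * 16)
print(regs[3], regs[1])                       # 5 255
```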

DCT Coefficient Block Size Classification for Image Coding (영상 부호화를 위한 DCT 계수 블럭 크기 분류)

  • Gang, Gyeong-In;Kim, Jeong-Il;Jeong, Geun-Won;Lee, Gwang-Bae;Kim, Hyeon-Uk
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.3
    • /
    • pp.880-894
    • /
    • 1997
  • In this paper, we propose a new algorithm that performs the DCT (Discrete Cosine Transform) only within an area reduced by predicting the positions of quantization coefficients that will be zero. The proposed algorithm not only decreases the encoding and decoding times by reducing the amount of FDCT (forward DCT) and IDCT (inverse DCT) computation, but also increases the compression ratio by performing a different horizontal-vertical zig-zag scan, according to the classified block size of each block, during Huffman coding. Traditional image coding performs the same DCT computation and zig-zag scan over all blocks; the proposed algorithm instead reduces FDCT computation time by setting to zero, rather than computing, the quantization coefficients outside the classified block size during encoding. Likewise, it reduces IDCT computation time by performing the IDCT only for the dequantized coefficients within the classified block size during decoding. In addition, the algorithm shortens run lengths by carrying out a horizontal-vertical zig-zag scan appropriate to the classified block characteristics, thereby improving the compression ratio. The proposed algorithm can also be applied to 16x16 block processing, where the compression ratio and image resolution are optimal but the encoding and decoding times are long, and it can be extended to motion image coding that requires real-time processing. (A partial-DCT sketch follows this entry.)

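A sketch of the classified-block-size idea: compute only the k×k low-frequency coefficients of the 8×8 DCT-II and leave the rest at zero. The classification step itself (choosing k per block) is not shown, and the test block is an assumed smooth ramp.

```python
# Compute only the top-left k x k coefficients of the 8x8 2-D DCT-II; coefficients
# outside the classified region are simply left at zero (the paper's key saving).
import numpy as np

N = 8

def partial_dct2(block, k):
    x = np.arange(N)
    coeffs = np.zeros((N, N))
    for u in range(k):
        for v in range(k):
            cu = np.sqrt(1 / N) if u == 0 else np.sqrt(2 / N)
            cv = np.sqrt(1 / N) if v == 0 else np.sqrt(2 / N)
            basis = np.outer(np.cos((2 * x + 1) * u * np.pi / (2 * N)),
                             np.cos((2 * x + 1) * v * np.pi / (2 * N)))
            coeffs[u, v] = cu * cv * np.sum(block * basis)
    return coeffs

block = np.add.outer(np.arange(8), np.arange(8)).astype(float)  # assumed smooth ramp block
full = partial_dct2(block, 8)
low = partial_dct2(block, 4)
print(np.max(np.abs(full - low)))   # small relative to the DC term: little energy is lost
```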

The design of a 32-bit Microprocessor for a Sequence Control using an Application Specific Integrated Circuit (ASIC) (ICEIC'04)

  • Oh Yang
    • Proceedings of the IEEK Conference
    • /
    • 2004.08c
    • /
    • pp.486-490
    • /
    • 2004
  • Programmable logic controllers (PLCs) are widely used in manufacturing systems and process control. This paper presents the design of a 32-bit microprocessor for sequence control using an Application Specific Integrated Circuit (ASIC). The 32-bit microprocessor was designed in VHDL with a top-down method; the program memory was separated from the data memory for high-speed execution of the 274 specified sequence instructions, so sequence instructions can be executed concurrently with the instruction fetch cycle. To reduce the instruction decoding time and the data memory interface time, the instruction code size was set to 32 bits. Real-time debugging such as single-step run and breakpoint run was implemented, along with pulse instructions, step controllers, master controllers, BIN- and BCD-type arithmetic instructions, and barrel shift instructions commonly used in PLC systems. The designed microprocessor was synthesized with the SEIKO EPSON S1L50000 series, which contains 70,000 gates in 0.65 um technology. Finally, a benchmark showed that the designed 32-bit microprocessor has better performance than the Q4A PLC of Mitsubishi Corporation. (A toy split-memory sketch follows this entry.)

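A toy fetch/decode/execute loop illustrating the separate program and data memories mentioned above; the instruction format and opcodes are invented for illustration and are unrelated to the paper's 274-instruction PLC ISA.

```python
# Hypothetical Harvard-style machine: program memory and data memory are separate arrays,
# so an instruction fetch never competes with a data access.
PROGRAM = [  # 32-bit words: 8-bit opcode | 8-bit register | 16-bit data address (assumed layout)
    0x01_00_0000,  # LOAD  r0, data[0]
    0x01_01_0001,  # LOAD  r1, data[1]
    0x02_00_0001,  # ADD   r0, data[1]
    0x03_00_0002,  # STORE r0, data[2]
]
DATA = [7, 35, 0]

def step(pc, regs, data):
    word = PROGRAM[pc]                       # fetch from program memory only
    opcode, reg, addr = word >> 24, (word >> 16) & 0xFF, word & 0xFFFF
    if opcode == 0x01:
        regs[reg] = data[addr]               # LOAD
    elif opcode == 0x02:
        regs[reg] += data[addr]              # ADD
    elif opcode == 0x03:
        data[addr] = regs[reg]               # STORE
    return pc + 1

pc, regs = 0, [0] * 8
while pc < len(PROGRAM):
    pc = step(pc, regs, DATA)
print(DATA[2])   # 42
```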

Performance Analysis of MAP Algorithm by Robust Equalization Techniques in Nongaussian Noise Channel (비가우시안 잡음 채널에서 Robust 등화기법을 이용한 터보 부호의 MAP 알고리즘 성능분석)

  • 소성열
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.25 no.9A
    • /
    • pp.1290-1298
    • /
    • 2000
  • The Turbo Code decoder uses an iterative decoding technique that extracts extrinsic information for the bit being decoded by calculating both forward and backward metrics, and feeds that information into the next decoding step. Turbo Codes show excellent performance, approaching the Shannon limit in terms of BER, when the interleaver is large and enough decoding iterations are run, but the interleaver and the iterative decoding increase complexity and delay and make real-time processing difficult. In this paper, we analyze the MAP (maximum a posteriori) algorithm, one of the Turbo Code decoding methods, and the factors that determine its performance. The MAP algorithm performs iterative decoding by determining soft-decision values from the channel and transition probabilities between all adjacent bits and the received symbols; therefore, to improve its performance, the reliability of adjacent received symbols must be ensured. The MAP algorithm by itself cannot ensure this, so an additional technique is needed to reduce the number of decoding iterations. Consequently, we analyze the performance of the MAP algorithm and Turbo Codes in a non-Gaussian channel with a robust equalization technique applied, so that more reliable information about the received symbols is fed into the MAP algorithm. (A forward-backward sketch follows this entry.)

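A minimal sketch of the forward/backward metric recursion that MAP (BCJR) decoding is built on, shown on a toy two-state trellis with assumed transition and observation probabilities; a real turbo decoder adds code-specific branch metrics, a-priori/extrinsic information exchange between constituent decoders, and log-domain arithmetic.

```python
# Forward metrics (alpha), backward metrics (beta), and per-symbol posteriors on a toy
# two-state trellis; the probabilities below are assumptions for illustration only.
import numpy as np

trans = np.array([[0.9, 0.1],     # state transition probabilities
                  [0.1, 0.9]])
emit = np.array([[0.8, 0.2],      # P(received symbol | state)
                 [0.2, 0.8]])
prior = np.array([0.5, 0.5])
obs = [0, 0, 1, 1, 1]             # received (hard) symbols

K, S = len(obs), 2
alpha = np.zeros((K, S))
beta = np.ones((K, S))
alpha[0] = prior * emit[:, obs[0]]
for k in range(1, K):                              # forward recursion
    alpha[k] = (alpha[k - 1] @ trans) * emit[:, obs[k]]
for k in range(K - 2, -1, -1):                     # backward recursion
    beta[k] = trans @ (emit[:, obs[k + 1]] * beta[k + 1])

posterior = alpha * beta
posterior /= posterior.sum(axis=1, keepdims=True)  # soft decisions per symbol
print(np.round(posterior, 3))
```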

Implementation of the Real-time MPEG Layer III Audio Decoder (MPEG 계층 III 오디오 복호기 실시간 구현에 관한 연구)

  • 김수현;김진호;이창원;김헌중;차형태
    • Proceedings of the IEEK Conference
    • /
    • 1999.11a
    • /
    • pp.1123-1126
    • /
    • 1999
  • In this paper, we propose a real-time implementation of an MPEG-1 Layer III and MPEG-2 Layer III LSF audio decoding system based on the OAK DSP core. To solve the resolution problem, the system uses floating-point operations with double precision in the dequantization module. The ROM size is reduced by run-length coding of a reordered index table. The subband synthesis filter module is optimized for low computational complexity in terms of ROM and RAM size. To construct an efficient system, we used both the DSP core and a parser/Huffman decoder implemented in VHDL. (A requantization-precision sketch follows this entry.)

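A small sketch of the precision concern mentioned above: Layer III requantization raises the Huffman-decoded integers to the 4/3 power and applies a power-of-two gain (the gain handling here is simplified and is not the full ISO/IEC 11172-3 formula), and computing this in single precision loses accuracy relative to double precision.

```python
# Simplified requantization (4/3 power law with an assumed gain exponent), evaluated in
# single vs. double precision to show why the dequantization module needs extra precision.
import numpy as np

def requantize(values, gain_exp, dtype):
    v = np.asarray(values, dtype=dtype)
    return np.sign(v) * np.abs(v) ** (4.0 / 3.0) * dtype(2.0) ** dtype(gain_exp)

huffman_values = np.arange(-8191, 8192, 17)          # stand-in for decoded integer values
single = requantize(huffman_values, -3, np.float32)
double = requantize(huffman_values, -3, np.float64)
print("max relative error (single vs double):",
      np.max(np.abs(single - double) / np.maximum(np.abs(double), 1e-30)))
```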

Image Compression using Validity and Zero Coefficients by DCT(Discrete Cosine Transform) (DCT에서 유효계수와 Zero계수를 이용한 영상 압축)

  • Kim, Jang Won;Han, Sang Soo
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.1 no.3
    • /
    • pp.97-103
    • /
    • 2008
  • In this paper, a 256×256 input image is divided into 8×8 blocks, each classified as either a validity block or an edge block, for image compression. For a validity block, the DCT (Discrete Cosine Transform) is executed only for the DC coefficient, which is the valid coefficient. For an edge block, the positions where the quantization coefficients become zero are predicted, and we propose a new algorithm that executes the DCT only over this reduced region. The proposed algorithm not only reduces the computational complexity of the FDCT (forward DCT) and IDCT (inverse DCT) but also decreases the encoding and decoding times. Compression is further increased by applying a different horizontal-vertical zig-zag scan according to the classified block size during Huffman encoding; this scan, suited to the characteristics of each classified block, shortens the run lengths and thereby improves the compression ratio. (A block-classification sketch follows this entry.)

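A sketch of the validity-block/edge-block split: low-activity blocks keep only the DC coefficient, while high-activity blocks would be passed on to a reduced-region DCT (as in the partial-DCT sketch earlier in this list). The variance threshold is an assumed stand-in for the paper's classification rule.

```python
# Classify 8x8 blocks as "validity" (DC only) or "edge" (needs further transform coding).
# The threshold and test blocks are assumptions for illustration.
import numpy as np

EDGE_THRESHOLD = 25.0   # assumed activity threshold on pixel variance

def classify_and_encode(block):
    """block: 8x8 float array -> ('dc', coefficient) or ('edge', block to transform)."""
    if np.var(block) < EDGE_THRESHOLD:
        return ("dc", 8.0 * np.mean(block))   # DC term of the orthonormal 8x8 DCT-II
    return ("edge", block)

flat = np.full((8, 8), 120.0) + np.random.default_rng(0).normal(0, 1, (8, 8))
edgy = np.zeros((8, 8))
edgy[:, 4:] = 255.0
print(classify_and_encode(flat)[0], classify_and_encode(edgy)[0])   # dc edge
```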