• Title/Summary/Keyword: encoding table


An Encoding Method for Presentation of ISO 19848 Data Channel and Management of Ship Equipment Failure-Maintenance Types (ISO 19848 데이터 채널 표현과 선박 기관장비 고장·유지보수 유형 관리를 위한 코드화 기법)

  • Hwang, Hun-Gyu;Woo, Yun-Tae;Kim, Bae-Sung;Shin, Il-Sik;Lee, Jang-Se
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.1 / pp.134-137 / 2020
  • Recently, there has been growing emphasis on supporting vessel maintenance and management systems with data acquired from engine-room equipment, but data exchange and management remain limited. To address this, the ISO published the ISO 19847 and ISO 19848 standards. In this paper, we analyze the ISO 19848 requirements for identifying data channel IDs for ship equipment and propose examples of applicable encoding techniques. In addition, we illustrate by example how the proposed technique can be applied to manage the failure and maintenance types of the ship's engine-room equipment. With this method, shipboard equipment can exchange data by sharing a common code table and can express where a failure occurred and what response is needed or was taken.
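
A shared code table of the kind the abstract describes can be sketched as follows. All codes, names, and mappings below are invented for illustration; they are not actual ISO 19848 identifiers.

```python
# Hypothetical code tables in the spirit of the paper: a compact dotted
# code encodes equipment location, failure type, and maintenance action.
LOCATION_CODES = {"E1": "main engine cylinder", "E2": "fuel oil pump"}
FAILURE_CODES = {"F01": "overheating", "F02": "abnormal vibration"}
ACTION_CODES = {"A01": "part replaced", "A02": "inspection scheduled"}

def encode_event(location, failure, action):
    """Pack an event into a compact dotted code string."""
    return f"{location}.{failure}.{action}"

def decode_event(code):
    """Expand a dotted code back into human-readable fields."""
    loc, fail, act = code.split(".")
    return (LOCATION_CODES[loc], FAILURE_CODES[fail], ACTION_CODES[act])

code = encode_event("E2", "F01", "A01")
print(code)                # E2.F01.A01
print(decode_event(code))  # ('fuel oil pump', 'overheating', 'part replaced')
```

Two parties that share the same tables can exchange only the short codes, which is the data-exchange benefit the abstract claims.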

Design of Efficient Memory Architecture for Coeff_Token Encoding in H.264/AVC Video Coding Standard (H.264/AVC 동영상 압축 표준에서 Coeff_token 부호화를 위한 효율적임 메모리 구조 설계)

  • Moon, Yong Ho;Park, Kyoung Choon;Ha, Seok Wun
    • IEMEK Journal of Embedded Systems and Applications / v.5 no.2 / pp.77-83 / 2010
  • In this paper, we propose an efficient memory architecture for coeff_token encoding in the H.264/AVC standard. The VLCTs used to encode the coeff_token syntax element are implemented in memory. In general, the memory size must be kept small because it affects the cost and operating speed of the system. Based on an analysis of the codewords in the VLCTs, a new memory architecture is designed. The proposed architecture yields about 24% memory savings compared to the conventional architecture.
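
Table-driven coeff_token encoding, which the memory architecture above stores, can be sketched as a lookup keyed by (TotalCoeff, TrailingOnes). The bit strings below are placeholders, not the actual H.264/AVC table entries.

```python
# Illustrative VLC table in the style of H.264 coeff_token: the codeword
# is read from a table indexed by (TotalCoeff, TrailingOnes).
COEFF_TOKEN_TABLE = {
    (0, 0): "1",
    (1, 0): "000101",
    (1, 1): "01",
    (2, 0): "00000111",
    (2, 1): "000100",
    (2, 2): "001",
}

def encode_coeff_token(total_coeff, trailing_ones):
    # A hardware implementation stores this table in memory; the paper's
    # contribution is reducing the size of that memory.
    return COEFF_TOKEN_TABLE[(total_coeff, trailing_ones)]

print(encode_coeff_token(2, 2))  # 001
```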

An Internal Pattern Run-Length Methodology for Slice Encoding

  • Lee, Lung-Jen;Tseng, Wang-Dauh;Lin, Rung-Bin
    • ETRI Journal / v.33 no.3 / pp.374-381 / 2011
  • A simple and effective compression method is proposed for multiple-scan testing. For a given test set, each test pattern is compressed from the viewpoint of slices. An encoding table exploiting seven types of frequently occurring patterns is used, and compression is achieved by mapping slice data onto codewords. The decompression logic is small and easy to implement, and the method is also applicable to schemes adopting a single scan chain. Experimental results show that the method achieves a good compression ratio.
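
The table-based slice compression idea can be sketched as below. The pattern set and codewords are invented for illustration; they are not the seven pattern types from the paper.

```python
# Frequently occurring slice patterns map to short codewords via an
# encoding table; rare patterns are emitted raw behind an escape prefix.
ENCODING_TABLE = {
    "0000": "00",   # all-zero slice
    "1111": "01",   # all-one slice
    "0101": "100",
    "1010": "101",
}

def compress_slices(bits, width=4):
    out = []
    for i in range(0, len(bits), width):
        slice_ = bits[i:i + width]
        # Escape code "11" + raw slice for patterns not in the table.
        out.append(ENCODING_TABLE.get(slice_, "11" + slice_))
    return "".join(out)

compressed = compress_slices("0000111101011001")
print(compressed)  # shorter than the 16 input bits
```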

Efficient Transform-Domain Noise Reduction for H.264 Video Encoding (H.264 동영상 부호화를 위한 효과적인 주파수 영역 잡음 제거)

  • Song, Byung-Cheol
    • Journal of Broadcast Engineering / v.14 no.4 / pp.501-508 / 2009
  • This paper proposes an efficient transform-domain noise reduction scheme for an H.264 video encoder, in which generalized Wiener filtering is performed within the quantization process by multiplying each transform block by an adaptive multiplication factor. In practice, the computational complexity of the proposed scheme is negligible because the multiplication is replaced with a simple look-up table. Experimental results show that the proposed scheme provides outstanding noise reduction performance in an H.264 video encoder.
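
Folding a per-block multiplication factor into the quantization step via a look-up table can be sketched as follows. The factor value and quantization step are arbitrary assumptions, not the paper's parameters.

```python
# Precompute quantized outputs for every coefficient magnitude so the
# encoder does one table read instead of a multiply per coefficient.
Q_STEP = 8        # assumed quantization step size
FACTOR = 0.75     # assumed Wiener-style attenuation factor for this block
MAX_COEFF = 255

LUT = [round(c * FACTOR / Q_STEP) for c in range(MAX_COEFF + 1)]

def quantize_block(coeffs):
    # Look up the magnitude, then restore the sign.
    return [LUT[abs(c)] * (1 if c >= 0 else -1) for c in coeffs]

print(quantize_block([64, -32, 8, 0]))  # [6, -3, 1, 0]
```

The table is rebuilt only when the factor or step changes, which is why the per-coefficient cost becomes negligible.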

Table based Single Pass Algorithm for Clustering News Articles

  • Jo, Tae-Ho
    • International Journal of Fuzzy Logic and Intelligent Systems / v.8 no.3 / pp.231-237 / 2008
  • This research proposes a modified version of the single pass algorithm specialized for text clustering. Encoding documents into numerical vectors, as the traditional single pass algorithm requires, causes two main problems: huge dimensionality and sparse distribution. To address both problems, this research modifies the single pass algorithm into a version in which documents are encoded not as numerical vectors but in another form: each document is mapped into a table, and an operation on two tables is defined so that the single pass algorithm can work on them. The goal of this research is to improve the performance of the single pass algorithm for text clustering by specializing it in this way.
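
A minimal sketch of single-pass clustering over word-frequency tables instead of numerical vectors is shown below. The similarity measure and threshold are illustrative assumptions, not the paper's definitions.

```python
from collections import Counter

def table(doc):
    """Represent a document as a word-frequency table."""
    return Counter(doc.lower().split())

def similarity(t1, t2):
    # Overlap of shared words, normalized by the smaller table.
    shared = sum((t1 & t2).values())
    return shared / min(sum(t1.values()), sum(t2.values()))

def single_pass(docs, threshold=0.5):
    clusters = []  # each cluster: [representative_table, member_indices]
    for i, doc in enumerate(docs):
        t = table(doc)
        best = max(clusters, key=lambda c: similarity(c[0], t), default=None)
        if best and similarity(best[0], t) >= threshold:
            best[0] += t          # merge tables to update the representative
            best[1].append(i)
        else:
            clusters.append([t, [i]])
    return [members for _, members in clusters]

docs = ["stock market rises", "market rises again", "rain expected today"]
print(single_pass(docs))  # [[0, 1], [2]]
```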

Efficient generation of CGH using statistical redundancy of 3-D images

  • Kim, Seung-Cheol;Kim, Eun-Soo
    • Korean Information Display Society: Conference Proceedings / 2008.10a / pp.722-725 / 2008
  • In this paper, we propose a new approach for fast generation of CGHs of a 3-D object using run-length encoding and the N-LUT method. In this approach, the number of object points involved in calculating the CGH pattern can be dramatically reduced, and as a result a significant increase in computational speed is obtained.
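
Run-length encoding applied to object-point data can be sketched as below; the data layout is an invented illustration, not the paper's N-LUT scheme.

```python
def run_length_encode(row):
    """Collapse a row of point intensities into [value, run_length] pairs."""
    runs = []
    for v in row:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

row = [0, 0, 0, 5, 5, 0, 0, 7]
runs = run_length_encode(row)
print(runs)  # [[0, 3], [5, 2], [0, 2], [7, 1]]

# Only non-zero runs contribute object points to the hologram calculation,
# so the long zero runs are skipped in one step each.
nonzero_points = sum(n for v, n in runs if v != 0)
print(nonzero_points)  # 3
```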


The Minimum PAPR Code for OFDM Systems

  • Kang, Seog-Geun
    • ETRI Journal / v.28 no.2 / pp.235-238 / 2006
  • In this letter, a block code that minimizes the peak-to-average power ratio (PAPR) of orthogonal frequency division multiplexing (OFDM) signals is proposed. It is shown that peak envelope power is invariant to cyclic shift and codeword inversion. The systematic encoding rule for the proposed code is composed of searching for a seed codeword, shifting the register elements, and determining codeword inversion. This eliminates the look-up table for one-to-one correspondence between the source and the coded data. Computer simulation confirms that OFDM systems with the proposed code always have the minimum PAPR.
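
The letter's key observation, that peak envelope power is invariant to cyclic shift and codeword inversion, can be checked numerically with a direct DFT sum. The BPSK codeword below is an arbitrary example.

```python
import cmath

def ofdm_signal(code):
    """Time-domain OFDM symbol via a direct inverse DFT sum."""
    n = len(code)
    return [sum(c * cmath.exp(2j * cmath.pi * k * t / n)
                for k, c in enumerate(code)) / n
            for t in range(n)]

def papr(code):
    """Peak-to-average power ratio of the OFDM symbol for a codeword."""
    powers = [abs(v) ** 2 for v in ofdm_signal(code)]
    return max(powers) / (sum(powers) / len(powers))

code = [1, -1, -1, 1, 1, 1, -1, 1]
shifted = code[3:] + code[:3]     # cyclic shift by 3
inverted = [-c for c in code]     # codeword inversion

# All three PAPR values coincide up to floating-point error.
print(papr(code), papr(shifted), papr(inverted))
```

This invariance is what lets the encoder search only seed codewords and derive the rest by shifting and inversion, eliminating the one-to-one look-up table.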


A Vector-Perturbation Based Lattice-Reduction Using Look-Up Table (격자 감소 기반 전부호화 기법에서의 효율적인 Look-Up Table 생성 방법)

  • Han, Jae-Won;Park, Dae-Young
    • The Journal of Korean Institute of Communications and Information Sciences / v.36 no.6A / pp.551-557 / 2011
  • We investigate lattice-reduction-aided precoding techniques using a look-up table (LUT) for multi-user multiple-input multiple-output (MIMO) systems. Lattice-reduction-aided vector perturbation (VP) achieves a large sum capacity with low encoding complexity. Nevertheless, lattice reduction based on the LLL algorithm still requires high computational complexity, since it involves several iterations of size reduction and column-vector exchange. In this paper, we apply LUT-aided lattice reduction to VP and propose a scheme to generate the LUT efficiently. Simulation results show that the proposed scheme attains a similar orthogonality defect and bit-error rate (BER) with a smaller memory size.
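
The two operations the LLL algorithm iterates, size reduction and column exchange, can be illustrated in two dimensions by Lagrange-Gauss reduction. This is only a sketch of the operations whose repeated cost motivates the paper's LUT; real LLL on complex MIMO channel matrices is more involved.

```python
def gauss_reduce(b1, b2):
    """Lagrange-Gauss reduction of a 2-D integer lattice basis."""
    def dot(u, v):
        return u[0] * v[0] + u[1] * v[1]
    while True:
        # Size reduction: subtract the rounded projection of b2 onto b1.
        mu = round(dot(b1, b2) / dot(b1, b1))
        b2 = [b2[0] - mu * b1[0], b2[1] - mu * b1[1]]
        # Column exchange if the reduced vector became the shorter one.
        if dot(b2, b2) < dot(b1, b1):
            b1, b2 = b2, b1
        else:
            return b1, b2

print(gauss_reduce([1, 1], [3, 2]))  # ([1, 0], [0, 1])
```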

A Fast Encoding Algorithm for Image Vector Quantization Based on Prior Test of Multiple Features (복수 특징의 사전 검사에 의한 영상 벡터양자화의 고속 부호화 기법)

  • Ryu, Chul-hyung;Ra, Sung-woong
    • The Journal of Korean Institute of Communications and Information Sciences / v.30 no.12C / pp.1231-1238 / 2005
  • This paper presents a new fast encoding algorithm for image vector quantization that incorporates the partial distances of multiple features into a multidimensional look-up table (LUT). Although earlier methods also use multiple features, they handle them step by step in search order and in the calculation process, whereas the proposed algorithm exploits these features simultaneously through the LUT. The paper describes in full how to build the LUT while accounting for the boundary effect at a feasible memory cost, and how to terminate the current search using the partial distances stored in the LUT. Simulation results confirm the effectiveness of the proposed algorithm. When the codebook size is 256, the computational complexity of the proposed algorithm can be reduced to as little as 70% of the operations required by recently proposed alternatives such as the ordered Hadamard transform partial distance search (OHTPDS) and the modified L2-norm pyramid (M-L2NP). With feasible preprocessing time and memory cost, the proposed algorithm reduces the computational complexity to below 2.2% of that required by the exhaustive full search (EFS) algorithm while preserving the same encoding quality as the EFS algorithm.
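
The partial-distance early-termination idea underlying this family of fast VQ encoders can be sketched as below. The codebook is a toy example; the paper additionally prunes candidates with its multidimensional LUT of features.

```python
def pds_encode(vector, codebook):
    """Nearest-codeword search with partial distance search (PDS):
    abandon a codeword as soon as its running squared distance
    exceeds the best full distance found so far."""
    best_index, best_dist = -1, float("inf")
    for i, codeword in enumerate(codebook):
        dist = 0.0
        for x, c in zip(vector, codeword):
            dist += (x - c) ** 2
            if dist >= best_dist:   # early termination on partial distance
                break
        else:
            best_index, best_dist = i, dist
    return best_index

codebook = [[0, 0, 0, 0], [10, 10, 10, 10], [9, 9, 8, 8]]
print(pds_encode([9, 9, 9, 9], codebook))  # 2
```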

The Influence of Quantization Table in view of Information Hiding Techniques Modifying Coefficients in Frequency Domain (주파수 영역 계수 변경을 이용한 정보은닉기술에서의 양자화 테이블의 영향력)

  • Choi, Yong-Soo;Kim, Hyoung-Joong;Park, Chun-Myoung
    • Journal of the Institute of Electronics Engineers of Korea CI / v.46 no.1 / pp.56-63 / 2009
  • Nowadays, most Internet content is delivered in compressed form, which offers many advantages such as reduced communication bandwidth and transmission time. In JPEG compression, quantization is the key step that achieves the compression. In general signal processing, quantization converts a continuous analog signal into a discrete digital signal; in JPEG compression, it reduces the magnitude of pixel values in the spatial domain or of coefficients in the frequency domain. Many data hiding algorithms have also been developed for such compressed files. In this paper, we examine the influence of the quantization table used in the JPEG compression process. Although most algorithms modify frequency coefficients with image quality in mind, they ignore the influence of the quantization factor corresponding to the modified coefficient. By taking this result into account, existing algorithms can more easily evaluate their performance.
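
The paper's point can be illustrated numerically: the same unit change to a quantized coefficient level produces a reconstruction change equal to the quantization factor at that position. The step sizes below are arbitrary examples, not a standard JPEG table.

```python
def dequantize(level, q):
    """JPEG-style reconstruction: quantized level times step size."""
    return level * q

# The same +1 change to the quantized level has a different effect on the
# reconstructed coefficient depending on the quantization factor q.
for q in (2, 16, 40):
    before = dequantize(5, q)
    after = dequantize(5 + 1, q)
    print(f"q={q}: reconstructed value changes by {after - before}")
```

This is why a hiding scheme that flips levels at high-q positions distorts the image far more than one that flips levels where q is small.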