• Title/Summary/Keyword: Error-control coding

A Simple Element Inverse Jacket Transform Coding (단순한 엘레멘트 인버스 재킷 변환 부호화)

  • Lee, Kwang-Jae;Park, Ju-Yong;Lee, Moon-Ho
    • Journal of the Institute of Electronics Engineers of Korea TC
    • /
    • v.44 no.1
    • /
    • pp.132-137
    • /
    • 2007
  • Jacket transforms are a class of transforms that are simple to calculate, easily inverted, and size-flexible. Previously reported jacket transforms were generalizations of the well-known Walsh-Hadamard transform (WHT) and the center-weighted Hadamard transform (CWHT). In this paper we present a new class of jacket transform derived from neither the WHT nor the CWHT. This class of transform can be applied to any even-length vector, is applicable over finite fields, and is useful for constructing error-control codes.
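
As a rough illustration of the property the abstract relies on (not the paper's new construction), a jacket matrix is one whose inverse is obtained from its element-wise reciprocal: for an n x n jacket matrix J, the inverse is the transposed element-wise reciprocal divided by n. The sketch below checks this for a Walsh-Hadamard matrix and a weighted Hadamard-like example; both matrices are illustrative choices, not taken from the paper.

```python
# A minimal sketch, assuming only the generic jacket property (not the paper's
# new construction): J is a jacket matrix if its inverse equals the transposed
# element-wise reciprocal of J divided by its order n.
import numpy as np

def is_jacket(J, tol=1e-9):
    """Check whether J^{-1} == (1/n) * (element-wise reciprocal of J)^T."""
    n = J.shape[0]
    candidate_inverse = (1.0 / J).T / n
    return np.allclose(J @ candidate_inverse, np.eye(n), atol=tol)

# Order-2 Walsh-Hadamard matrix (entries +-1): a trivial jacket matrix.
H2 = np.array([[1.0,  1.0],
               [1.0, -1.0]])

# A center-weighted Hadamard-like matrix with weight 2 (illustrative example).
W4 = np.array([[1.0,  1.0,  1.0,  1.0],
               [1.0, -2.0,  2.0, -1.0],
               [1.0,  2.0, -2.0, -1.0],
               [1.0, -1.0, -1.0,  1.0]])

print(is_jacket(H2), is_jacket(W4))   # True True
```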

Selective Decoding Schemes and Wireless MAC Operating in MIMO Ad Hoc Networks

  • Suleesathira, Raungrong;Aksiripipatkul, Jansilp
    • Journal of Communications and Networks
    • /
    • v.13 no.5
    • /
    • pp.421-427
    • /
    • 2011
  • Problems encountered in IEEE 802.11 medium access control (MAC) design are interference from neighboring or hidden nodes and collisions from simultaneous transmissions within the same contention floor. This paper presents selective decoding schemes in a MAC protocol for multiple-input multiple-output (MIMO) ad hoc networks. The schemes mitigate interference by using a developed minimum mean-squared error (MMSE) technique. This interference mitigation, combined with maximum-likelihood decoding of the Alamouti code, enables the receiver to decode and separate the desired data streams from co-channel data streams. As a result, it allows a pair of simultaneous transmissions to the same or different nodes, which increases network utilization. Moreover, the three presented decoding schemes and timeline operations are selected according to the transmission demands of neighboring nodes so as to avoid collisions. The selection is determined by the number of request-to-send (RTS) packets and the type of clear-to-send packets. Both theoretical channel capacity and simulation results show that the proposed selective-decoding MAC protocol outperforms the interference-mitigation-using-multiple-antennas and parallel-RTS-processing protocols for the cases of (1) a single data stream and (2) two independent data streams transmitted simultaneously by two independent transmitters.
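
For context only: the interference mitigation described in the abstract rests on a standard linear MMSE receive filter. The sketch below shows such a filter in isolation for an assumed 2x2 channel with QPSK symbols; it is not the paper's selective decoding scheme or MAC timeline, and the antenna counts, noise variance, and symbol alphabet are assumptions.

```python
# Illustrative only (not the paper's selective decoding schemes or MAC
# timeline): a linear MMSE receive filter of the kind used to suppress a
# co-channel stream before decoding.
import numpy as np

rng = np.random.default_rng(0)
nt, nr, sigma2 = 2, 2, 0.1                     # antennas and noise variance (assumed)

H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
s = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2), size=nt)
n = np.sqrt(sigma2 / 2) * (rng.standard_normal(nr) + 1j * rng.standard_normal(nr))
y = H @ s + n                                   # two streams received on two antennas

# MMSE filter W = (H^H H + sigma^2 I)^{-1} H^H separates the co-channel streams.
W = np.linalg.inv(H.conj().T @ H + sigma2 * np.eye(nt)) @ H.conj().T
s_hat = W @ y
print(np.sign(s.real), np.sign(s_hat.real))     # hard decisions on the I component
```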

Performance Evaluation of Underwater Code Division Multiple Access Scheme on Forward-Link through Water-Tank and Lake Experiment (수조 및 저수지 실험을 통한 수중 코드 분할 다중 접속 기법 순방향 링크 성능 분석)

  • Seo, Bo-Min;Son, Kweon;Cho, Ho-Shin
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.39C no.2
    • /
    • pp.199-208
    • /
    • 2014
  • Code division multiple access (CDMA) is one of the promising medium access control (MAC) schemes for underwater acoustic sensor networks because of its robustness against frequency-selective fading and its high frequency-reuse efficiency. As a way of performance evaluation, sea or lake experiments have been employed along with computer simulations. In this study, we first design an underwater CDMA forward-link transceiver and evaluate its feasibility against the harsh underwater acoustic channel in a water tank. Then, based on the water-tank results, we improve the transceiver and demonstrate the improvements in a lake experiment. A pseudo-random noise code acquisition process is added for phase-error correction before the receiver decodes the user data by means of a Walsh code. Interleaving and convolutional channel coding are also used for performance improvement. Experimental results show that the multiplexed data are recovered error-free by demultiplexing at the receivers in the two-user case, and with a bit error rate of less than 15% in the three- and four-user cases.
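
As a toy illustration of the forward-link principle the abstract builds on (orthogonal Walsh codes separating users), and not of the paper's transceiver, PN acquisition, or channel-coding chain, the sketch below spreads two users' bits with Walsh codes and recovers one user by correlation. The spreading factor and bit patterns are arbitrary assumptions.

```python
# A minimal sketch of Walsh-code forward-link multiplexing/demultiplexing,
# not the paper's underwater CDMA transceiver.
import numpy as np
from scipy.linalg import hadamard

L = 8                               # spreading factor (assumed)
walsh = hadamard(L)                 # rows are mutually orthogonal Walsh codes
bits = np.array([[1, -1, 1],        # user 0 data (+-1)
                 [-1, -1, 1]])      # user 1 data (+-1)

# Spread each user with its own Walsh code and sum (forward-link multiplexing).
tx = sum(np.outer(bits[u], walsh[u + 1]).ravel() for u in range(2))

# Despread user 0: correlate each chip block with its code and take the sign.
rx_blocks = tx.reshape(-1, L)
user0_hat = np.sign(rx_blocks @ walsh[1] / L)
print(user0_hat)                    # recovers [ 1. -1.  1.]
```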

Computer Vision Based Measurement, Error Analysis and Calibration (컴퓨터 시각(視覺)에 의거한 측정기술(測定技術) 및 측정오차(測定誤差)의 분석(分析)과 보정(補正))

  • Hwang, H.;Lee, C.H.
    • Journal of Biosystems Engineering
    • /
    • v.17 no.1
    • /
    • pp.65-78
    • /
    • 1992
  • When using a computer vision system for measurement, the geometrically distorted input image usually restricts the site and size of the measuring window. A geometrically distorted image caused by the image sensing and processing hardware degrades the accuracy of the visual measurement and prevents arbitrary selection of the measuring scope. Therefore, image calibration is inevitable to improve the measuring accuracy. A calibration process is usually done in four steps: measurement, modeling, parameter estimation, and compensation. In this paper, an efficient error-calibration technique for a geometrically distorted input image was developed using a neural network. After calibrating a unit pixel, the distorted image was compensated by training a CMLAN (Cerebellar Model Linear Associator Network) without modeling the behavior of any system element. The input/output training pairs for the network were obtained by processing the image of the devised sample pattern. The generalization property of the network successfully compensates the distortion errors of untrained arbitrary pixel points in the image space. The error convergence of the trained network with respect to the network control parameters was also presented. The compensated image was then post-processed using a simple DDA (Digital Differential Analyzer) to avoid pixel disconnectivity. The compensation effect was verified using geometric primitives of known size. A way to extract a real-scaled geometric quantity of the object directly from the 8-directional chain coding was also devised and coded. Since the developed calibration algorithm does not require any knowledge for modeling system elements or estimating parameters, it can be applied simply to any image processing system. Furthermore, it efficiently enhances the measurement accuracy and allows arbitrary sizing and locating of the measuring window. The applied and developed algorithms were coded in a menu-driven way using the MS C language Ver. 6.0, PC VISION PLUS library functions, and VGA graphics functions.
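
The CMLAN itself is not reproduced here. The sketch below only illustrates the calibration idea the abstract describes: learn a mapping from distorted pixel coordinates to true coordinates from a sampled pattern, then generalize it to untrained pixels. An ordinary least-squares polynomial model stands in for the network, and the grid, distortion, and noise are synthetic assumptions.

```python
# A stand-in sketch for learned image calibration (not the paper's CMLAN):
# fit a quadratic map from observed (distorted) coordinates to true ones,
# then correct an arbitrary untrained point.
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical calibration pattern: distorted (u, v) observed for known (x, y).
xy_true = np.stack(np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5)), -1).reshape(-1, 2)
uv_obs = xy_true + 0.05 * xy_true**2 + 0.001 * rng.standard_normal(xy_true.shape)

def features(uv):
    """Quadratic polynomial features of the distorted coordinates."""
    u, v = uv[:, 0], uv[:, 1]
    return np.stack([np.ones_like(u), u, v, u * v, u**2, v**2], axis=1)

coeffs, *_ = np.linalg.lstsq(features(uv_obs), xy_true, rcond=None)

# Correct an arbitrary (untrained) distorted point.
uv_new = np.array([[0.33, 0.72]])
print(features(uv_new) @ coeffs)    # approximately the undistorted coordinates
```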

Forward rate control of MPEG-2 video based on distortion-rate estimation (왜곡-비트율 추정에 근거한 MPEG-2 비디오의 순방향 비트율 제어)

  • 홍성훈;김성대;최재각;홍성용
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.23 no.8
    • /
    • pp.2010-2024
    • /
    • 1998
  • In video coding, it is important to improve the average picture quality as well as to maintain consistent picture quality between consecutive pictures. In this paper, we propose a distortion-rate estimation method for MPEG-2 video and a forward rate control method that uses the estimation result to obtain improved and consistent picture quality for CBR (constant bit rate) encoded MPEG-2 video. The proposed distortion-rate estimation enables us to predict the distortion and the bits generated by an encoded picture at a given quantization step size, and vice versa. The most attractive features of the proposed distortion-rate estimation are its accuracy and a computational complexity low enough for practical video coding. The proposed rate control first determines a quantization parameter per frame through the following procedure: distortion-rate estimation, target bit allocation, distortion constraint, and VBV (video buffering verifier) constraint. This quantization parameter is then applied to the encoding so that improved and consistent picture quality is obtained. Furthermore, the proposed rate control method can solve the error-propagation problem caused by scene changes or anchor-picture degradation by using B-picture skipping and by guaranteeing a minimum bit allocation for the anchor picture. Experimental results comparing the proposed forward rate control method with the TM5 method show that the proposed method yields more improved and consistent picture quality than TM5.
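
The paper's estimator is not reproduced here; the following sketch only shows the general shape of forward, model-based rate control, using the simple TM5-style complexity model R(Q) ~ X/Q as a stand-in for the paper's distortion-rate estimate. All numbers are arbitrary.

```python
# A hedged sketch of model-based forward rate control (TM5-style stand-in,
# not the paper's distortion-rate estimator).
def update_complexity(prev_bits, prev_q):
    """Frame 'complexity' X = R * Q, re-estimated after each encoded frame."""
    return prev_bits * prev_q

def choose_q(target_bits, complexity, q_min=1, q_max=31):
    """Invert R(Q) = X / Q for the target bit count, clipped to the legal range."""
    q = complexity / max(target_bits, 1)
    return min(max(q, q_min), q_max)

# Toy usage: the previous frame took 400000 bits at Q = 8; the budget is 300000 bits.
X = update_complexity(prev_bits=400_000, prev_q=8)
print(choose_q(target_bits=300_000, complexity=X))   # ~10.7
```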

Recurrent Neural Network with Backpropagation Through Time Learning Algorithm for Arabic Phoneme Recognition

  • Ismail, Saliza;Ahmad, Abdul Manan
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 2004.08a
    • /
    • pp.1033-1036
    • /
    • 2004
  • The study of speech recognition and understanding has been carried out for many years. In this paper, we propose a new type of recurrent neural network architecture for speech recognition, in which each output unit is connected to itself and is also fully connected to the other output units and all hidden units [1]. Besides that, we also propose the architecture and the learning algorithm of the recurrent neural network, namely Backpropagation Through Time (BPTT), which is well suited to this task. The aim of the study was to observe the differences among the letters of the Arabic alphabet from "alif" to "ya". The purpose of this research is to improve people's knowledge and understanding of the Arabic alphabet and words by using a recurrent neural network (RNN) with the Backpropagation Through Time (BPTT) learning algorithm. Four speakers (a mixture of male and female) were used for training in a quiet environment. Neural networks are well known as a technique with the ability to classify nonlinear problems. Today, much research has been done on applying neural networks to speech recognition [2], including Arabic. The Arabic language offers a number of challenges for speech recognition [3]. Even though positive results have been obtained from continuing studies, research on minimizing the error rate still attracts much attention. This research uses a recurrent neural network, one of the neural network techniques, to observe the differences among the letters from "alif" to "ya".
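
The sketch below is not the paper's architecture (whose output units also feed back to themselves and to each other); it is a plain Elman-style recurrent network trained with Backpropagation Through Time, included only to make the BPTT mechanics concrete. The layer sizes, sequence length, and input features are assumptions.

```python
# A minimal BPTT sketch with a plain Elman RNN (not the paper's architecture):
# the forward pass unrolls the recurrence, and the backward pass pushes the
# classification error back through every time step before updating weights.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out, T = 13, 16, 28, 20        # assumed: 13-dim frames, 28 letters
Wxh = 0.1 * rng.standard_normal((n_hid, n_in))
Whh = 0.1 * rng.standard_normal((n_hid, n_hid))
Why = 0.1 * rng.standard_normal((n_out, n_hid))
bh, by = np.zeros(n_hid), np.zeros(n_out)

def bptt_step(xs, target, lr=0.01):
    # Forward: unroll the recurrence over the whole sequence.
    hs = [np.zeros(n_hid)]
    for x in xs:
        hs.append(np.tanh(Wxh @ x + Whh @ hs[-1] + bh))
    logits = Why @ hs[-1] + by
    p = np.exp(logits - logits.max()); p /= p.sum()
    loss = -np.log(p[target])

    # Backward: propagate the error through time (BPTT).
    dlogits = p.copy(); dlogits[target] -= 1.0
    dWhy, dby = np.outer(dlogits, hs[-1]), dlogits
    dWxh, dWhh, dbh = np.zeros_like(Wxh), np.zeros_like(Whh), np.zeros_like(bh)
    dh = Why.T @ dlogits
    for t in range(len(xs), 0, -1):
        dz = dh * (1.0 - hs[t] ** 2)          # back through tanh
        dWxh += np.outer(dz, xs[t - 1])
        dWhh += np.outer(dz, hs[t - 1])
        dbh += dz
        dh = Whh.T @ dz                       # error handed to the previous step
    # In-place gradient-descent updates on the shared parameter arrays.
    for param, grad in ((Wxh, dWxh), (Whh, dWhh), (Why, dWhy), (bh, dbh), (by, dby)):
        param -= lr * grad
    return loss

xs = rng.standard_normal((T, n_in))           # one synthetic "utterance"
print(bptt_step(xs, target=3))                # cross-entropy loss for this step
```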

Design guides for enhancing finger tactile recognition of plastic icon shapes (플라스틱 아이콘 형상의 손가락 촉지각률 향상을 위한 설계 가이드)

  • Kim, Huhn;Lee, Won Y.
    • Design & Manufacturing
    • /
    • v.6 no.2
    • /
    • pp.59-63
    • /
    • 2012
  • In various industries, tactile recognition has been one of the important ways of displaying information because people like to touch and feel. In particular, how efficiently tactile information can be recognized is crucial for visually impaired persons in their daily lives. However, existing design guidelines are insufficient to ensure good tactile recognition. In this study, an experiment was performed to investigate the tactile shapes (relievo / intaglio vs. filled / unfilled), sizes, and depths suitable for efficient tactile recognition. Moreover, this study scrutinized whether recognition speed or error varied depending on the type of displayed symbol (open vs. closed) presented tactilely. The experimental results revealed that the 'relievo-filled' shape type was more rapidly recognizable than the other shapes, and that the 'closed' type symbols (e.g., ${\square}$, ${\bigcirc}$) were more robustly recognizable than the 'open' type symbols (e.g., +, ^). Several design guidelines were presented based on the results. These guidelines can be applied to the design of tactile buttons in devices that users must control without visual attention, such as car steering wheels or MP3 players.

Fast Quadtree Based Normalized Cross Correlation Method for Fractal Video Compression using FFT

  • Chaudhari, R.E.;Dhok, S.B.
    • Journal of Electrical Engineering and Technology
    • /
    • v.11 no.2
    • /
    • pp.519-528
    • /
    • 2016
  • In order to achieve fast computation with good visual quality of the output video, we propose a new frequency-domain fractal video compression scheme. Normalized cross correlation is used to find the structurally self-similar domain block for each input range block. To increase the search speed, the cross correlation is implemented in the frequency domain using the FFT, with one computation covering all domain blocks instead of individual block-wise calculations. The encoding time is further reduced by applying the rotation and reflection properties of the DFT to the IFFT of zero-padded range blocks. The energies of the overlapping small domain blocks are pre-computed for the entire reference frame, retaining the energies of the search-window portion overlapped with the previous adjacent block. Quadtree decompositions are obtained by using the domain-block motion-compensated prediction error as a threshold to control further partitioning of the block, which provides better adaptation to the scene content than a fixed-block-size approach. The results show that, on average, the proposed method raises the encoding speed by 48.8% and 90% over the NHEXS and CPM/NCIM algorithms, respectively. On average, the compression ratio and PSNR of the proposed method are higher than those of NHEXS by 15.41 and 0.89 dB, respectively. For low-bit-rate videos, the proposed algorithm achieves compression ratios above 120 with a PSNR of more than 31 dB.
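
The quadtree partitioning and the DFT rotation/reflection reuse are not reproduced here; the sketch below only illustrates the core step the abstract describes, computing a normalized cross-correlation surface for one range block against a whole reference frame with FFT-based convolutions rather than block-by-block loops. The frame and block sizes are assumptions.

```python
# A hedged sketch of FFT-based normalized cross correlation for block matching
# (no quadtree, no rotation/reflection reuse as in the paper).
import numpy as np
from scipy.signal import fftconvolve

def ncc_map(frame, block):
    """NCC of `block` against every valid position of `frame`, via FFT convolutions."""
    t = block - block.mean()                                 # zero-mean template
    corr = fftconvolve(frame, t[::-1, ::-1], mode='valid')   # correlation numerator
    ones = np.ones_like(block)
    local_sum = fftconvolve(frame, ones, mode='valid')       # windowed sums ...
    local_sq = fftconvolve(frame ** 2, ones, mode='valid')   # ... and sums of squares
    local_var = local_sq - local_sum ** 2 / block.size       # windowed energy about the mean
    denom = np.sqrt(np.clip(local_var, 1e-12, None) * (t ** 2).sum())
    return corr / denom

rng = np.random.default_rng(0)
frame = rng.random((64, 64))
block = frame[20:28, 30:38]                   # an 8x8 "range" block cut from the frame
scores = ncc_map(frame, block)
print(np.unravel_index(scores.argmax(), scores.shape))   # (20, 30): best match
```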

The Structure and Performance of Turbo decoder using Sliding-window method (슬라이딩 윈도우 방식의 터보 복호화기의 구조 및 성능)

  • 심병효;구창설;이봉운
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.3 no.1
    • /
    • pp.116-126
    • /
    • 2000
  • Turbo codes are the most exciting and potentially important development in coding theory in recent years. They were introduced in 1993 by Berrou, Glavieux, and Thitimajshima [1], who claimed near Shannon-limit error-correction performance with relatively simple component codes and large interleavers. A required Eb/N0 of 0.7 dB was reported for a BER of $10^{-5}$ at a code rate of 1/2 [1]. However, to implement a turbo code system, various important details are necessary to reproduce these results, such as AGC gain control, optimal wordlength determination, and metric rescaling. Furthermore, the memory required to implement a MAP-based turbo decoder is considerable. In this paper, we confirm the accuracy of these claims by computer simulation taking these points into account, and present an optimal wordlength for turbo decoder design. First, based on analysis and simulation of the turbo decoder, we determine an optimal wordlength for the decoder. Second, we propose a MAP decoding algorithm based on the sliding-window method, which significantly reduces the required memory. Computer simulation demonstrates that the proposed fixed-point turbo decoder operates well with negligible performance loss.
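
The decoder itself is not reproduced here. The sketch below only illustrates two points from the abstract: the max* (Jacobian logarithm) operation at the heart of a log-MAP constituent decoder, and a back-of-the-envelope comparison of backward-metric storage for full-block versus sliding-window operation. The block length, window length, state count, and metric width are assumed values, not the paper's.

```python
# A hedged illustration only, not a MAP/turbo decoder: the log-MAP max*
# operation and the memory saving that motivates the sliding-window method.
import numpy as np

def max_star(a, b):
    """log(e^a + e^b): the core operation of a log-MAP decoder."""
    return np.maximum(a, b) + np.log1p(np.exp(-np.abs(a - b)))

def beta_memory(block_len, window_len, n_states=8, bytes_per_metric=4):
    """Backward (beta) metric storage: full block versus one sliding window."""
    full = block_len * n_states * bytes_per_metric
    windowed = window_len * n_states * bytes_per_metric
    return full, windowed

print(max_star(1.2, 0.3))                          # ~1.54
print(beta_memory(block_len=5120, window_len=64))  # (163840, 2048) bytes, assumed sizes
```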

Design of Montgomery Algorithm and Hardware Architecture over Finite Fields (유한 체상의 몽고메리 알고리즘 및 하드웨어 구조 설계)

  • Kim, Kee-Won;Jeon, Jun-Cheol
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.18 no.2
    • /
    • pp.41-46
    • /
    • 2013
  • Finite field multipliers are basic building blocks in many applications such as error-control coding, cryptography, and digital signal processing. Recently, many semi-systolic architectures have been proposed for multiplication over finite fields. Montgomery multiplication is also well known as an efficient arithmetic algorithm. In this paper, we derive an efficient multiplication algorithm and propose an efficient semi-systolic Montgomery multiplier based on the polynomial basis. We select an ideal Montgomery factor suitable for parallel computation, so our architecture is divided into two parts that can be computed simultaneously. Our analysis shows that the architecture reduces the time complexity by 30%~50% compared with typical architectures.
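
As a software-level illustration only: the sketch below implements bit-serial Montgomery multiplication over GF(2^m) with the common factor R = x^m, whereas the paper selects a different Montgomery factor chosen for its parallel, semi-systolic hardware. Polynomials are represented as Python integers whose bit i is the coefficient of x^i; the field GF(2^4) and the operands are toy choices.

```python
# A minimal sketch of bit-serial Montgomery multiplication over GF(2^m) with
# R = x^m (not the paper's factor or its semi-systolic array).
def mont_mul_gf2m(a, b, f, m):
    """Return a(x) * b(x) * x^(-m) mod f(x) over GF(2)."""
    c = 0
    for i in range(m):
        if (a >> i) & 1:
            c ^= b            # add a_i * b(x); addition in GF(2) is XOR
        if c & 1:
            c ^= f            # add f(x) so the constant term cancels ...
        c >>= 1               # ... making the division by x exact
    return c

# Toy check in GF(2^4) with f(x) = x^4 + x + 1: the plain product A*B mod f
# is recovered by a second Montgomery multiplication with x^(2m) mod f.
f, m = 0b10011, 4
A, B = 0b0110, 0b1011         # x^2 + x   and   x^3 + x + 1
R2 = 0b0101                   # x^(2m) mod f(x), precomputed
T = mont_mul_gf2m(A, B, f, m)            # = A * B * x^(-m) mod f
print(bin(mont_mul_gf2m(T, R2, f, m)))   # 0b1111  ==  A * B mod f
```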