• Title/Summary/Keyword: Maximum Likelihood Detection


Reduced Complexity QRD-M Algorithm for Spatial Multiplexing MIMO-OFDM Systems (공간 다중화 MIMO-OFDM 시스템을 위한 복잡도 감소 QRD-M 알고리즘)

  • Mohaisen, Manar;An, Hong-Sun;Chang, Kyung-Hi
    • The Journal of Korean Institute of Communications and Information Sciences / v.34 no.4C / pp.460-468 / 2009
  • Multiple-input multiple-output (MIMO) technology applied with orthogonal frequency division multiplexing (OFDM) is considered the ultimate solution for increasing channel capacity without any additional spectral resources. At the receiver side, the challenge lies in designing low-complexity detection algorithms capable of separating the independent streams sent simultaneously from different antennas. In this paper, we introduce an upper-lower bounded-complexity QRD-M algorithm (ULBC QRD-M). In the proposed algorithm, we address the problem of the extremely high worst-case complexity of conventional sphere decoding by fixing the upper-bound complexity to that of the conventional QRD-M. On the other hand, ULBC QRD-M intelligently cancels all unnecessary hypotheses to achieve very low computational requirements. Analyses and simulation results show that the proposed algorithm achieves the performance of conventional QRD-M with only 26% of the required computations.
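
As a rough illustration of the breadth-first QRD-M search described in this abstract, the sketch below keeps the M best partial symbol vectors per detection layer after a QR decomposition of the channel. The system size, constellation, noise level, and M are assumed for the example; the ULBC complexity-bounding rules of the paper are not reproduced.

```python
# Minimal QRD-M (M-algorithm) detector sketch for a small spatial-multiplexing
# MIMO system.  Illustrative only: not the ULBC QRD-M of the paper.
import numpy as np

def qrdm_detect(H, y, constellation, M=4):
    """Keep at most M best partial symbol vectors per detection layer."""
    nt = H.shape[1]
    Q, R = np.linalg.qr(H)
    z = Q.conj().T @ y
    survivors = [(0.0, [])]                      # (accumulated metric, symbols of later layers)
    for layer in range(nt - 1, -1, -1):
        candidates = []
        for metric, symbols in survivors:
            for s in constellation:
                partial = [s] + symbols          # symbols for layers layer .. nt-1
                interf = sum(R[layer, layer + 1 + k] * partial[1 + k]
                             for k in range(len(symbols)))
                inc = abs(z[layer] - R[layer, layer] * s - interf) ** 2
                candidates.append((metric + inc, partial))
        candidates.sort(key=lambda c: c[0])
        survivors = candidates[:M]               # breadth-first pruning to M branches
    return np.array(survivors[0][1])

# Toy usage: 4x4 MIMO with QPSK (assumed parameters)
rng = np.random.default_rng(0)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
x = rng.choice(qpsk, 4)
y = H @ x + 0.05 * (rng.standard_normal(4) + 1j * rng.standard_normal(4))
print("detected:", qrdm_detect(H, y, qpsk, M=4))
print("sent:    ", x)
```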

A Design of Viterbi Decoder by State Transition Double Detection Method for Mobile Communication (상태천이 이중검색방식의 이동통신용 Viterbi 디코더 설계)

  • 김용노;이상곤;정은택;류흥균
    • The Journal of Korean Institute of Communications and Information Sciences / v.19 no.4 / pp.712-720 / 1994
  • In digital mobile communication systems, convolutional coding is considered an optimum error-correcting scheme. Recently, the Viterbi algorithm has been widely used for decoding convolutional codes. Most Viterbi decoders have been proposed for code rate R=1/2 or 2/3 with fewer than 3 memory elements (m), which degrades the error-correcting capability because of the small constraint length (K). We consider a design method for the typical code rate R=1/2, K=7 (171,133) convolutional code with m=6 memory elements. In this paper, a novel construction method is presented which combines maximum likelihood decoding with a state-transition double detection and comparison method. The designed circuit can correct random 2-bit errors. Logic simulation shows that the proposed Viterbi decoder exactly corrects 1-bit and 2-bit error patterns.
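
For reference, a compact software model of hard-decision Viterbi decoding for the same R=1/2, K=7 (171,133) code is sketched below; it is a generic textbook decoder, not the state-transition double-detection hardware design of the paper, and the test message and error positions are assumptions.

```python
# Hard-decision Viterbi decoder sketch for the rate-1/2, K=7 (171,133) code.
G1, G2, K = 0o171, 0o133, 7
NSTATES = 1 << (K - 1)

def parity(x):
    return bin(x).count("1") & 1

def encode(bits):
    state, out = 0, []
    for b in bits:
        full = (b << (K - 1)) | state
        out += [parity(full & G1), parity(full & G2)]
        state = full >> 1
    return out

def viterbi_decode(rx):
    INF = float("inf")
    metrics = [0.0] + [INF] * (NSTATES - 1)
    decisions = []                               # per stage: (predecessor, input bit) per state
    for i in range(0, len(rx), 2):
        new_metrics, step = [INF] * NSTATES, [None] * NSTATES
        for s in range(NSTATES):
            if metrics[s] == INF:
                continue
            for b in (0, 1):
                full = (b << (K - 1)) | s
                expect = (parity(full & G1), parity(full & G2))
                m = metrics[s] + (expect[0] != rx[i]) + (expect[1] != rx[i + 1])
                ns = full >> 1
                if m < new_metrics[ns]:          # add-compare-select
                    new_metrics[ns], step[ns] = m, (s, b)
        metrics = new_metrics
        decisions.append(step)
    s = min(range(NSTATES), key=lambda k: metrics[k])
    bits = []
    for step in reversed(decisions):             # trace back along the survivor path
        s, b = step[s]
        bits.append(b)
    return bits[::-1]

msg = [1, 0, 1, 1, 0, 0, 1, 0] + [0] * (K - 1)   # tail bits flush the encoder
coded = encode(msg)
coded[3] ^= 1; coded[10] ^= 1                    # inject two channel bit errors
print(viterbi_decode(coded) == msg)              # expected: True
```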


Systolic Arrays for Lattice-Reduction-Aided MIMO Detection

  • Wang, Ni-Chun;Biglieri, Ezio;Yao, Kung
    • Journal of Communications and Networks / v.13 no.5 / pp.481-493 / 2011
  • Multiple-input multiple-output (MIMO) technology provides high data rates and enhanced quality of service for wireless communications. Since the benefits of MIMO come at the cost of a heavy computational load in detectors, the design of low-complexity suboptimum receivers is currently an active area of research. Lattice-reduction-aided detection (LRAD) has been shown to be an effective low-complexity method with near-maximum-likelihood performance. In this paper, we advocate the use of systolic array architectures for MIMO receivers, and in particular we exhibit one of them based on LRAD. The Lenstra-Lenstra-Lovász (LLL) lattice reduction algorithm and the ensuing linear detections or successive spatial-interference cancellations can be located in the same array, which is considerably hardware-efficient. Since the conventional form of the LLL algorithm is not immediately suitable for parallel processing, two modified LLL algorithms are considered here for the systolic array. The LLL algorithm with full-size reduction is one of the versions more suitable for parallel processing. Another variant is the all-swap lattice-reduction (ASLR) algorithm for complex-valued lattices, which processes all lattice basis vectors simultaneously within one iteration. Our novel systolic array can operate both algorithms with different external logic controls. In order to simplify the systolic array design, we replace the Lovász condition in the definition of an LLL-reduced lattice with the looser Siegel condition. Simulation results show that for LR-aided linear detections, the bit-error-rate performance is still maintained with this relaxation. Comparisons between the two algorithms in terms of bit-error-rate performance and average field-programmable gate array processing time in the systolic array show that ASLR is a better choice for a systolic architecture, especially for systems with a large number of antennas.
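
The Siegel relaxation mentioned in this abstract can be sketched in a few lines: after size reduction, a column swap is triggered only when the new Gram-Schmidt vector is shorter than (delta - 1/4) times the previous one, instead of testing the full Lovász condition. The real-valued, sequential sketch below is illustrative only; the paper's systolic full-size-reduction LLL and all-swap (ASLR) variants are not reproduced, and delta and the test basis are assumptions.

```python
# Minimal real-valued LLL lattice reduction with the looser Siegel swap test.
import numpy as np

def gram_schmidt(B):
    """Gram-Schmidt orthogonalization: GSO vectors (columns) and mu coefficients."""
    n = B.shape[1]
    Bs, mu = B.astype(float).copy(), np.eye(n)
    for k in range(n):
        for j in range(k):
            mu[k, j] = B[:, k] @ Bs[:, j] / (Bs[:, j] @ Bs[:, j])
            Bs[:, k] -= mu[k, j] * Bs[:, j]
    return Bs, mu

def lll_siegel(B, delta=0.75):
    B = B.astype(float).copy()
    n, k = B.shape[1], 1
    while k < n:
        Bs, mu = gram_schmidt(B)
        for j in range(k - 1, -1, -1):           # size reduction of column k
            q = round(mu[k, j])
            if q != 0:
                B[:, k] -= q * B[:, j]
                Bs, mu = gram_schmidt(B)
        # Siegel condition (looser than Lovász): swap only if the new GSO
        # vector is much shorter than the previous one
        if Bs[:, k] @ Bs[:, k] >= (delta - 0.25) * (Bs[:, k - 1] @ Bs[:, k - 1]):
            k += 1
        else:
            B[:, [k - 1, k]] = B[:, [k, k - 1]]
            k = max(k - 1, 1)
    return B

# Toy usage: reduce a random 4x4 integer basis (assumed input)
rng = np.random.default_rng(1)
print(lll_siegel(rng.integers(-9, 10, size=(4, 4)).astype(float)))
```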

New Decision Rules for UWB Synchronization (UWB 동기화를 위한 새로운 결정 법칙들)

  • Chong, Da-Hae;Lee, Young-Yoon;Ahn, Sang-Ho;Lee, Eui-Hyoung;Yoo, Seung-Hwan;Yoon, Seok-Ho
    • The Journal of Korean Institute of Communications and Information Sciences / v.33 no.2C / pp.192-199 / 2008
  • In ultra-wideband (UWB) systems, synchronization conventionally means aligning the time phase of a locally generated template with that of any multipath component to within an allowable range. However, synchronizing with a low-power multipath component can incur significant performance degradation in receiver operations (e.g., detection) after synchronization, whereas synchronizing with a high-power multipath component can improve them. Generally, the first multipath component has the largest power, so synchronizing with the first path component yields better post-synchronization receiver performance than synchronizing with a low-power component. Based on this observation, we first propose an optimal decision rule based on a maximum likelihood (ML) approach, and then develop a simpler suboptimal decision rule for selecting the first path component. Simulation results show that the system using the new synchronization definition has good demodulation performance, and that the proposed decision rules outperform the conventional decision rule in UWB multipath channels.
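
The core idea, locking to the first rather than the strongest multipath component, can be illustrated with a simple threshold rule on a delay profile. The sketch below is a generic stand-in: the paper's ML-optimal and suboptimal decision rules are not reproduced, and the threshold factor and toy profile are assumptions.

```python
# Illustrative first-path versus strongest-path selection on a delay profile.
import numpy as np

def first_path_delay(profile, factor=0.5):
    """Earliest delay whose energy exceeds factor * (maximum energy)."""
    candidates = np.flatnonzero(profile >= factor * profile.max())
    return int(candidates[0])

def strongest_path_delay(profile):
    return int(np.argmax(profile))

# Toy channel: the first path (delay 3) is weaker than a later path (delay 7)
profile = np.array([0.0, 0.0, 0.0, 0.6, 0.1, 0.2, 0.05, 1.0, 0.3, 0.1])
print(first_path_delay(profile), strongest_path_delay(profile))   # 3 7
```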

New Hierarchical Modulation Scheme Using a Constellation Rotation Method (성상회전 변조기법을 이용한 새로운 계층변조 기법)

  • Kim, Hojun;Shang, Yulong;Park, Jaehyung;Jung, Taejin
    • The Journal of Korean Institute of Communications and Information Sciences / v.41 no.1 / pp.66-76 / 2016
  • In this paper, we propose a new hierarchical modulation scheme for DVB-NGH that improves the performance of the LP (Low-Priority) signals by applying a conventional constellation-rotation method to the LP signals, with virtually no loss of performance for the HP (High-Priority) signals. The improvement of the LP signals is mainly due to the increased diversity gain provided by the constellation rotation, which barely affects the performance of the HP signals. For the new scheme, we also propose a hardware-efficient ML (Maximum-Likelihood) detection algorithm that first decodes the HP signals using a conventional HP receiver, and then simply decodes the precoded LP signals based on the pre-detected HP signals.
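
A generic two-step detector for such a superimposed, rotated constellation might look like the sketch below: the HP symbol is decided first with an ordinary nearest-point rule, and the LP symbol is then decided by ML given that HP decision. The amplitudes, rotation angle, and mapping are assumptions for illustration and do not reproduce the DVB-NGH scheme of the paper.

```python
# Two-step HP-then-LP detection for a rotated, superimposed QPSK-on-QPSK constellation.
import numpy as np

qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
alpha, theta = 0.4, np.pi / 8                 # assumed LP power scale and rotation
lp_rot = alpha * qpsk * np.exp(1j * theta)    # rotated LP sub-constellation

def modulate(hp_idx, lp_idx):
    return qpsk[hp_idx] + lp_rot[lp_idx]

def detect(r):
    hp_hat = int(np.argmin(np.abs(r - qpsk)))                   # step 1: conventional HP decision
    lp_hat = int(np.argmin(np.abs(r - qpsk[hp_hat] - lp_rot)))  # step 2: ML on LP given HP
    return hp_hat, lp_hat

rng = np.random.default_rng(2)
hp, lp = 2, 1
r = modulate(hp, lp) + 0.05 * (rng.standard_normal() + 1j * rng.standard_normal())
print(detect(r) == (hp, lp))                  # expected: True at this noise level
```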

Signal Detection with Sphere Decoding Algorithm at MIMO Channel (MIMO채널에서 Sphere Decoding 알고리즘을 이용한 신호검파)

  • An, Jin-Young;Kang, Yun-Jeong;Kim, Sang-Choon
    • Journal of the Korea Institute of Information and Communication Engineering / v.13 no.10 / pp.2197-2204 / 2009
  • In this paper, we analyze the performance of the sphere decoding (SD) algorithm in a MIMO system. The BER performance of this algorithm is the same as that of the ML receiver, but the computational complexity of the SD algorithm is much less than that of the ML receiver. The independent signals from each transmit antenna are modulated using QPSK and 16QAM in a richly scattered Rayleigh flat-fading channel. The received signals are detected using the Fincke-Pohst SD algorithm, and the BER of the algorithm is compared with those of the ZF, MMSE, SIC, and ML receivers. We also investigate the Viterbo-Boutros SD algorithm, a modified SD algorithm, and comparatively study the BER performance and floating-point operation counts of the two algorithms.
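
A depth-first sphere-decoding sketch in the spirit of the Fincke-Pohst search is given below: partial candidates whose accumulated distance already exceeds the best complete candidate found so far (the shrinking sphere radius) are pruned, so the result matches exhaustive ML detection at lower average cost. The sizes, alphabet, and noise level are assumptions.

```python
# Depth-first sphere decoder over a finite constellation (pruned ML search).
import numpy as np

def sphere_decode(H, y, constellation):
    nt = H.shape[1]
    Q, R = np.linalg.qr(H)
    z = Q.conj().T @ y
    best = {"metric": np.inf, "x": None}

    def search(layer, partial, metric):
        if metric >= best["metric"]:            # prune: outside the current sphere
            return
        if layer < 0:
            best["metric"], best["x"] = metric, list(partial)
            return
        for s in constellation:
            interf = sum(R[layer, layer + 1 + k] * partial[k]
                         for k in range(len(partial)))
            inc = abs(z[layer] - R[layer, layer] * s - interf) ** 2
            search(layer - 1, [s] + partial, metric + inc)

    search(nt - 1, [], 0.0)
    return np.array(best["x"])

# Toy usage: 4x4 MIMO with QPSK (assumed parameters)
rng = np.random.default_rng(3)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
x = rng.choice(qpsk, 4)
y = H @ x + 0.1 * (rng.standard_normal(4) + 1j * rng.standard_normal(4))
print("detected:", sphere_decode(H, y, qpsk))
print("sent:    ", x)
```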

The Study for Performance Analysis of Software Reliability Model using Fault Detection Rate based on Logarithmic and Exponential Type (로그 및 지수형 결함 발생률에 따른 소프트웨어 신뢰성 모형에 관한 신뢰도 성능분석 연구)

  • Kim, Hee-Cheul;Shin, Hyun-Cheul
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.9 no.3 / pp.306-311 / 2016
  • Software reliability in the software development process is an important issue. Infinite-failure NHPP software reliability models presented in the literature exhibit either constant, monotonically increasing, or monotonically decreasing failure occurrence rates per fault. In this paper, a software reliability cost model considering logarithmic and exponential fault detection rates, based on observations from the software product testing process, is studied. The widely used Goel-Okumoto model is extended with a probability of introducing new faults when the software is corrected or modified, yielding a finite-failure non-homogeneous Poisson process model. To analyze the software reliability model with a time-dependent fault detection rate, the parameters were estimated by maximum likelihood estimation from inter-failure time data. It is confirmed that the logarithmic and exponential fault detection models are also efficient in terms of reliability (the coefficient of determination is 80% or more) and can be used as alternatives to the conventional model. Based on this work, software developers should consider the lifetime distribution, using prior knowledge of the software, to help identify failure modes.
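
As a small illustration of the maximum likelihood estimation step mentioned above, the sketch below fits the Goel-Okumoto NHPP mean value function m(t) = a(1 - e^(-bt)) to failure-time data. The data are synthetic, and the sketch does not implement the paper's logarithmic/exponential detection-rate models.

```python
# MLE sketch for the Goel-Okumoto NHPP model from failure times observed on (0, T].
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, times, T):
    a, b = np.exp(params)                       # log-parameterization keeps a, b > 0
    lam = a * b * np.exp(-b * times)            # NHPP intensity at each failure time
    return -(np.sum(np.log(lam)) - a * (1 - np.exp(-b * T)))

times = np.array([3., 8., 15., 25., 40., 55., 75., 100., 140., 190.])   # synthetic data
T = 200.0
res = minimize(neg_log_likelihood, x0=np.log([15.0, 0.01]),
               args=(times, T), method="Nelder-Mead")
a_hat, b_hat = np.exp(res.x)
print(f"a = {a_hat:.2f}, b = {b_hat:.4f}")
```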

Trace-Back Viterbi Decoder with Sequential State Transition Control (순서적 역방향 상태천이 제어에 의한 역추적 비터비 디코더)

  • 정차근
    • Journal of the Institute of Electronics Engineers of Korea TC / v.40 no.11 / pp.51-62 / 2003
  • This paper presents novel survivor memory management and decoding techniques with sequential backward state-transition control in the trace-back Viterbi decoder. The Viterbi algorithm is a maximum likelihood decoding scheme that estimates the likelihood of the encoder state for channel error detection and correction. The scheme is applied to a broad range of digital communication problems, such as intersymbol interference removal and channel equalization. To achieve an area-efficient, high-throughput VLSI design of the Viterbi decoder, whose operation is inherently recursive, further research is required on simple, systematic parallel ACS architectures and survivor memory management. As a solution to this problem, this paper presents a progressive decoding algorithm with sequential backward state-transition control in the trace-back Viterbi decoder. Compared to conventional trace-back decoding techniques, the total required memory can be greatly reduced with the proposed method. Furthermore, the proposed method can be implemented as a simple pipelined, systolic-array-type architecture. No peripheral logic circuit for memory access control is required, and the memory access bandwidth can be reduced. Therefore, the proposed method offers high area efficiency and low power consumption with high throughput. Finally, decoding results for received data with channel noise and an application example are provided to evaluate the efficiency of the proposed method.
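
The memory-saving idea behind trace-back decoding is that the forward add-compare-select pass stores only one decision bit per state per stage, and the decoder then walks the state transitions backwards to recover the input bits. The sketch below illustrates this with a small K=3, rate-1/2 (7,5) code; it is a generic software model, not the paper's sequential-control survivor memory architecture.

```python
# One-bit-per-state survivor memory with backward trace (trace-back Viterbi).
K, G = 3, (0o7, 0o5)
NSTATES = 1 << (K - 1)
MASK = (1 << (K - 2)) - 1

def parity(x):
    return bin(x).count("1") & 1

def encode(bits):
    state, out = 0, []
    for b in bits:
        full = (b << (K - 1)) | state
        out += [parity(full & G[0]), parity(full & G[1])]
        state = full >> 1
    return out

def acs_forward(rx):
    """Forward pass: path metrics plus one decision bit per state and stage."""
    INF = float("inf")
    metrics = [0.0] + [INF] * (NSTATES - 1)
    survivor_mem = []
    for i in range(0, len(rx), 2):
        new_metrics, word = [INF] * NSTATES, [0] * NSTATES
        for ns in range(NSTATES):
            b = ns >> (K - 2)                    # input bit that leads into state ns
            for d in (0, 1):                     # d = oldest bit of the candidate predecessor
                p = ((ns & MASK) << 1) | d
                if metrics[p] == INF:
                    continue
                full = (b << (K - 1)) | p
                m = metrics[p] + (parity(full & G[0]) != rx[i]) \
                               + (parity(full & G[1]) != rx[i + 1])
                if m < new_metrics[ns]:
                    new_metrics[ns], word[ns] = m, d
        metrics = new_metrics
        survivor_mem.append(word)
    return metrics, survivor_mem

def trace_back(metrics, survivor_mem):
    s = min(range(NSTATES), key=lambda k: metrics[k])
    bits = []
    for word in reversed(survivor_mem):          # sequential backward state transitions
        bits.append(s >> (K - 2))
        s = ((s & MASK) << 1) | word[s]
    return bits[::-1]

msg = [1, 1, 0, 1, 0, 0] + [0] * (K - 1)         # tail bits flush the encoder
coded = encode(msg)
coded[2] ^= 1                                    # single channel bit error
print(trace_back(*acs_forward(coded)) == msg)    # expected: True
```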

Performance Enhancement by Scaling Soft Bit Information of APSK (APSK 변조 방식에 대한 연판정 출력의 스케일링을 통한 성능 개선)

  • Zhang, Meixiang;Kim, Sooyoung
    • The Journal of Korean Institute of Communications and Information Sciences / v.38C no.10 / pp.858-866 / 2013
  • In DVB-S2, the technical specification for second-generation digital video broadcasting via satellite, an APSK modulation scheme along with LDPC coding schemes is defined. APSK is a multi-level PSK modulation scheme, and decoding of the LDPC-coded signal requires soft-decision information. Therefore, the APSK demodulator at the receiver should be capable of estimating soft information. In this paper, we introduce a method to estimate soft information using simple distance estimation, and show that this method overestimates the soft information. The overestimated soft information in turn leads to performance degradation. To overcome this problem, we propose a scaling method to improve the performance at the receiver. In addition, we show that the proposed scaling scheme enables us to estimate the soft information with linear-order complexity and produces performance close to that of maximum likelihood detection.
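
A bare-bones version of the distance-based soft-bit computation and a multiplicative scaling of the resulting values is sketched below for a 16-APSK constellation. The ring radii, bit labeling, and scaling factor are illustrative assumptions, not the DVB-S2 mapping or the scaling rule proposed in the paper.

```python
# Max-log soft-bit (LLR) computation for 16-APSK, followed by scaling.
import numpy as np

r1, r2 = 1.0, 2.5                                # assumed 4+12 APSK ring radii
inner = r1 * np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))
outer = r2 * np.exp(1j * (np.pi / 12 + np.pi / 6 * np.arange(12)))
points = np.concatenate([inner, outer])
labels = np.arange(16)                           # assumed 4-bit labels 0..15

def soft_bits(r, noise_var, scale=1.0):
    """Return four scaled max-log LLRs for one received 16-APSK sample."""
    d2 = np.abs(r - points) ** 2
    llrs = []
    for bit in range(4):
        mask = (labels >> bit) & 1
        llrs.append(scale * (d2[mask == 1].min() - d2[mask == 0].min()) / noise_var)
    return np.array(llrs)

rng = np.random.default_rng(4)
noise_var = 0.2
r = points[9] + np.sqrt(noise_var / 2) * (rng.standard_normal() + 1j * rng.standard_normal())
print(soft_bits(r, noise_var))                   # raw max-log soft bits
print(soft_bits(r, noise_var, scale=0.75))       # same soft bits after down-scaling
```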

Estimation of Motion-Blur Parameters Based on a Stochastic Peak Trace Algorithm (통계적 극점 자취 알고리즘에 기초한 움직임 열화 영상의 파라메터 추출)

  • 최병철;홍훈섭;강문기
    • Journal of Broadcast Engineering / v.5 no.2 / pp.281-289 / 2000
  • While acquiring images, relative motion between the imaging device and the object scene seriously damages the image quality. This phenomenon is called motion blur. The peak-trace approach, our recent previous work, identifies important parameters characterizing the point spread function (PSF) of the blur, given only the blurred image itself. With the peak-trace approach, the direction of the motion blur can be extracted regardless of noise corruption and without much processing time. In this paper, stochastic peak-trace approaches are introduced. Erroneous data can be identified through ML classification and suppressed through weighting, so distortion of the direction estimate in the low-frequency region can be prevented. Using a linear prediction method, irregular data are prevented from being selected as the peak point. Detection of the second peak using the proposed moving average least mean (MALM) method is used to identify the motion extent. The MALM method itself includes a noise removal process, so it is possible to extract the parameters even in an environment of heavy noise. In the experiment, we could efficiently restore the degraded image using the information obtained by the proposed algorithm.
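
A generic, much simpler relative of the blur-extent estimation step is the classical cepstral method: a linear motion blur places periodic nulls in the spectrum of the blurred signal, which appear as a negative cepstral peak at a lag equal to the blur length. The 1-D sketch below is only this textbook illustration, with a synthetic signal and blur length; it is not the authors' stochastic peak-trace or MALM method.

```python
# Estimating a 1-D motion-blur extent from the cepstrum of the blurred signal.
import numpy as np

rng = np.random.default_rng(5)
L = 9                                            # true blur extent in samples (assumed)
signal = rng.standard_normal(4096)
blurred = np.convolve(signal, np.ones(L) / L, mode="same")

spectrum = np.abs(np.fft.rfft(blurred)) + 1e-12  # avoid log(0) at spectral nulls
cepstrum = np.fft.irfft(np.log(spectrum))
lag = 2 + np.argmin(cepstrum[2:64])              # strongest negative peak at small lags
print("estimated blur extent:", int(lag), "true:", L)
```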
