• Title/Summary/Keyword: Error sum

Search Results: 492

Performance Analysis of Amplify-and-Forward Two-Way Relaying with Antenna Correlation

  • Fan, Zhangjun;Xu, Kun;Zhang, Bangning;Pan, Xiaofei
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.6 no.6
    • /
    • pp.1606-1626
    • /
    • 2012
  • This paper investigates the performance of an amplify-and-forward (AF) two-way relaying system with antenna correlation. The system consists of two multiple-antenna sources, which exchange information via the aid of a single-antenna relay. In particular, we derive the exact outage probability expression. Furthermore, we provide a simple, tight closed-form lower bound for the outage probability. Based on the lower bound, we obtain the closed-form asymptotic outage probability and the average symbol error rate expressions at high signal-to-noise ratio (SNR), which reveal the system's diversity order and coding gain with antenna correlation. To investigate the system's throughput performance with antenna correlation, we also derive a closed-form lower bound for the average sum-rate, which is quite tight from medium to high SNR regime. The analytical results readily enable us to obtain insight into the effect of antenna correlation on the system's performance. Extensive Monte Carlo simulations are conducted to verify the analytical results.
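As a minimal illustration of how such analytical outage expressions are typically verified by Monte Carlo simulation, the sketch below estimates the outage probability of a single Rayleigh-fading link and compares it with the closed form 1 − exp(−γth/γ̄). The single-link setup, the SNR values, and the function name are illustrative assumptions, not the paper's two-way relay model with antenna correlation.

```python
import math
import random

def outage_probability_mc(avg_snr, threshold, trials=200_000, seed=1):
    """Monte Carlo estimate of the outage probability of a single
    Rayleigh-fading link: the instantaneous SNR is exponentially
    distributed with mean avg_snr, and an outage occurs whenever it
    falls below threshold."""
    rng = random.Random(seed)
    outages = sum(1 for _ in range(trials)
                  if rng.expovariate(1.0 / avg_snr) < threshold)
    return outages / trials

# Closed-form outage probability of the same link: 1 - exp(-threshold/avg_snr)
analytic = 1.0 - math.exp(-2.0 / 10.0)
estimate = outage_probability_mc(10.0, 2.0)
```

With 200,000 trials the empirical estimate agrees with the closed form to within about one percentage point, which is the same style of cross-check the paper performs for its far more involved expressions.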

A Kinetic Monte Carlo Simulation of Individual Site Type of Ethylene and α-Olefins Polymerization

  • Zarand, S.M. Ghafelebashi;Shahsavar, S.;Jozaghkar, M.R.
    • Journal of the Korean Chemical Society
    • /
    • v.62 no.3
    • /
    • pp.191-202
    • /
    • 2018
  • The aim of this work is to study Monte Carlo simulation of ethylene (co)polymerization over a Ziegler-Natta catalyst, as investigated by Chen et al. The results revealed that the Monte Carlo simulation was similar to the sum square error (SSE) model in predicting stages II and III of polymerization. In the activation stage (stage I), both models deviated slightly from the experimental results. The modeling results demonstrated that for homopolymerization, SSE was superior in predicting the polymerization rate in this stage, while for copolymerization, Monte Carlo gave the better prediction. The Monte Carlo simulation confirmed the SSE results on the role of each site in the total polymerization rate and revealed that the homopolymerization rate changed from site to site, with the ordering of centers differing from that in copolymerization. The polymer yield was reduced by increasing the amount of hydrogen, although there was no specific effect on the uptake curve, which the Monte Carlo simulation predicted with good accuracy. For copolymerization, it emerged that comonomer chain length and monomer concentration influenced the polymerization rate: the rate decreased from 1-hexene to 1-octene and increased as the monomer concentration grew.
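The kinetic Monte Carlo idea underlying such simulations can be sketched in a few lines: a single active site repeatedly chooses between propagation and termination in proportion to their rates. The single-site picture, the rate values, and the function name below are illustrative assumptions, not the paper's multi-site Ziegler-Natta model.

```python
import random

def kmc_chain_length(k_prop, k_term, seed=0):
    """Minimal kinetic Monte Carlo sketch: a single growing chain either
    adds a monomer (rate k_prop) or terminates (rate k_term); each event
    is drawn in proportion to the rates. Returns the final chain length."""
    rng = random.Random(seed)
    length = 0
    while True:
        if rng.random() < k_prop / (k_prop + k_term):
            length += 1          # propagation: chain grows by one monomer
        else:
            return length        # termination: chain stops growing

# The chain length is geometric, so the mean length is k_prop / k_term = 99
mean_len = sum(kmc_chain_length(99.0, 1.0, seed=s) for s in range(2000)) / 2000
```

Averaging many such chains recovers the expected chain length, which is how a KMC run is validated against a rate-equation model like SSE.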

High-Performance and Low-Complexity Decoding of High-Weight LDPC Codes (높은 무게 LDPC 부호의 저복잡도 고성능 복호 알고리즘)

  • Cho, Jun-Ho;Sung, Won-Yong
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.34 no.5C
    • /
    • pp.498-504
    • /
    • 2009
  • A high-performance, low-complexity decoding algorithm for LDPC codes is proposed in this paper, which combines the advantages of the bit-flipping (BF) algorithm and the sum-product algorithm (SPA). The proposed soft bit-flipping algorithm requires only simple comparison and addition operations to compute the messages between bit and check nodes, and the number of those operations is also small. By increasing the utilization ratio of the computed messages and by adopting nonuniform quantization, the signal-to-noise ratio (SNR) gap to the SPA is reduced to 0.4 dB at a frame error rate of 10⁻⁴ with only 5-bit quantization. LDPC codes with high column or row weights, which are unsuitable for SPA decoding due to its complexity, can be practically implemented without significantly degrading the error performance.
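A hard-decision bit-flipping decoder, the starting point on which the paper's soft bit-flipping algorithm builds, can be sketched as follows. The toy (7,4) Hamming parity-check matrix stands in for an LDPC code, and the paper's soft reliability updates and nonuniform quantization are not reproduced here.

```python
def bit_flip_decode(H, received, max_iters=20):
    """Hard-decision bit-flipping sketch: repeatedly flip the bit involved
    in the largest number of unsatisfied parity checks until the syndrome
    is all-zero. H is a list of parity-check rows over GF(2)."""
    word = list(received)
    n = len(word)
    for _ in range(max_iters):
        syndrome = [sum(h * b for h, b in zip(row, word)) % 2 for row in H]
        if not any(syndrome):
            return word                       # valid codeword reached
        counts = [sum(s for s, row in zip(syndrome, H) if row[j])
                  for j in range(n)]
        word[counts.index(max(counts))] ^= 1  # flip the most suspicious bit
    return None                               # decoding failure

# Parity-check matrix of the (7,4) Hamming code (a toy stand-in for LDPC)
H = [[1, 1, 1, 0, 1, 0, 0],
     [1, 1, 0, 1, 0, 1, 0],
     [1, 0, 1, 1, 0, 0, 1]]
decoded = bit_flip_decode(H, [1, 0, 0, 0, 0, 0, 0])  # single error in bit 0
```

Only comparisons and additions appear in the loop, which is why bit-flipping variants are attractive for codes whose high check-node degrees make the SPA expensive.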

A study on the approximation function for pairs of primes with difference 10 between consecutive primes (연속하는 두 소수의 차가 10인 소수 쌍에 대한 근사 함수에 대한 연구)

  • Lee, Heon-Soo
    • Journal of Internet of Things and Convergence
    • /
    • v.6 no.4
    • /
    • pp.49-57
    • /
    • 2020
  • In this paper, I provide an approximation function Li*_{2,10}(x), based on the logarithmic integral, for the counting function π*_{2,10}(x) of consecutive deca primes (pairs of consecutive primes differing by 10). Several personal computers and Mathematica were used to validate the approximation function Li*_{2,10}(x). I computed the actual values of π*_{2,10}(x) and the approximate values of Li*_{2,10}(x) for various x ≤ 10¹¹. According to these calculations, most of the error rates stay within a margin of 0.005%. I also prove that the sum C_{2,10}(∞) of the reciprocals of all consecutive primes with difference 10 is finite. To estimate C_{2,10}(∞), I computed the partial sums C_{2,10}(x) over all consecutive deca primes for various x ≤ 10¹¹ and estimate that C_{2,10}(∞) probably lies in the range 0.4176 ± 2.1×10⁻³.
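Both the counting function π*_{2,10}(x) and the reciprocal sum C_{2,10}(x) can be computed directly for small x. The sketch below (function names are illustrative) uses a simple sieve to list consecutive prime pairs with difference 10 up to 1000; the first such pair is (139, 149).

```python
import math

def primes_up_to(n):
    """Simple sieve of Eratosthenes returning all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[:2] = b"\x00\x00"
    for i in range(2, math.isqrt(n) + 1):
        if sieve[i]:
            sieve[i * i :: i] = b"\x00" * len(range(i * i, n + 1, i))
    return [i for i, flag in enumerate(sieve) if flag]

def consecutive_deca_primes(n):
    """Pairs (p, q) of *consecutive* primes with q - p == 10."""
    ps = primes_up_to(n)
    return [(p, q) for p, q in zip(ps, ps[1:]) if q - p == 10]

pairs = consecutive_deca_primes(1000)
count = len(pairs)                                    # pi*_{2,10}(1000)
recip_sum = sum(1.0 / p + 1.0 / q for p, q in pairs)  # C_{2,10}(1000)
```

Pushing n toward 10¹¹, as the paper does, mainly requires a segmented sieve; the counting and summing logic stays the same.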

Novel Motion Estimation Technique Based Error-Resilient Video Coding (새로운 움직임 예측기법 기반의 에러 내성이 있는 영상 부호화)

  • Hwang, Min-Cheol;Kim, Jun-Hyung;Ko, Sung-Jea
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.46 no.4
    • /
    • pp.108-115
    • /
    • 2009
  • In this paper, we propose a novel true-motion estimation technique that supports efficient frame error concealment for error-resilient video coding. In general, it is important to accurately obtain the true motion of objects in video sequences in order to effectively recover a frame corrupted by transmission errors. However, the conventional motion estimation (ME) technique, which minimizes the sum of absolute differences (SAD) between pixels of the current block and the motion-compensated block, does not always reflect the true movement of objects. To solve this problem, we introduce a new metric called the absolute difference of motion vectors (ADMV), the distance between the motion vectors of the current block and its motion-compensated block. The proposed ME method prevents unreliable motion vectors by minimizing a weighted combination of SAD and ADMV. In addition, it can significantly improve the performance of error concealment at the decoder, since error concealment using the ADMV can effectively recover a missing motion vector without any information from the lost frame. Experimental results show that the proposed method provides coding efficiency similar to the conventional ME method and outperforms the existing error-resilient method.
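The proposed cost can be sketched as an exhaustive search minimizing SAD plus a weighted motion-vector penalty. In the simplification below, the ADMV term is the L1 distance to a predicted motion vector rather than to the motion vector of the motion-compensated block, and the synthetic frames, weight, and function names are all illustrative assumptions.

```python
def motion_estimate(cur, ref, bx, by, bs, pred_mv, search=2, lam=2.0):
    """Exhaustive-search sketch: choose the motion vector minimizing
    SAD + lam * ADMV. Here ADMV is simplified to the L1 distance between
    the candidate vector and a predicted motion vector pred_mv."""
    best, best_cost = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            sad = sum(abs(cur[by + j][bx + i] - ref[by + dy + j][bx + dx + i])
                      for j in range(bs) for i in range(bs))
            admv = abs(dx - pred_mv[0]) + abs(dy - pred_mv[1])
            cost = sad + lam * admv
            if cost < best_cost:
                best, best_cost = (dx, dy), cost
    return best

# Synthetic 8x8 frames: cur is ref shifted right by one pixel,
# so the true motion vector of any interior block is (-1, 0).
ref = [[(3 * x + 5 * y) % 17 for x in range(8)] for y in range(8)]
cur = [[ref[y][max(x - 1, 0)] for x in range(8)] for y in range(8)]
mv = motion_estimate(cur, ref, bx=3, by=3, bs=2, pred_mv=(0, 0))
```

With a small weight `lam`, a clean SAD match still wins; raising `lam` pulls the estimate toward the predictor, which is the mechanism that suppresses unreliable vectors.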

Performance analysis and hardware design of LDPC Decoder for WiMAX using INMS algorithm (INMS 복호 알고리듬을 적용한 WiMAX용 LDPC 복호기의 성능분석 및 하드웨어 설계)

  • Seo, Jin-Ho;Shin, Kyung-Wook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2012.10a
    • /
    • pp.229-232
    • /
    • 2012
  • This paper describes the performance evaluation, using fixed-point Matlab modeling and simulation, and the hardware design of an LDPC decoder based on the Improved Normalized Min-Sum (INMS) decoding algorithm. The designed LDPC decoder supports the 19 block lengths (576–2304) and 6 code rates (1/2, 2/3A, 2/3B, 3/4A, 3/4B, 5/6) of the IEEE 802.16e mobile WiMAX standard. Considering hardware complexity, it is designed with a block-serial (partially parallel) architecture based on a layered decoding scheme. A DFU based on sign-magnitude arithmetic is adopted to minimize hardware area. The hardware design is optimized by using the INMS decoding algorithm, whose performance is better than that of the min-sum algorithm.
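The check-node update of the normalized min-sum family, which INMS refines further, can be sketched as follows: each outgoing message is a scaled product of the signs and the minimum magnitude of the *other* incoming LLRs. The normalization factor 0.8 and the function name are illustrative assumptions.

```python
def min_sum_check_update(llrs, alpha=0.8):
    """Normalized min-sum check-node update sketch: for each edge i, the
    outgoing message is alpha times the product of the signs and the
    minimum magnitude over all incoming LLRs except llrs[i]."""
    out = []
    for i in range(len(llrs)):
        others = llrs[:i] + llrs[i + 1:]
        sign = 1.0
        for v in others:
            sign *= 1.0 if v >= 0 else -1.0
        out.append(alpha * sign * min(abs(v) for v in others))
    return out

messages = min_sum_check_update([2.0, -3.0, 0.5])
```

Only sign logic and a minimum are needed, which is why min-sum variants map onto the sign-magnitude arithmetic the paper's DFU uses.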

Non-Orthogonal Multiple Access (NOMA) to Enhance Capacity in 5G

  • Lim, Sungmook;Ko, Kyunbyoung
    • International Journal of Contents
    • /
    • v.11 no.4
    • /
    • pp.38-43
    • /
    • 2015
  • Non-orthogonal multiple access (NOMA), in which all users share the entire time and frequency resource, has attracted attention as one of the key technologies for enhancing spectral efficiency and total throughput. Nevertheless, as the number of users and the SIC error increase, the inter-user interference and the residual interference due to the SIC error also increase, degrading performance. To mitigate this degradation, we propose a grouping-based NOMA system. In the proposed scheme, all users are divided into two groups based on the distance between the BS and each user; one group uses the first half of the bandwidth and the other uses the rest, in an orthogonal manner, while users within each group share their spectrum non-orthogonally. Grouping users reduces both the inter-user interference and the residual interference due to the SIC error, so the scheme can outperform the conventional NOMA system, especially when the number of users and the SIC error increase. Based on this, we also present a hybrid operation of the conventional and proposed NOMA systems. In the numerical results, the total throughput of the proposed NOMA system is compared with that of the conventional NOMA system with respect to the number of users and the SIC error. It is confirmed that the proposed system outperforms the conventional one as the number of users and the SIC error increase.
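For a single two-user downlink pair, the NOMA rates under SIC can be sketched as below. The SIC is assumed perfect here (the imperfect SIC that motivates the paper's grouping is not modeled), and the power split, channel gains, and function name are illustrative assumptions.

```python
import math

def noma_pair_rates(p_near, p_far, g_near, g_far, noise=1.0):
    """Two-user downlink NOMA sketch: the far user decodes its signal while
    treating the near user's signal as interference; the near user cancels
    the far user's signal via SIC (assumed perfect here) and then decodes
    interference-free. Returns achievable rates in bit/s/Hz."""
    r_far = math.log2(1.0 + p_far * g_far / (p_near * g_far + noise))
    r_near = math.log2(1.0 + p_near * g_near / noise)
    return r_near, r_far

# More power to the far (weak) user, as in power-domain NOMA
r_near, r_far = noma_pair_rates(p_near=0.2, p_far=0.8, g_near=10.0, g_far=1.0)
```

Adding a residual-interference term to the near user's denominator would model imperfect SIC, which is exactly the degradation the grouping scheme is designed to limit.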

High Performance CNC Control Using a New Discrete-Time Variable Structure Control Method (새로운 이산시간 가변구조 제어방법을 이용한 CNC의 고성능 제어)

  • Oh, Seung-Hyun;Kim, Jung-ho;Cho, Dong-il
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.6 no.12
    • /
    • pp.1053-1060
    • /
    • 2000
  • In this paper, a discrete-time variable structure control method using a recursively defined switching function and a decoupled variable structure disturbance compensator is used to achieve high-performance circular motion control of a CNC machining center. The discrete-time variable structure control with the decoupled disturbance compensator developed in this paper uses a recursive switching function defined as the sum of the current tracking error vector and the previous value of the switching function multiplied by a positive constant less than one. This recursive switching function provides much improved performance compared to a method that uses a switching function defined only as a linear combination of the current tracking error. The enhancements in tracking performance are demonstrated in circular motion control using a CNC milling machine.
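The recursively defined switching function described above, s[k] = e[k] + λ·s[k−1] with 0 < λ < 1 and s[−1] = 0, can be sketched for a scalar error sequence as follows (λ = 0.5 and the names are illustrative choices):

```python
def recursive_switching(errors, lam=0.5):
    """Recursive switching function sketch: s[k] = e[k] + lam * s[k-1]
    with 0 < lam < 1 and s[-1] = 0, applied to a scalar tracking-error
    sequence. Returns the sequence of switching-function values."""
    s, out = 0.0, []
    for e in errors:
        s = e + lam * s
        out.append(s)
    return out
```

Because λ < 1, the recursion accumulates a geometrically discounted history of the tracking error, which is what distinguishes it from a switching function built from the current error alone.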

Elemental Image Generation Method with the Correction of Mismatch Error by Sub-pixel Sampling between Lens and Pixel in Integral Imaging

  • Kim, Jonghyun;Jung, Jae-Hyun;Hong, Jisoo;Yeom, Jiwoon;Lee, Byoungho
    • Journal of the Optical Society of Korea
    • /
    • v.16 no.1
    • /
    • pp.29-35
    • /
    • 2012
  • We propose a subpixel scale elemental image generation method to correct the errors created by finite display pixel size in integral imaging. In this paper, two errors are mainly discussed: pickup-and-display mismatch error and mismatch error between pixel pitch and lens pitch. The proposed method considers the relative positions between lenses and pixels in subpixel scale. Our proposed pickup method calculates the position parameters, generates an elemental image with pixels completely inside the lens, and generates an elemental image with border pixels using a weighted sum method. Appropriate experiments are presented to verify the validity of the proposed method.
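The border-pixel handling can be sketched as an area-weighted blend of the views under the two lenses a pixel straddles. The one-dimensional geometry helper and all parameter values below are illustrative assumptions, not the paper's exact position-parameter calculation.

```python
def lens_coverage(pixel_left, pixel_pitch, lens_pitch):
    """Fraction of a display pixel lying under the lens that contains the
    pixel's left edge, for a given lens-pitch/pixel-pitch mismatch
    (illustrative one-dimensional geometry)."""
    boundary = (pixel_left // lens_pitch + 1.0) * lens_pitch
    return min(1.0, (boundary - pixel_left) / pixel_pitch)

def border_pixel(view_under_a, view_under_b, coverage_a):
    """Weighted sum for a pixel straddling two lenses: blend the values the
    pixel would take under each lens by its fractional coverage."""
    return coverage_a * view_under_a + (1.0 - coverage_a) * view_under_b

# A pixel starting at 9.5 with pitch 1.0, under lenses of pitch 2.5,
# is split evenly between two adjacent lenses
w = lens_coverage(9.5, 1.0, 2.5)
value = border_pixel(100.0, 50.0, w)
```

Pixels lying entirely inside one lens get coverage 1.0 and keep a single view, matching the paper's split between interior pixels and weighted-sum border pixels.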

Performance Improvement of Speech Recognition Based on SPLICE in Noisy Environments (SPLICE 방법에 기반한 잡음 환경에서의 음성 인식 성능 향상)

  • Kim, Jong-Hyeon;Song, Hwa-Jeon;Lee, Jong-Seok;Kim, Hyung-Soon
    • MALSORI
    • /
    • no.53
    • /
    • pp.103-118
    • /
    • 2005
  • The performance of a speech recognition system is degraded by mismatch between training and test environments. Recently, Stereo-based Piecewise LInear Compensation for Environments (SPLICE) was introduced to overcome environmental mismatch using stereo data. In this paper, we propose several methods to improve the conventional SPLICE and evaluate them on the Aurora2 task. We generalize SPLICE to compensate for the covariance matrix as well as the mean vector in the feature space, yielding an error rate reduction of 48.93%. We also employ a weighted sum of correction vectors using the posterior probabilities of all Gaussians, achieving an error rate reduction of 48.62%. With the combination of these two methods, the error rate is reduced by 49.61% relative to the Aurora2 baseline system.
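The weighted sum of correction vectors can be sketched for a scalar feature as follows: posterior probabilities from a small GMM (equal priors assumed) weight per-Gaussian corrections that are added to the noisy feature. All parameter values and names are illustrative, not quantities trained on Aurora2 stereo data.

```python
import math

def splice_correction(y, means, variances, corrections):
    """SPLICE-style sketch for a scalar feature: the enhanced feature is y
    plus a weighted sum of per-Gaussian corrections, weighted by posterior
    probabilities p(k | y) from a GMM with equal priors."""
    likes = [math.exp(-0.5 * (y - m) ** 2 / v) / math.sqrt(2.0 * math.pi * v)
             for m, v in zip(means, variances)]
    total = sum(likes)
    posteriors = [l / total for l in likes]
    return y + sum(p * r for p, r in zip(posteriors, corrections))
```

A feature equidistant from two symmetric Gaussians with opposite corrections is left unchanged, while a feature near one Gaussian receives essentially that Gaussian's full correction; this soft weighting is what distinguishes the posterior-weighted variant from picking the single best Gaussian.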
