• Title/Summary/Keyword: information theoretic learning (ITL)

Search Results: 8

Maximization of Zero-Error Probability for Adaptive Channel Equalization

  • Kim, Nam-Yong; Jeong, Kyu-Hwa; Yang, Liuqing
    • Journal of Communications and Networks / v.12 no.5 / pp.459-465 / 2010
  • A new blind equalization algorithm is proposed that maximizes the probability that the constant modulus errors concentrate near zero. Its cost function maximizes the probability that the equalizer output power equals the constant modulus of the transmitted symbols. Two blind information-theoretic learning (ITL) algorithms based on constant modulus error signals are also introduced: one that minimizes the Euclidean distance between probability density functions and one that minimizes the constant modulus error entropy. The relations between the algorithms and their characteristics are investigated, and their performance is compared and analyzed through simulations in multi-path channel environments. The proposed algorithm has lower computational complexity and faster convergence than the other ITL algorithms based on a constant modulus error. Its error samples exhibit more concentrated density functions and superior error-rate performance in severe multi-path channel environments compared with the other algorithms.
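
The zero-error-probability idea can be made concrete with a short sketch. The snippet below is a hedged illustration, not the authors' implementation: it assumes a real-valued linear equalizer, an unnormalized Gaussian kernel (the constant is folded into the step size), and hypothetical names (`mzep_cme_update`, `R2` for the squared constant modulus).

```python
import numpy as np

def mzep_cme_update(w, X, mu=0.01, sigma=1.0, R2=1.0):
    """Block update of a linear blind equalizer under a zero-error-probability
    criterion on the constant modulus error (real-valued sketch).

    X : (N, L) array whose rows are the most recent equalizer input vectors.
    The criterion maximizes a Parzen estimate of the CM-error density at zero,
        f_E(0) ~ (1/N) * sum_i G_sigma(e_i),  with  e_i = y_i**2 - R2,
    where the kernel normalization constant is folded into the step size mu.
    """
    y = X @ w                               # equalizer outputs over the block
    e = y ** 2 - R2                         # constant modulus errors
    g = np.exp(-e ** 2 / (2 * sigma ** 2))  # Gaussian kernel values G_sigma(e_i)
    # d f_E(0) / d w = -(2 / (N * sigma^2)) * sum_i g_i * e_i * y_i * x_i
    grad = -(2.0 / (len(e) * sigma ** 2)) * (X.T @ (g * e * y))
    return w + mu * grad                    # gradient ascent: concentrate the error density at zero
```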

A Study on Kernel Size Adaptation for Correntropy-based Learning Algorithms (코렌트로피 기반 학습 알고리듬의 커널 사이즈에 관한 연구)

  • Kim, Namyong
    • Journal of the Korea Academia-Industrial cooperation Society / v.22 no.2 / pp.714-720 / 2021
  • ITL (information theoretic learning) based on kernel density estimation has been applied successfully to machine learning and signal processing, but it is severely sensitive to the choice of kernel size. For the maximization of correntropy criterion (MCC), one of the ITL-type criteria, several methods have been studied that adapt the kernel size remaining in the cost gradient after one kernel-size term is removed. In this paper, it is shown that the main cause of the sensitivity to the kernel size derives from that removed term, and that adaptively adjusting the kernel size in the remaining terms drives it toward the absolute value of the error, which prevents the weight update from continuing. It is therefore proposed to choose an appropriate constant as the kernel size for the remaining terms. Experiments show that, at the same convergence rate, the proposed method improves steady-state MSE by about 2 dB over the conventional algorithm; in an experiment with channel models it improves performance by about 4 dB, so it is better suited to more complex or adverse conditions.
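
For context, the stochastic-gradient form of MCC adaptation that such kernel-size studies build on can be sketched as follows. This is a generic, hedged illustration (the name `mcc_update` is hypothetical, and constant factors are folded into the step size), not the paper's exact algorithm.

```python
import numpy as np

def mcc_update(w, x, d, mu=0.05, sigma=2.0):
    """Single-sample update under the maximization of correntropy criterion (MCC).
    The Gaussian kernel G_sigma(e) weights an LMS-style correction, so large
    (possibly impulsive) errors are strongly attenuated.  The kernel size sigma
    is kept as a fixed constant, in line with the abstract's conclusion that a
    constant kernel size for the remaining terms works better than adapting it.
    The 1/sigma^2 factor of the exact gradient is folded into the step size mu.
    """
    e = d - np.dot(w, x)                          # instantaneous error
    kernel = np.exp(-e ** 2 / (2 * sigma ** 2))   # G_sigma(e), normalization constant dropped
    return w + mu * kernel * e * x                # kernel-weighted LMS-style correction
```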

A New Adaptive Kernel Estimation Method for Correntropy Equalizers (코렌트로피 이퀄라이져를 위한 새로운 커널 사이즈 적응 추정 방법)

  • Kim, Namyong
    • Journal of the Korea Academia-Industrial cooperation Society / v.22 no.3 / pp.627-632 / 2021
  • ITL (information-theoretic learning) has been applied successfully to adaptive signal processing and machine learning, but choosing the kernel size, which strongly affects system performance, remains difficult. The correntropy algorithm, one of the ITL methods, is robust to impulsive noise and compensates channel distortion well; on the other hand, it is also sensitive to the kernel size, which can destabilize the system. In this paper, noting that the kernel size appears cubed in the denominator of the cost-function slope, a new adaptive kernel estimation method for the correntropy algorithm is proposed that uses the rate of change of the error power with respect to the kernel size. Its performance was examined in a distortion-compensation experiment on an impulsive-noise, multipath-distorted channel. The proposed kernel-adjusted correntropy algorithm converges about twice as fast as the conventional algorithm with a fixed kernel size, and it converged properly for initial kernel sizes ranging from 2.0 to 6.0, so it tolerates a wide range of initial kernel sizes.
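
The "kernel size cubed" sensitivity mentioned here comes directly from differentiating the normalized Gaussian kernel; the hedged sketch below spells that out (the paper's own adaptation rule, based on the rate of change of error power, is not reproduced here).

```python
import numpy as np

def correntropy_gradient_term(e, x, sigma):
    """Illustration of the kernel-size sensitivity the abstract refers to.
    With the full Gaussian normalization,
        G_sigma(e) = exp(-e^2 / (2 sigma^2)) / (sqrt(2 pi) sigma),
    the slope of the correntropy cost with respect to the weights contains
    sigma cubed in the denominator:
        dG_sigma(e)/dw = e * exp(-e^2 / (2 sigma^2)) / (sqrt(2 pi) sigma^3) * x.
    Halving sigma scales the 1/sigma^3 factor by eight, which is the kind of
    instability that motivates adapting sigma from the error-power behaviour.
    """
    return (e * np.exp(-e ** 2 / (2 * sigma ** 2))
            / (np.sqrt(2 * np.pi) * sigma ** 3)) * x
```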

A Study on the Complex-Channel Blind Equalization Using ITL Algorithms

  • Kim, Nam-Yong
    • The Journal of Korean Institute of Communications and Information Sciences / v.35 no.8A / pp.760-767 / 2010
  • For complex-channel blind equalization, this study examines the performance and characteristics of two complex-valued blind information-theoretic learning (ITL) algorithms based on minimizing the Euclidean distance (ED) between probability density functions, in comparison with the constant modulus algorithm based on the mean squared error (MSE) criterion. Analysis of the complex-valued ED algorithm employing the constant modulus error and the complex-valued ED algorithm using a self-generated symbol set shows that the cost function of the latter forces the output signal onto the correct symbol values, compensating amplitude and phase distortion simultaneously without any separate phase-compensation process. Simulation results on MSE convergence and constellations for severely distorted complex channels show significantly better symbol-point concentration with no phase rotation.
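
As a rough picture of how a complex-valued ED criterion acts on amplitude and phase at once, consider the hedged sketch below (function names are hypothetical, normalization constants are dropped); the gradient of this cost with respect to the equalizer weights would drive the adaptation.

```python
import numpy as np

def ed_cost_complex(y, s, sigma=1.0):
    """Sketch of an ED-type cost between the PDF of complex equalizer outputs y
    and the PDF of a self-generated complex symbol set s, both viewed through
    Parzen/Gaussian kernels on the complex difference.  Because the kernel
    depends on |y_i - s_j| as a whole, matching the two densities pulls the
    outputs onto the symbol points in amplitude and phase simultaneously, which
    is why no separate phase-compensation stage is needed.  Kernel normalization
    constants (and the width doubling from convolving two kernels) are omitted.
    """
    def ip(a, b):  # cross information potential of two complex sample sets
        d2 = np.abs(a[:, None] - b[None, :]) ** 2
        return np.mean(np.exp(-d2 / (2 * sigma ** 2)))

    # ED(f_Y, f_S) ~ IP(Y, Y) - 2 * IP(Y, S) + IP(S, S)
    return ip(y, y) - 2 * ip(y, s) + ip(s, s)
```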

Step-size Normalization of Information Theoretic Learning Methods based on Random Symbols (랜덤 심볼에 기반한 정보이론적 학습법의 스텝 사이즈 정규화)

  • Kim, Namyong
    • Journal of Internet Computing and Services / v.21 no.2 / pp.49-55 / 2020
  • Information theoretic learning (ITL) methods based on random symbols (RS) use a set of random symbols generated according to a target distribution and are designed nonparametrically to minimize a cost function given by the Euclidean distance between the target distribution and the input distribution. One drawback of these methods is that, by using a constant step size for the algorithm update, they cannot exploit the input power statistics. In this paper, it is shown that, first, the information potential input (IPI) plays the role of the input in the part of the cost-function derivative related to the information potential output (IPO) and, second, the input itself plays that role in the part related to the information potential error (IPE). Based on these observations, it is proposed to normalize the step size by the statistically varying power of the two different inputs, the IPI and the input itself. In a communication environment with impulsive noise and multipath fading, the proposed algorithm achieves a mean squared error (MSE) about 4 dB lower and converges about twice as fast as the conventional methods without step-size normalization.
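
The normalization idea resembles how NLMS normalizes LMS by the input power; the hedged helper below sketches it (the name `normalized_step` is hypothetical, and the choice of which vector to normalize by follows the abstract's IPI/input distinction).

```python
import numpy as np

def normalized_step(mu, u, eps=1e-6):
    """Divide a fixed step size mu by the statistically varying power of whichever
    quantity plays the role of the input in a given gradient term: the
    information-potential input (IPI) for the IPO-related term, and the raw input
    vector for the IPE-related term.
    """
    return mu / (np.dot(u, np.conj(u)).real + eps)   # eps guards against division by zero
```

Each gradient term would then be scaled by, for example, `normalized_step(mu, ipi_vector)` or `normalized_step(mu, x)` before being added to the weights; `ipi_vector` is a hypothetical name for the block's information-potential input.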

Euclidian Distance Minimization of Probability Density Functions for Blind Equalization

  • Kim, Nam-Yong
    • Journal of Communications and Networks / v.12 no.5 / pp.399-405 / 2010
  • Blind equalization techniques have been used in broadcast and multipoint communications. In this paper, two criteria that minimize the Euclidean distance between two probability density functions (PDFs) are presented for adaptive blind equalizers; the PDFs are calculated with a Parzen window estimator. One criterion uses a set of randomly generated desired symbols at the receiver so that the PDF of the generated symbols matches that of the transmitted symbols. The second uses a set of Dirac delta functions in place of the PDF of the transmitted symbols. Simulation results show that the proposed methods significantly outperform the constant modulus algorithm in multipath channel environments.
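
A hedged sketch of the second criterion (delta functions in place of the transmitted-symbol PDF) for real-valued signals is given below; the function name is hypothetical and normalization constants are dropped, so it indicates the shape of the cost rather than the paper's exact formulation.

```python
import numpy as np

def ed_cost_delta(y, symbols, sigma=1.0):
    """ED-type cost in which the transmitted-symbol PDF is replaced by Dirac
    deltas at the constellation points, while the PDF of the equalizer outputs y
    is a Parzen estimate with Gaussian kernels.  Minimizing this distance pulls
    the output density onto the constellation.  The delta-delta term, which does
    not depend on the equalizer weights, is omitted.
    """
    gauss = lambda d2: np.exp(-d2 / (2 * sigma ** 2))
    ip_yy = np.mean(gauss((y[:, None] - y[None, :]) ** 2))        # output self information potential
    ip_yd = np.mean(gauss((y[:, None] - symbols[None, :]) ** 2))  # output-versus-delta cross term
    return ip_yy - 2 * ip_yd
```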

Blind Equalizer Algorithms using Random Symbols and Decision Feedback (랜덤 심볼열과 결정 궤환을 사용한 자력 등화 알고리듬)

  • Kim, Nam-Yong
    • Journal of the Korea Academia-Industrial cooperation Society / v.13 no.1 / pp.343-347 / 2012
  • Non-linear equalization techniques with a decision-feedback structure are in high demand for cancelling the intersymbol interference that arises in severe channel environments. In this paper, a decision-feedback structure is applied to the linear blind equalizer algorithm that is based on information theoretic learning and a randomly generated symbol set. At the decision feedback equalizer (DFE), the random symbols are generated to have the same probability density function (PDF) as that of the transmitted symbols. By minimizing the difference between the PDF of the blind DFE output and that of the randomly generated symbols, the proposed DFE algorithm produces the equalized output signal. Simulation results show that the proposed method achieves better convergence and error performance than its linear counterpart.
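
The decision-feedback structure itself can be sketched in a few lines; this is a generic, hedged illustration with hypothetical names (`w_ff`, `w_fb`, `past_decisions`), not the paper's implementation.

```python
import numpy as np

def dfe_output(w_ff, w_fb, x, past_decisions):
    """Decision-feedback equalizer output: the feed-forward filter works on the
    received samples, and the feedback filter subtracts the contribution of
    already-decided symbols, cancelling trailing intersymbol interference.
    """
    return np.dot(w_ff, x) - np.dot(w_fb, past_decisions)
```

Both filters would then be adapted by minimizing the Euclidean distance between the Parzen PDF of this output and that of the randomly generated symbols, in the spirit of the ED sketches above.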

A Study on the Minimum Error Entropy - related Criteria for Blind Equalization (블라인드 등화를 위한 최소 에러 엔트로피 성능기준들에 관한 연구)

  • Kim, Namyong; Kwon, Kihyun
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.2 no.3 / pp.87-95 / 2009
  • As information theoretic learning techniques, the minimum error entropy (MEE) criterion and the maximum cross-correntropy criterion (MCC) have been studied in depth for supervised learning. The MEE criterion leads to maximization of the information potential, and the MCC criterion leads to maximization of the cross-correlation between the output and input random processes. A weighted combination of these two criteria, minimization of error entropy with fiducial points (MEEF), has been introduced and developed by many researchers. As an approach to unsupervised, blind channel equalization, we investigate applying the constant modulus error (CME) to the MEE criterion and examine the problems of that method. We also study the application of CME to MEEF for blind equalization and find that MEE-CME loses the information of the constant modulus. As a result, MEE-CME and MEEF-CME fail to converge, or converge more slowly than other algorithms that depend on the constant modulus.
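
The information potential that the MEE criterion maximizes can be written down compactly; the sketch below is a hedged illustration with a hypothetical function name and the kernel normalization constant dropped.

```python
import numpy as np

def information_potential(e, sigma=1.0):
    """Parzen-window information potential of the error samples, i.e. the
    argument of Renyi's quadratic entropy:
        V(e) = (1/N^2) * sum_i sum_j G_{sigma*sqrt(2)}(e_i - e_j).
    Maximizing V is equivalent to minimizing the quadratic error entropy
    H_2 = -log V; substituting constant modulus errors for e gives the MEE-CME
    variant discussed in the abstract.
    """
    d2 = (e[:, None] - e[None, :]) ** 2
    return np.mean(np.exp(-d2 / (4 * sigma ** 2)))   # pairwise kernels of width sigma*sqrt(2)
```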
